Compare commits

468 Commits

Author SHA1 Message Date
Scott Rifenbark
73cc31c11a documentation: Updated title page notes
Updated the notes to help the user be sure they have the
right set of documents for the matching YP release.

(From yocto-docs rev: 8e112affb406731ac98f3c2e08542c5049232ff1)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-12-10 20:43:10 +00:00
Scott Rifenbark
444dc2e99b bitbake: bitbake-user-manual: Fixed porno hack for hello world example
Someone hacked the http://hambedded site, or it was moved, and some
links to that site in the BB manual had been hijacked to point to
an entry portal for a pornography site. Replaced the links with an
archived version that restores their integrity.

(Bitbake rev: d0a4652fec6d3968b65b4a2776948a7b9e19407e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-01-17 22:32:30 +00:00
Richard Purdie
bddb60b101 local.conf.sample: Weakly set BB_DISKMON_DIRS
For various reasons we need to be able to set and override this from
auto.conf on our test infrastructure. We have tried forcing the
variable, but that then breaks other selftests. In the interest of not
complicating things further or needing to modify the tests across
releases, weaken the default assignment.
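
As an illustration (the values here are hypothetical, not the actual
local.conf.sample contents), a weak default in BitBake conf syntax uses
the ??= operator, which auto.conf can then override with a normal
assignment:

  # weak default: any plain assignment elsewhere wins
  BB_DISKMON_DIRS ??= "STOPTASKS,${TMPDIR},1G,100K STOPTASKS,${DL_DIR},1G,100K"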

(From meta-yocto rev: 5eea5239b3172b147bdef8023d1c5a8981d18f7e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-01-06 10:07:45 +00:00
Andre Rosa
1083d90888 bitbake: Replace deprecated git branch parameter "--set-upstream"
Since 2017-08-17 (git version 2.14.1.473.g3ec7d702a), using the
deprecated git branch parameter "--set-upstream" causes a fetcher
error. Replace it with "--set-upstream-to".

https://git.kernel.org/pub/scm/git/git.git/commit/?id=52668846ea2d41ffbd87cda7cb8e492dea9f2c4d
says it has been deprecated since 2012-08-30, so hopefully all
still-supported host distributions have a new enough git to support
"--set-upstream-to".

ERROR: PACKAGE do_unpack: Fetcher failure: ...;
git -c core.fsyncobjectfiles=0 branch --set-upstream master origin/master failed with exit code 128, output:
fatal: the '--set-upstream' option is no longer supported. Please use '--track' or '--set-upstream-to' instead.

ERROR: PACKAGE do_unpack: Function failed: base_do_unpack
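
For reference, the deprecated invocation and its replacement look like
this (branch names are illustrative):

  # old form, rejected by recent git:
  git branch --set-upstream master origin/master
  # new form:
  git branch --set-upstream-to=origin/master master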

(Bitbake rev: 62a53e9dbb6dc7489e44c32340b0caddd4596f0a)

Signed-off-by: Andre Rosa <andre.rosa@lge.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2ab50074c1a6c56a8a178755de108447d7b7acaf)
Signed-off-by: Javier Viguera <javier.viguera@digi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-11-07 13:40:32 +00:00
Ross Burton
54e3f82bd7 wpa_supplicant: fix WPA2 key replay security bug
WPA2 is vulnerable to replay attacks which result in unauthenticated users
having access to the network.

* CVE-2017-13077: reinstallation of the pairwise key in the Four-way handshake

* CVE-2017-13078: reinstallation of the group key in the Four-way handshake

* CVE-2017-13079: reinstallation of the integrity group key in the Four-way
handshake

* CVE-2017-13080: reinstallation of the group key in the Group Key handshake

* CVE-2017-13081: reinstallation of the integrity group key in the Group Key
handshake

* CVE-2017-13082: accepting a retransmitted Fast BSS Transition Reassociation
Request and reinstalling the pairwise key while processing it

* CVE-2017-13086: reinstallation of the Tunneled Direct-Link Setup (TDLS)
PeerKey (TPK) key in the TDLS handshake

* CVE-2017-13087: reinstallation of the group key (GTK) when processing a
Wireless Network Management (WNM) Sleep Mode Response frame

* CVE-2017-13088: reinstallation of the integrity group key (IGTK) when
processing a Wireless Network Management (WNM) Sleep Mode Response frame

Backport patches from upstream to resolve these CVEs.

(From OE-Core rev: bfa04fa71c47e8fe9528208848cfcec2e232777d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-10-16 23:47:12 +01:00
Daniel Lublin
426bc4c357 bitbake: lib/bs4: Fix imports from html5lib >= 0.9999999/1.0b8
As of html5lib 0.9999999/1.0b8 (released on July 14, 2016), some modules
have moved from _base to base. Handle this, while staying compatible
with earlier versions.

(Bitbake rev: a37d0f0247c9174fec124789b7a07c792193d909)

Signed-off-by: Daniel Lublin <daniel@lublin.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-09-04 17:51:16 +01:00
Ross Burton
3ca9f90dff libgcrypt: fix CVE-2017-9526
In libgcrypt before 1.7.7, an attacker who learns the EdDSA session key (from
side-channel observation during the signing process) can easily recover the
long-term secret key. 1.7.7 makes a cipher/ecc-eddsa.c change to store this
session key in secure memory, to ensure that constant-time point operations are
used in the MPI library.

(From OE-Core rev: fb28c54347fcf4957b9b8ee7dee423d859eb7820)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-07-19 15:13:47 +01:00
Ross Burton
ccc964cf9f libgcrypt: fix CVE-2017-7526
Fixes CVE-2017-7526, 'flush+reload side-channel attack on RSA secret keys dubbed
"Sliding right into disaster"'.

(From OE-Core rev: 1a713fb654a31a6dd218dc1b5b810e2b380ecbb1)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-07-19 15:13:46 +01:00
California Sullivan
50fdd78423 initrdscripts/init-install*: Add rootwait when installing to USB devices
It can take a while for USB devices to be detected, so if a USB device
holds your rootfs and you don't set rootwait, you will most likely get
a kernel panic. Fix this by adding rootwait to the kernel command line
on installation.

Fixes [YOCTO #9462].
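
As a hypothetical illustration, the installed kernel command line
simply gains the extra parameter:

  # rootwait makes the kernel wait for the (slow) USB root device
  root=/dev/sda2 rw rootwait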

(From OE-Core rev: 7f26cee3d8e4b2e9240b30c21be9fa7661186ccd)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-27 23:41:01 +01:00
Richard Purdie
3cf0e09348 bitbake: siggen: Make calc_taskhash match get_taskhash for file checksums
The code in these two functions is meant to be equivalent in behaviour
but isn't. Add code to ensure files that don't exist are handled
consistently by both functions; users reported being able to generate
tracebacks otherwise.

(Bitbake rev: 51e913e178a02bb603ddf874669e3ce54f90bd5d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-27 13:46:39 +01:00
Richard Purdie
4515fc9529 package_ipk: Clean up Source entry in ipk packages
There is the potential for sensitive information to leak through the
URLs there, and removing it brings this into line with the behavior of
the other package backends, since filtering it is likely error-prone.

Since ipks don't appear to be generated at all if we don't set this, set
the field to the recipe name used (basename only, no paths). This avoids
information leaking. We may want to drop the field if opkg can allow that
at a future point but the recipe name is a suitable identifier for now.

Reported-by: Andrej Valek <andrej.valek@siemens.com>
(From OE-Core rev: 1aa51cfb4b8d10f478b1a6a68c69a3e35342b1c0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-16 10:21:12 +01:00
Scott Rifenbark
628aea354d documentation: Updated all manual revision tables to June, 2017
The release was pushed from May to June for 2.1.3 (krogoth). Updated
all manual revision tables.

(From yocto-docs rev: 5ec75c194147fecf0bda8095e430cdd8e6f34b6b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-14 10:19:50 +01:00
Ross Burton
3565a9697f oeqa/selftest/recipetool: actually fix create_github test
The Meson revision was locked down but the license list change wasn't actually
committed...

Also specify the exact path for recipetool to write to, for clarity.

(From OE-Core rev: cbd6a2de4d8bda44f1d53956acc49a4bef810e95)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-07 15:59:03 +01:00
Richard Purdie
fe7fb00221 build-appliance-image: Update to krogoth head revision
(From OE-Core rev: 2a1e8e2c9ff2caa6c207d8fe0d517e472715d1d1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-07 08:42:54 +01:00
Alexander Kanavin
7241042b70 grub2: enforce -no-pie if supported by compiler
Recent distros are enabling -pie by default; in the case of grub we
need to turn it off.

(From OE-Core rev: aaff6c99dde3f1058bb3c4b320f27753c6c992ad)

(From OE-Core rev: 720ac6e2b46d4d78244033a2474a2716a7a08b03)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-07 08:40:06 +01:00
Richard Purdie
546c0cffca build-appliance-image: Update to krogoth head revision
(From OE-Core rev: 03487ba4d5eb12e826998c76c6f350672853550f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-06 18:52:39 +01:00
Richard Purdie
224e04d6ce poky: Update version to 2.1.3
(From meta-yocto rev: 536c72e8f05c45f910b01856b1a74b0c7a756924)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-06 18:52:32 +01:00
Saul Wold
172105c1ef rootfs_rpm: Increase rootfs size
This doubles the amount of extra space provided for SMART and RPM, as
they consume more disk space during QA testing via testimage.

[YOCTO #9800]

(From OE-Core rev: 2d636068d9d3a1ea2db3ace49462be13ba9ef125)

(From OE-Core rev: 1d35417502aa8bce9d65d15f29d9d7bee077b7cc)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-06 18:28:39 +01:00
Ross Burton
0fa93e1412 oeqa/selftest: lock down Meson git revision for reliability
The test_recipetool_create_github test fetches HEAD of the repository so
upstream changes can (and do) break the test.  Avoid these problems by passing
the rev= argument in the URL to lock the checkout to the same version that is
fetched in the github_tarball test.

Also pass the commands to runCmd() as a list instead of a string; the
semicolon in the URL needs extra quoting if the shell is involved, and
passing a list bypasses the shell entirely.

(From OE-Core rev: b7a26dbca4d92b36aeb8b183e679701b5706adb0)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-06 12:29:08 +01:00
Ross Burton
d54e1f4ff5 oeqa/runtime/rpm: use su instead of sudo
This test works fine with su, which is more likely to be installed in images
than sudo.

(From OE-Core rev: 59d10be745a1f7d31c68e4d5da9e1c3461b7d390)

(From OE-Core rev: 0c35ac4b1b78a0b1be8e50ced5502c1bf9d31774)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-06 12:25:57 +01:00
Richard Purdie
b24988bec7 libunwind: Fix build race conflict with gcc and musl
Building libunwind, then gcc-runtime causes build failures. This is hard
to fix since gcc-runtime wants the internal gcc unwind.h header but libunwind
wants to provide this. There are differences in include behaviour between gcc
and glibc which are by design.

This patch hacks around the issue by looking for a define used during gcc-runtime's
build and skipping to the internal header in that case. The patch is only enabled
on musl and is the best workaround I could come up with to unblock failing builds
on our autobuilder.

[YOCTO #10129]

(From OE-Core rev: 793b6e57d7cf4a093223b4cd34085a929a5c43c3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-06-05 23:53:54 +01:00
Richard Purdie
a220e2ca34 selftest/recipetool: Fix test for krogoth
This test was backported and doesn't function quite the same way under
krogoth since some of the extended python license checking wasn't yet
added. This tweaks the output to match the expected result in krogoth.

(From OE-Core rev: fcb2fcae57df403f1fff4b9ddb6b2d52e41aea33)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-30 15:07:23 +01:00
Alexander Kanavin
3ac7c847e8 webkitgtk: fix racy double build of WebKit2-4.0.gir
This occasionally triggered autobuilder errors where the .gir file
appeared truncated to introspection tools.

(From OE-Core rev: 2154c1c803b7bd36a1401fa657e7fd8cb1060a70)

RP: backported from 2.12 to 2.10
(From OE-Core rev: cf06e8aa07c8b60a377b4716be5c72311be12f1c)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-28 01:02:19 +01:00
Chang Rebecca Swee Fun
80b35ed1a2 cryptodev-linux: update SRC_URI
The Gna! project announced that the gna.org download site will soon be
closing down. We have verified that the site is no longer accessible
without a network proxy cache. We need to update SRC_URI to point to a
new alternative (the nwl.cc HTTP server) in order to avoid fetcher
issues in the future.

[YOCTO #11575]

(From OE-Core rev: 0314442ec4cb280fd8ad2f9deb9b3ec8842f8c2a)

Signed-off-by: Chang Rebecca Swee Fun <rebecca.swee.fun.chang@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-27 14:52:40 +01:00
Richard Purdie
7b9e031355 pseudo: Work around issues with glibc 2.24
There are issues with a change made to RTLD_NEXT behaviour in glibc 2.24
and that change was also backported to older glibc versions in some distros
like Fedora 23. This adds a workaround whilst the pseudo maintainer fixes
various issues properly.

(From OE-Core rev: 21c38a091c4a1917f62a942c4751b0fd11dce340)

(From OE-Core rev: 47f5c2a52f93e1984b0269c708ca5218b9fd41ec)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Christopher Larson
cb5649cbb8 pseudo: obey our LDFLAGS
(From OE-Core rev: fc04eae73cb99d3783b09d062120a9b7dc95210a)

(From OE-Core rev: 92214ca9e14d5dda1dd3e958944e96003ef77422)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Patrick Ohly
dd20601980 openssl.inc: avoid random ptest failures
"make alltests" is sensitive to the timestamps of the installed
files. Depending on the order in which cp copies files, .o and/or
executables may end up with time stamps older than the source files.
Running tests then triggers recompilation attempts, which typically
will fail because dev tools and files are not installed.

"cp -a" is not enough because the files also have to be newer than
the installed header files. Setting the file time stamps to
the current time explicitly after copying solves the problem because
do_install_ptest_base is guaranteed to run after do_install.
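
A minimal shell sketch of the approach (the paths are illustrative,
not the actual recipe code):

  cp -a "${S}/test" "${D}${PTEST_PATH}/"
  # bump mtimes so the copies are newer than the installed headers
  find "${D}${PTEST_PATH}" -type f -exec touch {} +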

(From OE-Core rev: 101e2a5e0b7822ca3de3d3a73369405c05ab3c5b)

(From OE-Core rev: b309bfa265456cda7269ff67e9df5f5c05a9a5a5)

Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Robert Yang
d3c0a560a8 openssl: fix do_configure error when cwd is not in @INC
Fixes the following error when building on Debian testing:
| Can't locate find.pl in @INC (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.2 /usr/local/share/perl/5.22.2 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at perlpath.pl line 7.

(From OE-Core rev: c28065671b582c140d5971c73791d2ac8bdebe69)

(From OE-Core rev: d0500320747608783b41f0035bf962b877a6a1c0)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

fixed merge conflict
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
62685cbff5 openssl: Security fix CVE-2016-2177
Affects openssl <= 1.0.2h
CVSS v2 Base Score: 7.5 HIGH

(From OE-Core rev: 2848c7d3e454cbc84cba9183f23ccdf3e9200ec9)

(From OE-Core rev: 217d245bdb7b19f92fa5f6f93c371094353d6da6)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

fixed merge conflicts
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Joshua Lock
3d9f6dc163 openssl: prevent warnings from openssl-c_rehash.sh
The openssl-c_rehash.sh script reports duplicate files and files which
don't contain a certificate or CRL by echoing a WARNING to stdout.
This warning gets picked up by the log checker during rootfs
construction and results in several warnings being reported to the
console during an image build.

To prevent the log from being overrun by warnings related to certificates
change these messages in openssl-c_rehash.sh to be prefixed with NOTE not
WARNING.

(From OE-Core rev: 88c25318db9f8091719b317bacd636b03d50a411)

(From OE-Core rev: c270ebf9235c5414de1bf80ff40253f5a98dca2a)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Otavio Salvador
8aea6ad597 openssl: Ensure SSL certificates are stored on sysconfdir
Debian and other generic distributions have moved the certificates to
sysconfdir (/etc/ssl) and made the libdir content a link to it.

This provides several advantages, especially for read-only
rootfs. Another benefit is that it ensures foreign implementations
(e.g. BoringSSL, from Chromium, when running with the OpenSSL backend
for the certificates) find the content correctly.

(From OE-Core rev: 50d63fa346bbb05dafffc0cb55e21e1092272d95)

(From OE-Core rev: 735f4528b5046024f118658cda8ee340ff8aa082)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Otavio Salvador
051883f877 openssl: Add Shell-Script based c_rehash utility
The PLD Linux distribution has ported the c_rehash[1] utility from Perl
to shell script, allowing it to be shipped by default.

1. https://git.pld-linux.org/?p=packages/openssl.git;a=blob;f=openssl-c_rehash.sh;h=0ea22637ee6dbce845a9e2caf62540aaaf5d0761

The OpenSSL upstream intends[2] to convert the utility to C but has
not yet finished the conversion.

2. https://rt.openssl.org/Ticket/Display.html?id=2324

This patch adds the script and thus removes the Perl requirement for
it.

(From OE-Core rev: cb6150f1a779e356f120d5e45c91fda75789970a)

(From OE-Core rev: 9ae6e105bb689faf004f60bb4f9f0ea56e3b8fde)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Andrej Valek
0c78f81485 openssl: fix add missing dependencies building for test directory
Regarding the last commit about missing dependencies, another issue
was found. The problem appeared while ptest was being built with some
extra settings set: when ptest is going to be built, it is necessary
to rebuild the dependencies for the test directory too.

(From OE-Core rev: 030142d0410bec85aeacfff6be27d5fed41ce808)

(From OE-Core rev: 28419a4e9ad9430e477c1eb7f2a2d1f328bcacaf)

Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Andrej Valek
98f3e83884 openssl: fix add missing make depend command before make library
Settings from EXTRA_OECONF, like enabling/disabling no-ssl3, are
transferred only into DEPFLAGS, which means they have no effect on the
output files. DEPFLAGS is transferred into the output files by the
make depend command.

https://wiki.openssl.org/index.php/Compilation_and_Installation#Dependencies
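
A sketch of the build sequence this implies (illustrative, not the
exact recipe change):

  make depend   # propagates DEPFLAGS (e.g. no-ssl3) into the build
  make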

(From OE-Core rev: e3c251427a305780d3257a011260bd978de273d5)

(From OE-Core rev: 11c388226399ec703f4f67ae7cf11c1e4e332710)

Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Zubair Lutfullah Kakakhel
819f7c3d03 openssl: Fix MIPS64be and add MIPS64le
The MIPS64 target was being configured for linux-mips, which defaults
to MIPS32. This doesn't cause any issue as far as I can see, but it
would be wiser to use the correct target configuration.

Also add the missing MIPS64le configuration.

(From OE-Core rev: 0afec72913bc31d315cba079da317e8b28755ded)

(From OE-Core rev: e2b2fbe05fe97a512265d9978011650415e1589a)

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
4245995f76 mesa: update SRC_URI
ERROR: mesa-2_11.1.2-r0 do_checkuri: Function failed: Fetcher failure for URL: 'ftp://ftp.freedesktop.org/pub/mesa/11.1.2/mesa-11.1.2.tar.xz'. URL ftp://ftp.freedesktop.org/pub/mesa/11.1.2/mesa-11.1.2.tar.xz doesn't work
ERROR: Logfile of failure stored in: /home/akuster/oss/maint/poky/build/tmp/work/i586-poky-linux/mesa/2_11.1.2-r0/temp/log.do_checkuri.30779
Log data follows:
| DEBUG: Executing python function do_checkuri
| DEBUG: Testing URL ftp://ftp.freedesktop.org/pub/mesa/11.1.2/mesa-11.1.2.tar.xz
| DEBUG: checkstatus() urlopen failed: <urlopen error ftp error: 550 Failed to change directory.>
| DEBUG: Python function do_checkuri finished
| ERROR: Function failed: Fetcher failure for URL: 'ftp://ftp.freedesktop.org/pub/mesa/11.1.2/mesa-11.1.2.tar.xz'. URL ftp://ftp.freedesktop.org/pub/mesa/11.1.2/mesa-11.1.2.tar.xz doesn't work

(From OE-Core rev: 97d9fffca3bddaa9c72acd674b5329b72179f30f)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
0e8fcf8c9c libpng -lsb: update SRC_URI
ERROR: libpng12-1.2.56-r0 do_checkuri: Function failed: Fetcher failure for URL: 'http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz'. URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz doesn't work
ERROR: Logfile of failure stored in: /home/akuster/oss/maint/poky/build/tmp/work/i586-poky-linux/libpng12/1.2.56-r0/temp/log.do_checkuri.19750
Log data follows:
| DEBUG: Executing python function do_checkuri
| DEBUG: Testing URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz
| DEBUG: checkstatus() urlopen failed: HTTP Error 404: Not Found
| DEBUG: Python function do_checkuri finished
| ERROR: Function failed: Fetcher failure for URL: 'http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz'. URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz doesn't work

(From OE-Core rev: e9244796af33d41ad8ee652f0276c427228948b6)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
13f0eee08d libxslt: update SRC_URI
| ERROR: Function failed: Fetcher failure for URL: 'ftp://xmlsoft.org/libxslt/libxslt-1.1.28.tar.gz'. URL ftp://xmlsoft.org/libxslt/libxslt-1.1.28.tar.gz doesn't work
ERROR: Logfile of failure stored in: /home/akuster/oss/maint/poky/build/tmp/work/x86_64-linux/libxslt-native/1.1.28-r0/temp/log.do_checkuri.16102
Log data follows:
| DEBUG: Executing python function do_checkuri
| DEBUG: Testing URL ftp://xmlsoft.org/libxslt/libxslt-1.1.28.tar.gz
| DEBUG: checkstatus() urlopen failed: <urlopen error ftp error: [Errno 110] Connection timed out>
| DEBUG: Python function do_checkuri finished
| ERROR: Function failed: Fetcher failure for URL: 'ftp://xmlsoft.org/libxslt/libxslt-1.1.28.tar.gz'. URL ftp://xmlsoft.org/libxslt/libxslt-1.1.28.tar.gz doesn't work

(From OE-Core rev: 251e4ed97d837d4420484a718271655589509cae)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
6eb266a365 libpng: update SRC_URI back to SF
ERROR: Task 944 (virtual:nativesdk:/home/akuster/oss/maint/poky/meta/recipes-multimedia/libpng/libpng_1.6.21.bb, do_checkuri) failed with exit code '1'
ERROR: libpng12-1.2.56-r0 do_checkuri: Function failed: Fetcher failure for URL: 'http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz'. URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz doesn't work
ERROR: Logfile of failure stored in: /home/akuster/oss/maint/poky/build/tmp/work/i586-poky-linux/libpng12/1.2.56-r0/temp/log.do_checkuri.14781
Log data follows:
| DEBUG: Executing python function do_checkuri
| DEBUG: Testing URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz
| DEBUG: checkstatus() urlopen failed: HTTP Error 404: Not Found
| DEBUG: Python function do_checkuri finished
| ERROR: Function failed: Fetcher failure for URL: 'http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz'. URL http://distfiles.gentoo.org/distfiles/libpng-1.2.56.tar.xz doesn't work

SF now has an old releases dir which contains this tarball. It got
dropped from Gentoo.

(From OE-Core rev: 30722ea82dd8e90c33d607e1a8847dabf16b4225)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Armin Kuster
577eb635ab libpcre: update SRC_URI
ERROR: Task 75 (/home/akuster/oss/maint/poky/meta/recipes-support/libpcre/libpcre_8.38.bb, do_checkuri) failed with exit code '1'
ERROR: libpcre-native-8.38-r0 do_checkuri: Function failed: Fetcher failure for URL: 'ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.38.tar.bz2'. URL ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.38.tar.bz2 doesn't work

(From OE-Core rev: cf9f844100fa509829009f0167fc058a3f312393)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:22 +01:00
Joshua Lock
553d5f65e8 zlib: update SRC_URI to fix fetching
Upstream have removed the file from zlib.net as a new version has
been released; switch to fetching from the official SourceForge
mirror.

[YOCTO #10879]

(From OE-Core rev: bb99e4a620efd59556539c156cd98ea23aae74c8)

(From OE-Core rev: b7599330f1d629384e16a5fbeffc1a65c1555667)

(From OE-Core rev: d2522df5bf85875a896d3b7ddeb20b63af3f4470)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Ed Bartosh
47ef871649 populate_sdk_ext: whitelist do_package tasks
With SSTATE_MIRRORS enabled, the sstate code expects mirrors to
contain entries for all tasks, which is not the case for the ext
installer as it uses a reduced sstate cache.

Added do_package tasks to BB_SETSCENE_ENFORCE_WHITELIST to prevent the
installer failing with "ERROR: Sstate artifact unavailable".

[YOCTO #10832]
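
A sketch of the kind of entry described, in BitBake conf syntax (the
exact pattern is an assumption, not the verbatim change):

  BB_SETSCENE_ENFORCE_WHITELIST += "*:do_package"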

(From OE-Core rev: 2ed46ada4b8e496493835e84b36f7e9c367f59d2)

(From OE-Core rev: eb2fc2cd9081a4533ed30fe81c9f491b06cc5ae1)

(From OE-Core rev: 6549641a65b8e67ed46400921f89acf395f13a80)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Ed Bartosh
db0832ead6 populate_sdk_ext: fix working with uninative sstate
Mapped uninative sstate directories to make the ext SDK installer
use them when it is run on systems with a gcc version different from
the gcc version used to build the installer.

[YOCTO #10832]

(From OE-Core rev: fb945c0fd2e66d70461e6cf2e602020eeabe32f7)

(From OE-Core rev: 31ce79200035584c26576afe043688132532bc8b)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Mingli Yu
863bfa81af tiff: Security fix CVE-2016-9538
* tools/tiffcrop.c: fix read of undefined buffer in
readContigStripsIntoBuffer() due to uint16 overflow.

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9538

Patch from:
43c0b81a81 (diff-c8b4b355f9b5c06d585b23138e1c185f)

(From OE-Core rev: 9af5d5ea882c853e4cb15006f990d3814eeea9ae)

(From OE-Core rev: 33cad1173f6d1b803b794a2ec57fe8a9ef19fb44)

(From OE-Core rev: 5597998cf8b852bfe9b794d83314090a148bf78b)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Mingli Yu
014af27dcb tiff: Security fix CVE-2016-9535
* libtiff/tif_predict.h, libtiff/tif_predict.c:
Replace assertions by runtime checks to avoid assertions in debug mode,
or buffer overflows in release mode. Can happen when dealing with
unusual tile size like YCbCr with subsampling.

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9535

Patch from:
3ca657a879
6a984bf790

(From OE-Core rev: 61d3feb9cad9f61f6551b43f4f19bfa33cadd275)

(From OE-Core rev: d55b4470c20f4a4b73b1e6f148a45d94649dfdb5)

(From OE-Core rev: 3f22e42b981319b1aaa15871a90753060817c911)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Zhixiong Chi
ca4703b6cf tiff: Security fix CVE-2016-9539
tools/tiffcrop.c in libtiff 4.0.6 has an out-of-bounds read in
readContigTilesIntoBuffer(). Reported as MSVR 35092.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9539

Patch from:
ae9365db1b

(From OE-Core rev: 58bf0a237ca28459eb8c3afa030c0054f5bc1f16)

(From OE-Core rev: 0933a11707a369c8eaefebd31e8eea634084d66e)

(From OE-Core rev: d80b6e399e2c14b99c629b4548c7ec38e35fe93e)

Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Zhixiong Chi
98e368e4b6 tiff: Security fix CVE-2016-9540
tools/tiffcp.c in libtiff 4.0.6 has an out-of-bounds write on tiled
images with odd tile width versus image width. Reported as MSVR 35103,
aka "cpStripToTile heap-buffer-overflow."

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-9540

Patch from:
5ad9d8016f

(From OE-Core rev: cc97dc66006c7892473e3b4790d05e12445bb927)

(From OE-Core rev: ad2c4710ef15c35f6dd4e7642efbceb2cbf81736)

(From OE-Core rev: 6f58c18016258c0a49b4d0ef50d170a1bbb671f4)

Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Yi Zhao
3c61ee2f68 tiff: Security fix CVE-2016-3632
CVE-2016-3632 libtiff: The _TIFFVGetField function in tif_dirinfo.c in
LibTIFF 4.0.6 and earlier allows remote attackers to cause a denial of
service (out-of-bounds write) or execute arbitrary code via a crafted
TIFF image.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3632
http://bugzilla.maptools.org/show_bug.cgi?id=2549
https://bugzilla.redhat.com/show_bug.cgi?id=1325095

The patch is from RHEL7.

(From OE-Core rev: 9206c86239717718be840a32724fd1c190929370)

(From OE-Core rev: 0c6928f4129e5b1e24fa2d42279353e9d15d39f0)

(From OE-Core rev: f10cef0119c3bcf5b23a142f131a2d452ef2b837)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Zhixiong Chi
ec00137169 tiff: Security fix CVE-2016-3658
The TIFFWriteDirectoryTagLongLong8Array function in tif_dirwrite.c in the tiffset tool
allows remote attackers to cause a denial of service (out-of-bounds read) via vectors
involving the ma variable.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3658
http://bugzilla.maptools.org/show_bug.cgi?id=2546

Patch from:
45c68450be

(From OE-Core rev: c060e91d2838f976774d074ef07c9e7cf709f70a)

(From OE-Core rev: cc266584158c8dfc8583d21534665b6152a4f7ee)

(From OE-Core rev: 7ba456a35e0e75e0e8b3d8f9530aab312775672d)

Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Sona Sarmadi
11b217d60b expat: CVE-2012-6702, CVE-2016-5300
References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5300
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-6702
http://www.openwall.com/lists/oss-security/2016/06/04/5

Reference to upstream fix:
https://bugzilla.redhat.com/attachment.cgi?id=1165210
Squashed backport against vanilla Expat 2.1.1, addressing:
* CVE-2012-6702 -- unanticipated internal calls to srand
* CVE-2016-5300 -- use of too little entropy

(From OE-Core rev: c9a2e2f33e8b473f06a3941dab9b4ecccd111a23)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Ross Burton
c71ea3831a oeqa: fix hasPackage, add hasPackageMatch
hasPackage() was looking for the string provided as an RE substring in the
manifest, which resulted in a large number of false positives (e.g. libgtkfoo
would match "gtk+").

Rewrite the manifest loader to parse the files into a proper data structure,
change hasPackage to do full string matches, and add hasPackageMatch which does
RE substring matches.

(From OE-Core rev: b9409863af71899e02275439949e3f4cdfaf2d0f)

(From OE-Core rev: 990db70dac60541ef14977177fff4361e31c51eb)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Armin Kuster
3428c1db71 tzdata: update to 2016i
Briefly: Cyprus split into two time zones on 2016-10-30, and Tonga
  reintroduces DST on 2016-11-06.

  Changes to future time stamps

    Pacific/Tongatapu begins DST on 2016-11-06 at 02:00, ending on
    2017-01-15 at 03:00.  Assume future observances in Tonga will be
    from the first Sunday in November through the third Sunday in
    January, like Fiji.  (Thanks to Pulu ʻAnau.)  Switch to numeric
    time zone abbreviations for this zone.

  Changes to past and future time stamps

    Northern Cyprus is now +03 year round, causing a split in Cyprus
    time zones starting 2016-10-30 at 04:00.  This creates a zone
    Asia/Famagusta.  (Thanks to Even Scharning and Matt Johnson.)

    Antarctica/Casey switched from +08 to +11 on 2016-10-22.
    (Thanks to Steffen Thorsen.)

  Changes to past time stamps

    Several corrections were made for pre-1975 time stamps in Italy.
    These affect Europe/Malta, Europe/Rome, Europe/San_Marino, and
    Europe/Vatican.

    First, the 1893-11-01 00:00 transition in Italy used the new UT
    offset (+01), not the old (+00:49:56).  (Thanks to Michael
    Deckers.)

    Second, rules for daylight saving in Italy were changed to agree
    with Italy's National Institute of Metrological Research (INRiM)
    except for 1944, as follows (thanks to Pierpaolo Bernardi, Brian
    Inglis, and Michael Deckers):

      The 1916-06-03 transition was at 24:00, not 00:00.

      The 1916-10-01, 1919-10-05, and 1920-09-19 transitions were at
      00:00, not 01:00.

      The 1917-09-30 and 1918-10-06 transitions were at 24:00, not
      01:00.

      The 1944-09-17 transition was at 03:00, not 01:00.  This
      particular change is taken from Italian law as INRiM's table,
      (which says 02:00) appears to have a typo here.  Also, keep the
      1944-04-03 transition for Europe/Rome, as Rome was controlled by
      Germany then.

      The 1967-1970 and 1972-1974 fallback transitions were at 01:00,
      not 00:00.

(From OE-Core rev: daf95f7fd9f7ab65685d7b764d8e50df8d00d308)

(From OE-Core rev: 989be1015d678ed6b11fde3bd153a92a42e8ec72)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Armin Kuster
4cd7b56228 tzcode: update to 2016i
Changes to code

  The code should now be buildable on AmigaOS merely by setting the
  appropriate Makefile variables.  (From a patch by Carsten Larsen.)

(From OE-Core rev: d2b8c4ee535684f5d874082a7f76efbda1907ea5)

(From OE-Core rev: 866d48628393acc9ea95ba50453f34a192aaadc4)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Armin Kuster
5dd02c6db1 openssl: Security fix CVE-2016-8610
affects openssl < 1.0.2i

(From OE-Core rev: 0256b61cdafe540edb3cec2a34429e24b037cfae)

(From OE-Core rev: edb2fe2202a7e725aa6abd731bdef830ee2dbd97)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Yi Zhao
0ed07f2658 tiff: Security fix CVE-2016-3622
CVE-2016-3622 libtiff: The fpAcc function in tif_predict.c in the
tiff2rgba tool in LibTIFF 4.0.6 and earlier allows remote attackers to
cause a denial of service (divide-by-zero error) via a crafted TIFF
image.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3622
http://www.openwall.com/lists/oss-security/2016/04/07/4

Patch from:
92d966a5fc

(From OE-Core rev: 0af0466f0381a72b560f4f2852e1d19be7b6a7fb)

(From OE-Core rev: 928eadf8442cf87fb2d4159602bd732336d74bb7)

(From OE-Core rev: e2eeb68f33e671d9520afda149f5aea27ab546bd)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Yi Zhao
c33bac8883 tiff: Security fix CVE-2016-3623
CVE-2016-3623 libtiff: The rgb2ycbcr tool in LibTIFF 4.0.6 and earlier
allows remote attackers to cause a denial of service (divide-by-zero) by
setting the (1) v or (2) h parameter to 0.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3623
http://bugzilla.maptools.org/show_bug.cgi?id=2569

Patch from:
bd024f0701

(From OE-Core rev: d66824eee47b7513b919ea04bdf41dc48a9d85e9)

(From OE-Core rev: f0e77ffa6bbc3adc61a2abd5dbc9228e830c055d)

(From OE-Core rev: 4cb329454fec849ca0ea6106d78d1240c760bd11)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Yi Zhao
c76d565ce2 tiff: Security fix CVE-2016-3991
CVE-2016-3991 libtiff: Heap-based buffer overflow in the loadImage
function in the tiffcrop tool in LibTIFF 4.0.6 and earlier allows remote
attackers to cause a denial of service (out-of-bounds write) or execute
arbitrary code via a crafted TIFF image with zero tiles.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3991
http://bugzilla.maptools.org/show_bug.cgi?id=2543

Patch from:
e596d4e27c

(From OE-Core rev: d31267438a654ecb396aefced201f52164171055)

(From OE-Core rev: cf58711f12425fc1c29ed1e3bf3919b3452aa2b2)

(From OE-Core rev: a0115f89df6c082949796a75551ea43b35c39ccd)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:21 +01:00
Yi Zhao
04f04d0d17 tiff: Security fix CVE-2016-3990
CVE-2016-3990 libtiff: Heap-based buffer overflow in the
horizontalDifference8 function in tif_pixarlog.c in LibTIFF 4.0.6 and
earlier allows remote attackers to cause a denial of service (crash) or
execute arbitrary code via a crafted TIFF image to tiffcp.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3990
http://bugzilla.maptools.org/show_bug.cgi?id=2544

Patch from:
6a4dbb07cc

(From OE-Core rev: c6492563037bcdf7f9cc50c8639f7b6ace261e62)

(From OE-Core rev: d7165cd738ac181fb29d2425e360f2734b0d1107)

(From OE-Core rev: 5e87d1d9e2861521b52216625a68649a44748ce3)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Yi Zhao
d8cbc618cc tiff: Security fix CVE-2016-3945
CVE-2016-3945 libtiff: Multiple integer overflows in the (1)
cvt_by_strip and (2) cvt_by_tile functions in the tiff2rgba tool in
LibTIFF 4.0.6 and earlier, when -b mode is enabled, allow remote
attackers to cause a denial of service (crash) or execute arbitrary code
via a crafted TIFF image, which triggers an out-of-bounds write.

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3945
http://bugzilla.maptools.org/show_bug.cgi?id=2545

Patch from:
7c39352ccd

(From OE-Core rev: 04b9405c7e980d7655c2fd601aeeae89c0d83131)

(From OE-Core rev: 3a4d2618c50aed282af335ef213c5bc0c9f0534e)

(From OE-Core rev: 0add1a3b19c4807afdfcd1c2ea6f4a382466adf7)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Armin Kuster
1c73e41159 tzdata: Update to 2016h
Changes to future time stamps

    Asia/Gaza and Asia/Hebron end DST on 2016-10-29 at 01:00, not
    2016-10-21 at 00:00.  (Thanks to Sharef Mustafa.)  Predict that
    future fall transitions will be on the last Saturday of October
    at 01:00, which is consistent with predicted spring transitions
    on the last Saturday of March.  (Thanks to Tim Parenti.)

Changes to past time stamps

    In Turkey, transitions in 1986-1990 were at 01:00 standard time
    not at 02:00, and the spring 1994 transition was on March 20, not
    March 27.  (Thanks to Kıvanç Yazan.)

Changes to past and future time zone abbreviations

    Asia/Colombo now uses numeric time zone abbreviations like "+0530"
    instead of alphabetic ones like "IST" and "LKT".  Various
    English-language sources use "IST", "LKT" and "SLST", with no
    working consensus.  (Usage of "SLST" mentioned by Sadika
    Sumanapala.)

(From OE-Core rev: ff11ca44fec8e4b2aa523e032bd967e3ab8339a8)

(From OE-Core rev: 5637d1555b51569cdd7202ee47a0b913a0b429cb)

(From OE-Core rev: 0e4c2ba133b4c2feba53688ac98ad991382c08d9)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Armin Kuster
212ca3bee1 tzcode-native: update to 2016h
Changes to code

zic no longer mishandles relativizing file names when creating
symbolic links like /etc/localtime, when these symbolic links
are outside the usual directory hierarchy.  This fixes a bug
introduced in 2016g.  (Problem reported by Andreas Stieger.)

(From OE-Core rev: 9c5de646e01a83219be74e99dcf7c1e56ba38b53)

(From OE-Core rev: 9288b6e699abbf5b314029b0db9230ca159b335a)

(From OE-Core rev: 56eaca6fad1d1a53e2899ea6072dcc0b99a3ce67)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
384801e827 curl: CVE-2016-8625
IDNA 2003 makes curl use wrong host

Affected versions: curl 7.12.0 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102K.html

(From OE-Core rev: bf8d4e9c8a7fed4e190d600a6a26d314d4b15a08)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
5c9148ff6a curl: CVE-2016-8624
invalid URL parsing with '#'

Affected versions: curl 7.1 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102J.html

(From OE-Core rev: 3127e968c9e9bb2ba302553ba4eeeb030b1eee53)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
cec5e508ec curl: CVE-2016-8623
Use-after-free via shared cookies

Affected versions: curl 7.10.7 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102I.html

(From OE-Core rev: 3bbd9634e6ae3ebaf998812a316e7a84025d0949)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
ddc6a9f5cd curl: CVE-2016-8622
URL unescape heap overflow via integer truncation

Affected versions: curl 7.24.0 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102H.html

(From OE-Core rev: a712024f69a319c0b37ed5fd99ecdcaa9c3b0026)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
8b50a8676b curl: CVE-2016-8621
curl_getdate read out of bounds

Affected versions: curl 7.12.2 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102G.html

(From OE-Core rev: db6106a208891aeb3d2c00170e61bab8c648654a)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
12afe3c057 curl: CVE-2016-8620
glob parser write/read out of bounds

Affected versions: curl 7.34.0 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102F.html

(From OE-Core rev: 7308140d81299dca7db98259461d60e0fe86878e)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
f5e807efc7 curl: CVE-2016-8619
double-free in krb5 code

Affected versions: curl 7.3 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102E.html

(From OE-Core rev: 4e18b8af45e1e7769842952f773ba71276e24372)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
cf7507f8c4 curl: CVE-2016-8618
double-free in curl_maprintf

Affected versions: curl 7.1 to and including 7.50.3
Reference:
https://curl.haxx.se/docs/adv_20161102D.html

(From OE-Core rev: 4163dacd30373501313fc40fd678c525980d1ccd)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
eb0dff0c98 curl: CVE-2016-8617
OOB write via unchecked multiplication

Affected versions: curl 7.1 to and including 7.50.3

Reference:
https://curl.haxx.se/docs/adv_20161102C.html

(From OE-Core rev: 82415212303d75ca9a6f15a9abda42c9675efde4)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
9a72d46aed curl: CVE-2016-8616
case insensitive password comparison

Affected versions: curl 7.7 to and including 7.50.3

Reference:
https://curl.haxx.se/docs/adv_20161102B.html

(From OE-Core rev: 0bec84bd79b9e96500f304dec9eecaf7b11424f5)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Sona Sarmadi
ad2cce0f1e curl: CVE-2016-8615
cookie injection for other servers

Affected versions: curl 7.1 to and including 7.50.3

Reference:
https://curl.haxx.se/docs/adv_20161102A.html

(From OE-Core rev: ba4e218d1e09aaecbdb760a299826c03202a9ba9)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Daniel Díaz
c96936cfd9 weston: Add no-input-device patch to 1.9.0.
The included patch, backported from Weston master, allows
it to run without any input device at launch. An ini option
is introduced for this purpose, so there is no behavioral
change.

Related change in weston.ini:
  [core]
  require-input=true

Default is true; setting it false allows Weston to run
without a keyboard or mouse, which is handy for automated
environments.

(From OE-Core rev: c14624953c856b39bb9b80dba31a8ca41ecdca93)

Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Adrian Dudau
047e58b4ba qemu: Security fix CVE-2016-4952
affects qemu < 2.7.0

Quick Emulator (QEMU) built with the VMWARE PVSCSI paravirtual SCSI bus
emulation support is vulnerable to an OOB r/w access issue. It could
occur while processing the SCSI commands 'PVSCSI_CMD_SETUP_RINGS' or
'PVSCSI_CMD_SETUP_MSG_RING'.

A privileged user inside the guest could use this flaw to crash the
QEMU process, resulting in DoS.

References:
----------
http://www.openwall.com/lists/oss-security/2016/05/23/1

(From OE-Core rev: 3d6b4fd6bc4338b139ebcaf51b67c56cc97ba2ed)

Signed-off-by: Adrian Dudau <adrian.dudau@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Adrian Dudau
485e244db8 qemu: Security fix CVE-2016-4439
affects qemu < 2.7.0

Quick Emulator (QEMU) built with the ESP/NCR53C9x controller emulation
support is vulnerable to an OOB write access issue. The controller uses
a 16-byte FIFO buffer for command and data transfer. The OOB write
occurs while writing to this command buffer in the routine get_cmd().

A privileged user inside the guest could use this flaw to crash the
QEMU process, resulting in DoS.

References:
----------
http://www.openwall.com/lists/oss-security/2016/05/19/4
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-4441

(From OE-Core rev: 1bc071172236ea020cac9db96e33de81950a15ff)

Signed-off-by: Adrian Dudau <adrian.dudau@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:20 +01:00
Otavio Salvador
e8676b4f1a gstreamer1.0-libav: Add 'valgrind' config option
This fixes following error:

,----
| src/libavutil/log.c:51:31: fatal error: valgrind/valgrind.h: No such file or directory
|  #include <valgrind/valgrind.h>
`----
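
A sketch of how such an option is typically wired up as a
PACKAGECONFIG in the recipe (the flags and dependency name are
assumptions, not the exact change):

  PACKAGECONFIG[valgrind] = "--enable-valgrind,--disable-valgrind,valgrind"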

(From OE-Core rev: d32af0298ddfa88478f485aaffe2d36c69e1d9d6)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Zeeshan Ali
cef5f86f43 nss: Disable warning on deprecated API usage
nss itself enables Werror if gcc is version 4.8 or greater, which
fails the build against new glibc (2.24) because of the use of
readdir_r(), which is now deprecated. Let's just disable warnings on
deprecated API usage.

https://bugzilla.yoctoproject.org/show_bug.cgi?id=10644
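
One way to express such a change, shown here as a hypothetical
compiler-flag tweak rather than the exact patch:

  # stop -Werror from failing the build on readdir_r() deprecation
  CFLAGS += "-Wno-deprecated-declarations"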

(From OE-Core rev: 6df5997bc0a7f7af73f625b172f99964cfed9f6e)

Signed-off-by: Zeeshan Ali <zeeshan.ali@pelagicore.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Ross Burton
1a2ec16ec0 binutils: apply RPATH fixes from our libtool patches
We don't autoreconf/libtoolize binutils as it has very strict requirements, so
extend our patching of the stock libtool to include two fixes to RPATH
behaviour, as part of the solution to ensure that native binaries don't have
RPATHs pointing at the host system's /usr/lib.

This generally doesn't cause a problem but it can cause some binaries (such as
ar) to abort on startup:

./x86_64-pokysdk-linux-ar: relocation error: /usr/lib/libc.so.6: symbol
_dl_starting_up, version GLIBC_PRIVATE not defined in file ld-linux.so.2 with
link time reference

The situation here is that ar is built and, as it links to the host libc/loader,
has an RPATH for /usr/lib.  If tmp is wiped and then binutils is installed from
sstate, relocation occurs and the loader is changed to the sysroot, but an
RPATH for /usr/lib remains.  This means that the sysroot loader is used with
the host libc, which can be incompatible.  By telling libtool that the host
library paths are in the default search path, and ensuring that default
search paths are never added as RPATHs by libtool, the result is a binary that
links to what it should be linking to and nothing else.

[ YOCTO #9287 ]

(From OE-Core rev: 6b201081b622cc083cc2b1a8ad99d6f7d2bea480)

(From OE-Core rev: 29ddf96f8db2ac8d1aabbac21514ab3865603dcd)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Ross Burton
035c33c405 binutils: fix typo in libtool patch
There was a clear typo in a function name; correct it.

(From OE-Core rev: dcf44e184a807d76463a3bf1b2315e80b9469de3)

(From OE-Core rev: 6470e50928ad330a76442541ec5d864701c7fc68)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
minor fixup
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Ross Burton
8aaffcd59a classes/native: set lt_cv_sys_lib_dlsearch_path_spec
This variable is used by libtool to know what paths are on the default loader
search path.  As we have modified loader paths, native.bbclass can tell libtool
that both the sysroot libdir and the host library paths are searched, so no
RPATHs for those will be generated.

(From OE-Core rev: 2d0a1b029447842a6f97f72ae636c9020c4206a9)

(From OE-Core rev: f1849bbdf723c07c5ec1b8a5d484293b72927064)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Ross Burton
73274f258a classes/cross: set lt_cv_sys_lib_dlsearch_path_spec
This variable is used by libtool to know what paths are on the default loader
search path.  As we have modified loader paths, cross.bbclass can tell libtool
that both the sysroot libdir and the host library paths are searched, so no
RPATHs for those will be generated.

(From OE-Core rev: 5b61324fa76b27bb6ce13e78b17e767eed2f8f57)

(From OE-Core rev: add28b02e42ffc68a8762029521d08c13110b847)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Richard Purdie
b291829cfc rm_work: Ensure we don't remove sigbasedata files
We don't remove sigdata files; we also shouldn't remove sigbasedata files,
as removing them hinders debugging.

(From OE-Core rev: 988349f90c8dc5498b1f08f71e99b13e928a0fd0)

(From OE-Core rev: c8d96b10ee3bc2eae0fd269d2564286fd0bc82ed)

(From OE-Core rev: 014683be144a7e782c91cc5577b3576ca6a533fb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Richard Purdie
1bccc216ee sstate: Ensure we don't remove sigbasedata files
We don't remove sigdata files; we also shouldn't remove sigbasedata files,
as removing them hinders debugging.

(From OE-Core rev: 1ebd85f8dfe45b92c0137547c05e013e340f9cec)

(From OE-Core rev: 3764a5ce8a1f26b46c389c256c10596ed8d31cc7)

(From OE-Core rev: b7c06011fa057ae1aaf828a6249e7b76485b2d5a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:14:19 +01:00
Scott Rifenbark
8f4b7758b5 documentation: Updated YP set for 2.1.3 Krogoth release in May '17
1. Updated poky.ent to contain 2.1.3 variables
2. Updated mega-manual.sed to use "2.1.3" string
3. Updated all Manual Revision tables to use "May 2017" date

(From yocto-docs rev: 49e08a543347d7e6548f6873faf701a0e5e95ae8)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Paul Eggleton
95dae8b598 bitbake: lib/bb/checksum: avoid exception on broken symlinks
If using OE's externalsrc with a source tree that is not tracked by git
and contains broken symlinks, you can receive "TypeError: unorderable
types: NoneType() < str()" within the file checksum code due to:

 checksums.sort(key=operator.itemgetter(1))

Don't add files with no checksum to the checksums list in order to avoid
this.
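
A minimal sketch of the idea in Python (data and names illustrative, not
the exact bitbake code): drop entries whose checksum is None so the sort
key is always a string.

    import operator

    entries = [("a.c", "123abc"), ("broken-link", None), ("b.c", "456def")]
    # Only files that actually produced a checksum make it into the list.
    checksums = [e for e in entries if e[1] is not None]
    checksums.sort(key=operator.itemgetter(1))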

(Bitbake rev: 484fe5a3f5b840e5422cbdff0eef9aecfe944a19)

(Bitbake rev: c60f952a5adb1bcbab403779ce08927759bcfb63)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Ross Burton
129060f0b7 bitbake: fetch2/wget: attempt checkstatus again if it fails
Some services such as SourceForge seem to struggle to keep up under load, with
the result that over half of the autobuilder checkuri runs fail with
sourceforge.net "connection timed out".

Attempt to mitigate this by re-attempting the network operation once on failure.
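
A rough sketch of the retry shape (illustrative only, not the fetcher's
exact code):

    import urllib.request, urllib.error

    def checkstatus(url):
        def once():
            with urllib.request.urlopen(url, timeout=30) as r:
                return r.getcode() == 200
        try:
            return once()
        except urllib.error.URLError:
            # Transient failure (e.g. connection timed out): retry once.
            return once()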

(Bitbake rev: 54b1961551511948e0cbd2ac39f19b39b9cee568)

(Bitbake rev: 0b48acbf0428975e67012877417b9f90d3e1778c)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Hand applied
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Richard Purdie
f69b958176 bitbake: siggen: Ensure taskhash mismatches don't override existing data
We recalculate the taskhash to ensure the version we have matches
what we think it should be. When we write out a sigdata file, use
the calculated value so that we don't overwrite any existing file.
This leaves any original taskhash sigdata file intact to allow a
debugging comparison.

(Bitbake rev: dac68af6f4add9c99cb7adcf23b2ae89b96ca075)

(Bitbake rev: 03f6025a5b0cc4d883a9b2071e026769330752c8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Minor fixup
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Richard Purdie
c3c14808dc bitbake: siggen: Pass basehash to worker processes and sanity check reparsing result
Bitbake can parse metadata in the cooker and in the worker during builds. If
the metadata isn't deterministic, it can change between these two parses and
this confuses things a lot. It turns out to be hard to debug these issues
currently.

This patch ensures the basehashes from the original parsing are passed into
the workers and that these are checked when reparsing for consistency. The user
is shown an error message if inconsistencies are found.

There is debug code in siggen.py (see the "Slow but can be useful for debugging
mismatched basehashes" commented code); we don't enable this by default due to
performance issues. If you run into this message, enable this code and you will
find "sigbasedata" files in tmp/stamps which should correspond to the hashes
shown in this error message. bitbake-diffsigs on the files should show which
variables are changing.
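
A rough sketch of the consistency check this adds (the dicts here are
hypothetical stand-ins for bitbake's internal structures):

    cooker_basehashes = {"foo:do_compile": "aaa"}  # from the original parse
    worker_basehashes = {"foo:do_compile": "bbb"}  # from the reparse

    for tid, expected in cooker_basehashes.items():
        got = worker_basehashes.get(tid)
        if got != expected:
            print("ERROR: basehash for %s changed %s -> %s; "
                  "the metadata is not deterministic" % (tid, expected, got))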

(Bitbake rev: 46207262ee6cdd2e49c4765481a6a24702ca4843)

(Bitbake rev: aa873f982ae4a56b135abd9eee169794e4c3aadd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Fixed up due to python3 changes not being in krogoth.
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Richard Purdie
c60a0a51d7 bitbake: build: Ensure we preserve sigbasedata files as well as sigdata ones
We don't remove sigdata files; we also shouldn't remove sigbasedata files,
as removing them hinders debugging.

(Bitbake rev: 24611df046f798276e7aa3f5d65976249ee117d4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-05-18 13:12:30 +01:00
Richard Purdie
e59717e80f Revert "file: update SRCREV for 5.25 to fix fetch fail on missing commit"
This reverts commit b35225c88ff681a4a903f7fb4612ac768214f539.

Upstream restored the original hashes.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-03-21 22:39:25 +00:00
Paul Gortmaker
b4df9df462 file: update SRCREV for 5.25 to fix fetch fail on missing commit
Machines that cloned a while ago will have the commit, but new
deployments won't because it seems the upstream changed/rebased
and the old commit ID has been garbage-collected away.  Hence
the fetch fails to check out the named commit ID.

Both the old (gone) commit, and the "new" commit show the same
dates and commit log and point at 5.25, so hopefully this is
the right thing to do.  A git diff of the two seems to only show
a blanket uprev of CVS tags and deletion of a couple autogen'd
files, and no real source changes.

(From OE-Core rev: adb71e06768adadda7b69c3b5e81ca3ad67237f4)

Cc: Christos Zoulas <christos@zoulas.com>
(From OE-Core rev: b35225c88ff681a4a903f7fb4612ac768214f539)

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-03-20 13:59:38 +00:00
brian avery
ae9b341ecf bitbake: bitbake: toaster: settings set ALLOWED_HOSTS to * in debug mode
This is a backport of 7c3a47ed89

From the commit to master:
As of Django 1.8.16, Django is rejecting any HTTP_HOST header that is
not on the ALLOWED_HOST list.  We often need to reference the toaster
server via an FQDN, if we start it via webport=0.0.0.0:8000 for instance,
and are hitting the server from a laptop. This change does reduce the
protection from a DNS rebinding attack; however, if you are running the
toaster server outside a protected network, you should be using the
production instance.
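
A minimal sketch of what such a settings change looks like in a Django
settings.py (illustrative; not necessarily toaster's exact code):

    DEBUG = True
    # Accept any Host header in debug mode; production keeps a strict list.
    ALLOWED_HOSTS = ['*'] if DEBUG else ['toaster.example.com']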

[YOCTO #10586]

(Bitbake rev: 449dc9b955dfbe048e380f5ab9fd61c3d1489dad)

Signed-off-by: brian avery <brian.avery@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-28 14:23:48 +00:00
Scott Rifenbark
3bf928a3b6 dev-manual: Fixed typo for "${INC_PR}.0"
The string appeared in the text as "$(INC_PR).0".  So, fixed
it to be proper with the curly braces.

(From yocto-docs rev: 5fa1691503fdf82476616a4ebb13c47d92deb03e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-16 10:38:24 +00:00
Scott Rifenbark
0742e8a43b documentation: Updated manual rev tables for Dec 2016 2.1.2 release
(From yocto-docs rev: 922482b4b9bc9a28858ac2760df027d3828f2d5a)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-16 10:38:23 +00:00
Richard Purdie
cca8dd15c8 build-appliance-image: Update to krogoth head revision
(From OE-Core rev: 28da89a20b70f2bf0c85da6e8af5d94a3b7d76c9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-09 00:06:48 +00:00
Richard Purdie
8e4188e274 poky: Update distro version to 2.1.2
(From meta-yocto rev: 5e0f74876155b2174e9b078e1829559a58347c9c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-09 00:06:38 +00:00
Armin Kuster
0ad194919f meta-linux-yocto: update 4.4 to 4.4.26
(From meta-yocto rev: 3e177af3d87ec5bb162a2fe0da2a030ffede2115)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:15 +00:00
Armin Kuster
49a01fd044 meta-linux-yocto: update to 4.1.33
(From meta-yocto rev: ab7e0db588462e11ff7c9cae04c3173d575b8623)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:15 +00:00
Enrico Scholz
0aedf304e5 bitbake: fetch: copy files with -H
When using a PREMIRROR with plain (non-unpack) files, a SRC_URI like

SRC_URI = "file://devmem2.c"

will cause devmem2.c to be a symlink in the WORKDIR pointing to the
local PREMIRROR.

Trying to apply a patch on this file will either modify the file on
the PREMIRROR or will fail due to sanity checks:

ERROR: devmem2-1.0-r7 do_patch: Command Error: 'quilt --quiltrc /cache/build-ubuntu/sysroots/x86_64-oe-linux/etc/quiltrc push' exited with 1  Output:
Applying patch devmem2-fixups-2.patch
File devmem2.c is not a regular file -- refusing to patch
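
A minimal illustration of the fix's intent (paths hypothetical; the
fetcher shells out to cp, as the -H flag in the subject suggests):

    import subprocess
    # Without -H the symlink itself lands in WORKDIR; with -H the
    # command-line symlink is dereferenced and a regular file is copied.
    subprocess.check_call(["cp", "-afH", "premirror/devmem2.c", "workdir/"])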

(Bitbake rev: e82862ba8fedb2c5cd478c731b3d259d16c6e3d8)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:15 +00:00
Aníbal Limón
f0f6acac03 bitbake: bb.event: fix infinite loop on print_ui_queue
If bitbake ends before _uiready and bb.event.LogHandler was added
to the bitbake logger, it causes an infinite loop when logging
something.

The scenario is that print_ui_queue is called at exit and executes
the log handlers [2]; one of them is bb.event.LogHandler, and this handler
appends the same entry to ui_queue, causing the infinite loop [3].

To fix this, a new copy of the ui_queue list is created when iterating
over ui_queue.
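
A minimal sketch of the fix's shape (illustrative, not the exact bitbake
code): iterating over a copy means a handler that re-queues events can no
longer extend the list being walked.

    ui_queue = ["event1", "event2"]
    for event in ui_queue[:]:    # iterate a snapshot of the queue
        ui_queue.append(event)   # a re-queueing handler no longer loops forever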

[YOCTO #10399]

[1] https://bugzilla.yoctoproject.org/show_bug.cgi?id=10399#c0
[2] http://git.openembedded.org/bitbake/tree/lib/bb/event.py?id=41d9cd41d40b04746c82b4a940dca47df02514fc#n156
[3] http://git.openembedded.org/bitbake/tree/lib/bb/event.py?id=41d9cd41d40b04746c82b4a940dca47df02514fc#n164

(Bitbake rev: bb56a8957255999b9ffd1408d249cc5b715b5a3a)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Joshua Lock
bae35b3e5f bitbake: event: prevent unclosed file warning in print_ui_queue
Use logger.addHandler(), rather than assigning an array of Handlers
to the logger's handlers property directly, to avoid a warning from
Python 3 about unclosed files:

$ bitbake
Nothing to do.  Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
WARNING: /home/joshuagl/Projects/poky/bitbake/lib/bb/event.py:143: ResourceWarning: unclosed file <_io.TextIOWrapper name='/home/joshuagl/Projects/poky/build/tmp/log/cooker/qemux86/20161004094928.log' mode='a' encoding='UTF-8'>
  logger.handlers = [stdout]
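
A minimal sketch of the difference (illustrative):

    import logging, sys

    logger = logging.getLogger("BitBake")
    stdout = logging.StreamHandler(sys.stdout)
    # logger.handlers = [stdout]  # old: silently orphans existing handlers
    logger.addHandler(stdout)     # new: any FileHandler (and its file) stays managed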

(Bitbake rev: 775888307dc2917ef4b52799cc1600a6b3a01abe)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Ed Bartosh
2de121703d bitbake: event.py: output errors and warnings to stderr
All logging messages are printed on stdout when processing the
UI event queue. This makes it impossible to distinguish between
errors and normal bitbake output. Outputting to stderr or stdout
depending on log level should fix this.
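
A minimal sketch of the routing (illustrative, not the exact event.py code):

    import logging, sys

    def queue_stream(record):
        # Warnings and errors go to stderr, everything else to stdout.
        return sys.stderr if record.levelno >= logging.WARNING else sys.stdout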

(Bitbake rev: c4029c4f00197804511fc71e1190d34eb120212a)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
8a12e713f9 perf: adapt to Makefile.config
commit 4842576cd857 [perf tools: Move config/Makefile into Makefile.config]
relocated the configuration Makefile of perf. As such, we need to adapt
our fixup routines to work with the Makefile no matter where it is.

(From OE-Core rev: 573d584ff704025387782e35ed344e73294d6d0a)

(From OE-Core rev: 857f0190d334abc6e338938d6b1db1664d5c6987)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Sujith Haridasan
2b0f105e59 perf: Fix to obey LD failure
This patch brings in the last bit from meta-mentor needed for perf
to build successfully with the minnowmax BSP. The meta-mentor
commit for the same is:
http://git.yoctoproject.org/cgit/cgit.cgi/meta-mentor/commit/meta-mentor-staging?id=a8db95c0d4081cf96915e0c3c4063a44f55e21cc

The previous fix:
http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/recipes-kernel/perf?id=ef942d6025e1a339642b10ec1e29055f4ee6bd46
was incomplete and was not submitted upstream; due to that, this change is required.

When built on minnowmax (machine name: intel-corei7-64),
an error is noticed during do_compile:

 /home/sujith/codebench-linux-install-2015.12-133-i686-pc-linux-gnu/codebench/bin/i686-pc-linux-gnu-ld:
Relocatable linking with relocations from format elf64-x86-64
(/home/sujith/MEL/dogwood/build-minnowmax/tmp/work/intel_corei7_64-mel-linux/perf/1.0-r9/perf-1.0/fd/array.o)
to format elf32-i386 (/home/sujith/MEL/dogwood/build-minnowmax/tmp/work/intel_corei7_64-mel-linux/perf/1.0-r9/perf-1.0/fd/libapi-in.o)
is not supported

This change helps fix the issue.

(From OE-Core rev: 122ae03e2f1a2252a6914d51087531557f9a08f2)

(From OE-Core rev: 3c4f57c163100ec07ca5f463d8ca7f3f0eed3d3c)

Signed-off-by: Sujith Haridasan <Sujith_Haridasan@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Christopher Larson
c9f172aa5e perf: Fix to obey LD failure on qemux86-64
When built on an i686 host for qemux86-64 without the
fix to obey LD, it fails:

/scratch/dogwood/toolchains/x86_64/bin/i686-pc-linux-gnu-ld:
Relocatable linking with relocations from format elf64-x86-64
(/scratch/dogwood/perf-ld-test/build/tmp/work/qemux86_64-mel-linux/perf/1.0-r9/perf-1.0/fs/fs.o)
to format elf32-i386 (/scratch/dogwood/perf-ld-test/build/tmp/work/qemux86_64-mel-linux/perf/1.0-r9/perf-1.0/fs/libapi-in.o)
is not supported

This is because LD includes HOST_LD_ARCH, which contains TUNE_LDARGS,
which is -m elf32_x86_64 for x86_64. Without that, direct use of ld will fail.

(From OE-Core rev: 0ce06611068e74e6ea2e226e3f967aaa91fecd25)

(From OE-Core rev: a98f6ed189f564bd1897308a893e294456c1666a)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Sujith Haridasan <Sujith_Haridasan@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Jussi Kukkonen
f7e1cd9f85 This is a backport from master of 2 consecutive fixes.
First fix commit:
1100af93cb
Second fix commit:
b7b2e34871

The error these commits fix can prevent Eclipse debugging on
certain target configurations.

* base-files: Add shell test quoting

  tty can return "not a tty", which results in warnings when /etc/profile
  is executed.

  (From OE-Core rev: eed586dd238efe859442b21b425f04e262bcdb2b)

  Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
  Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

* base-files: fix profile error under < /dev/null

  Previous attempts to constrain execution of `resize` to only TTYs did
  not properly handle situations when `tty` would return the string "not a
  tty". The symptom is "/etc/profile: line 34: test: too many arguments".
  Fix this by utilizing the exit code of `tty`. Also use `case` instead of
  `cut` to eliminate a subshell.

  (From OE-Core rev: e67637e4472ff3a1e2801b84ee3d69d4e14b9efc)

  Signed-off-by: Richard Tollerton <rich.tollerton@ni.com>
  Signed-off-by: Ross Burton <ross.burton@intel.com>
  Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

(From OE-Core rev: e86ab7487450aea7e44ff70b225517dbb056e3b5)

Signed-off-by: brian avery <brian.avery@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
California Sullivan
ec240f45ae parselogs.py: Add disabling eDP error to x86_common whitelist
The NUC6 firmware tells the kernel to try and initialize an embedded
DisplayPort it does not have, causing this warning. It's harmless, so
just whitelist it.

Fixes [YOCTO #9434].

(From OE-Core rev: 4c3fb7f63aad4a5d1b9720c76091cd0646859c2a)

(From OE-Core rev: 117bd3402001878314317a58d583b55f238a4cd8)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Richard Purdie
e92679a6eb oeqa/parselogs: Don't use cwd for file transfers
If you run:

MACHINE=A bitbake <image> -c testimage
MACHINE=B bitbake <image> -c testimage

and A has errors in parselogs, machine B can pick these up and cause
immense confusion. This is because the test transfers the log files
to cwd which is usually TOPDIR. This is clearly bad and this patch
uses a subdir of WORKDIR to ensure machines don't contaminate each
other.

Also ensure any previous logs are cleaned up from any existing
transfer directory.

(From OE-Core rev: ac8f1e58ca3a0945795087cad9443be3e3e6ead8)

(From OE-Core rev: 64ff5be5909705395b2db8d64e8d2c2c76092e1c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
California Sullivan
f979c50029 parselogs.py: Ignore Skylake graphics firmware load errors on genericx86-64
These errors can't be fixed without adding the firmware to the initramfs
and building it into the kernel, which we don't want to do for
genericx86-64. Since graphics still work acceptably without the firmware
blobs, just ignore the errors for that MACHINE.

(From OE-Core rev: d73a26a71b2b16be06cd9a80a6ba42ffae8412c4)

(From OE-Core rev: cc1b341b0a8e834a15c4efe107886ad366f7678c)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Richard Purdie
a6b8fda00c parselogs: Ignore uvesafb timeouts
We're periodically seeing uvesafb timeouts on the autobuilder. Whitelist these
errors as there is little it seems we can do about them and we therefore
choose to ignore them rather than fail the builds.

[YOCTO #8245]

There is a better solution proposed in the bug with a -1 timeout; however,
this avoids failed builds until such time as that is implemented.

(From OE-Core rev: 8097f2da79b7862733494d2321e3dfdb0880804d)

(From OE-Core rev: 37356aa62558434bd3a6402c35f16f2f75903af0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Richard Purdie
1b9a98f78c parselogs: Ignore amd_nb warning messages under qemux86*
(From OE-Core rev: 857f4ca134e4575e71993b4fa255ebafec612d1e)

(From OE-Core rev: 2effeec9a7f689f03ab74421280335214f125869)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
California Sullivan
d72e66f34b parselogs.py: Add dmi and ioremap errors to ignore list for core2
These errors have been occurring since the introduction of the 4.4
kernel with no apparent functionality loss. Whitelist them for now.

(From OE-Core rev: 47b9058994f15507fc18ce0b08ac82a4c052966e)

(From OE-Core rev: 34df2a5aebf69a9022aa7c0b8b3dad438ecdec48)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
California Sullivan
e2c2d723ed parselogs.py: Add amd_nb error to x86_common whitelist
This has always silently failed on hardware without AMD Northbridge,
and a recent kernel patch made it not silent. It would be ideal to only
whitelist the error for genericx86 MACHINEs and disable the CONFIG
option that enables it in intel-* MACHINEs, but in order to disable
this configuration option we would have to enable EXPERT and
DEBUG_KERNEL, which we don't want. Instead just whitelist it on all
x86 MACHINEs.

Fixes [YOCTO #10261].

(From OE-Core rev: 9c432dae1045a087f8eb2de7c9bd3a9cbd46c459)

(From OE-Core rev: bc575e92c7c2df541b79a33670ddb06ef9778995)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
478a38187f linux-yocto/4.1: fix CVE-2016-5195 (dirtycow)
Backporting commit 19be0eaffa [mm: remove gup_flags FOLL_WRITE games
from __get_user_pages()] to address the dirtycow exploit.

(From OE-Core rev: 8470ea4cfd5fca4c9573e39c7c3486aeb310990a)

(From OE-Core rev: e501785bcb8bfdbeaba93e1c2f8275780a3425a6)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
cc811f4992 linux-yocto/4.4: update to v4.4.26
Integrating the 4.4.23->26 -stable releases. Among other fixes
this contains commit:

  mm: remove gup_flags FOLL_WRITE games from __get_user_pages()

Which addresses CVE-2016-5195.

(From OE-Core rev: e2472c1a66ef62f6904cc9b635b275e7da32e51a)

(From OE-Core rev: 5f2ab4bc14863e9ddfd622b770b28b8cb0d3c0d6)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
f7ec29ca3f linux-yocto/4.4/4.8: kernel config warning cleanups
Merging the following patches into 4.4 and 4.8 to remove kernel
configuration warnings:

  bbaf01752b01 meta-yocto-bsp: beaglebone: remove the stale kernel options
  552a83790b17 features: Fix configcheck warnings in features used by intel-quark BSPs
  c33d9c2c575f features: Fix configcheck warnings in features used by intel-core* BSPs

(From OE-Core rev: ac9842bc3a17f15c3807aa06e4469c030346420e)

(From OE-Core rev: e353d51c8caf3ed09715997b1ff973da8534c683)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Dropped the 4.8 kernel changes; 4.8 is not supported.
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
0d390bfb5a linux-yocto/4.1/4.4: remove inappropriate standard/base patches
Before standard/intel/* was created in the 4.1 and 4.4 kernel trees,
some patches were merged to standard/base to add features/support for
intel platforms.

While this isn't entirely bad, there have been some compile issues
reported in some configurations. Since we don't need these commits
on standard/base, we can relocate them to make standard/base upstream
clean.

This commit removes those patches from standard/base, and restores
them to the standard/intel/* branches.

(From OE-Core rev: 2c19e6378697141992c9bd7ff2bd4d57a4f9fe9b)

(From OE-Core rev: 3b7ad0bb67f6789ec038ea7df41274bae78e21a3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:14 +00:00
Bruce Ashfield
ca9d26a08d linux-yocto/4.4: update to v4.4.22
(From OE-Core rev: 286d893f9e7caed06035f7916492a74e0212df6a)

(From OE-Core rev: 3865d4cfe00e8e1ee2b84e742f154ff0c994a253)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Hand applied to manage merge conflicts.
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Bruce Ashfield
49de8caab0 linux-yocto/4.1: update to 4.1.33
(From OE-Core rev: af4e9d92ae23f0e668da4732ef79cd1f1bb6fc1f)

(From OE-Core rev: 81b67e1de7ba8f91f9a73ee274796ee685cf2e90)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Hand applied to manage merge conflicts.
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Juro Bystricky
d672a4cc3c gcc-runtime.inc: Add CPP support for x86-64-x32 tune
Using the following setup (as specified in yocto sample code):

MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:libx32"
DEFAULTTUNE_virtclass-multilib-libx32 = "x86-64-x32"

We fail to compile simple CPP programs because CPP cannot
find relevant header files, looking for them in a non-existent place.
To fix this, we create a symlink of the name CPP expects and point it to
the corresponding existing directory.
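
A rough sketch of the workaround's shape (all names hypothetical):

    import os
    actual = "existing-include-dir"    # the directory that really exists
    expected = "name-cpp-looks-for"    # the name CPP searches for
    os.makedirs(actual, exist_ok=True)
    if not os.path.lexists(expected):
        os.symlink(actual, expected)   # point the expected name at it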

[YOCTO#10354]
[YOCTO#10380]

(From OE-Core rev: 9f9be229040f4f9a523a1e25afd78d5c3f4efc23)

(From OE-Core rev: 979b28c55c3b9b0134dbddbb09e30b9bf0db9231)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Juro Bystricky
be15df5099 gcc-runtime.inc: add CPP support for mips64-n32 tune
This patch fixes the problem where the CPP compiler cannot find include files.
The compiler is configured to look for the files in places that do not exist.
When querying the CPP for search paths, we observe messages such as these:

multilib configuration:

MACHINE="qemumips64"
require conf/multilib.conf
MULTILIBS = "multilib:lib64 multilib:lib32"
DEFAULTTUNE = "mips64-n32"
DEFAULTTUNE_virtclass-multilib-lib64 = "mips64"
DEFAULTTUNE_virtclass-multilib-lib32 = "mips32r2"

ignoring nonexistent directory "<path>/sysroots/mips64-n32-poky-linux-gnun32/usr/include/c++/6.2.0/mips64-poky-linux/32

single lib configuration:
MACHINE="qemumips64"
DEFAULTTUNE = "mips64-n32"
ignoring nonexistent directory "<path>/sysroots/mips64-n32-poky-linux-gnun32/usr/include/c++/6.2.0/mips64-poky-linux/

To fix this, create a symlink of the name CPP expects and point it to the corresponding "gnun32" directory.

[YOCTO#10142]

(From OE-Core rev: 55115f90f909d27599c686852e73df321ad1edff)

(From OE-Core rev: fe61e95a3368d0bc0e66958d0e703b1e3c40c9bb)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Juro Bystricky
2cb87d12d2 libgcc-common.inc: Fix broken symlinks for multilib SDK
This patch fixes broken "32" symlinks for multilib settings:

MACHINE = "qemuarm64"
require conf/multilib.conf
MULTILIBS = "multilib:lib32"
DEFAULTTUNE_virtclass-multilib-lib32 = "armv7a"

and

MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:libx32"
DEFAULTTUNE_virtclass-multilib-libx32 = "x86-64-x32"

[YOCTO#8642]
[YOCTO#10380]

(From OE-Core rev: 2810671a0f96776c135137f27a5ca52194ddd692)

(From OE-Core rev: 1c9a1b518d4c653799d4f6ca4bc5ef191fa8a349)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Sona Sarmadi
57531002b8 bash: Security fix CVE-2016-0634
References to upstream patch:
https://ftp.gnu.org/pub/gnu/bash/bash-4.3-patches/bash43-047
http://openwall.com/lists/oss-security/2016/09/16/8

(From OE-Core rev: 24455c63494b7030b8a337f0dad98687d15d9ce6)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Sona Sarmadi
c4061a0a68 dropbear: fix multiple CVEs
CVE-2016-7406
CVE-2016-7407
CVE-2016-7408
CVE-2016-7409

References:
https://matt.ucc.asn.au/dropbear/CHANGES
http://seclists.org/oss-sec/2016/q3/504

[YOCTO #10443]

(From OE-Core rev: cca372506522c1d588f9ebc66c6051089743d2a9)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Markus Lehtonen
6962ee3689 rpm: prevent race in tempdir creation
This patch fixes an extremely rare race condition in the creation of the
rpmdb temporary directory. The "rpmdb-more-verbose-error-logging" patch is
still left in place, just in case.

[YOCTO #9416]

(From OE-Core rev: 84de3283fa2a2908d367eb58953903ae685b0298)

(From OE-Core rev: 1ae228ee5181f12955356c1fe10d341373dd5fcc)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Khem Raj
191666022a binutils: Fix gas error with cfi_section inconsistencies
This error is visible when using clang but not when using gcc;
it has been reported and fixed upstream.

llvm bug https://llvm.org/bugs/show_bug.cgi?id=29017
binutils bug https://sourceware.org/bugzilla/show_bug.cgi?id=20648

(From OE-Core rev: e5a81575f11dc2a0ec9ee4184514750d2dbd09aa)

(From OE-Core rev: e299ac7d5b1e7af7940766e1232f6e425029fab6)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

hand merged to apply against 2.26
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Ola x Nilsson
53766fb01f devtool: Use the wildcard flag in update_recipe_patch
The --wildcard-version flag was only used in the srcrev variant of the
update-recipe command.

(From OE-Core rev: d3057cba0b01484712fcee3c52373c143608a436)

(From OE-Core rev: ab9ec025122357f2736fe31a398a2db04a2b7b3b)

Signed-off-by: Ola x Nilsson <ola.x.nilsson@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Ola x Nilsson
3134fb2861 devtool: build_image: Fix recipe filter
The missing split() causes dev and dbg packages to match.

(From OE-Core rev: bf83e0f0a3d52958c4380599f1afc4b8e058afd7)

(From OE-Core rev: d2196d8fd25df21e9cc569f0d37f20bf6242de92)

Signed-off-by: Ola x Nilsson <ola.x.nilsson@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
b169435134 classes/externalsrc: re-run do_configure when configure files change
If the user modifies files such as CMakeLists.txt in the case of cmake,
we want do_configure to re-run so that those changes can take effect. In
order to accomplish that, have a variable CONFIGURE_FILES which
specifies a list of files that will be put into do_configure's checksum
(either full paths, or just filenames which will be searched for in the
entire source tree). CONFIGURE_FILES then just needs to be set
appropriately depending on what do_configure is doing; for now I've set
this for autotools and cmake which are the most common cases.
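
A rough sketch of the underlying idea in Python (illustrative, not the
bbclass code): hash the configure input files so the task checksum changes
whenever they do.

    import hashlib, os

    def configure_files_hash(srctree, names):
        h = hashlib.sha256()
        for root, _, files in sorted(os.walk(srctree)):
            for f in sorted(files):
                if f in names:
                    with open(os.path.join(root, f), "rb") as fh:
                        h.update(fh.read())
        return h.hexdigest()

    # e.g. configure_files_hash("/path/to/srctree", {"CMakeLists.txt"})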

Fixes [YOCTO #7617].

(From OE-Core rev: 923fc20c2862a6d75f949082c9f6532ab7e2d2cd)

(From OE-Core rev: 4019bb8454c36c4baf1d4f23e2d4fafb6c47fbc0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
95e3d71080 devtool: add: fix error message when only specifying a recipe name
We were supposed to be printing out the specified recipe name here but I
forgot to specify a parameter for the string.
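
A minimal illustration of the bug class (message text hypothetical):

    recipename = "example-recipe"
    broken = "Unable to find recipe %s"              # %s left unfilled
    fixed = "Unable to find recipe %s" % recipename  # parameter supplied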

(From OE-Core rev: 87f844e533adfc229a5d26857a82cc6b125216c8)

(From OE-Core rev: 9bff81f882f30b9f317516330608c203601a4769)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
2de1a5cefb oe-selftest: recipetool: add tests for git URL mangling
Add three tests to verify that the git URL mangling is working the way
it's supposed to. This should prevent us regressing on this again in
future.

(From OE-Core rev: d8d01f462ddbb79cff23b544fcd0ce251f05f8ce)

(From OE-Core rev: e8d0b5ca2e0f6086d9e9873137b335a527630a54)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
a7c3e18de0 recipetool: create: fix greedy regex that broke support for github tarballs
The regex here needs to be anchored to the end or it'll match longer
URLs, which was exactly what I was trying to avoid. This regression was
introduced in OE-Core revision 7998dc3597657229507e5c140fceef1e485ac402.
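
A minimal illustration of the anchoring (pattern simplified, not the
actual recipetool regex):

    import re
    greedy = re.compile(r'https?://github\.com/[^/]+/[^/]+/?')
    anchored = re.compile(r'https?://github\.com/[^/]+/[^/]+/?$')
    url = 'https://github.com/user/proj/archive/v1.0.tar.gz'
    assert greedy.match(url)        # the tarball URL wrongly matches
    assert not anchored.match(url)  # '$' leaves it to the wget fetcher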

Fixes [YOCTO #10023].

(From OE-Core rev: 9291c5d3c257d5ada7605dfe46ababda08f6d3c1)

(From OE-Core rev: 9e5886036fd77454dff1cb359c2c6cebca60ecbe)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
bd2cc670be lib/oe/recipeutils: fix patch_recipe*() with empty input
If you supplied an empty file to patch_recipe() (or an empty list to
patch_recipe_lines()) then the result was IndexError because the code
checking to see if it needed to add an extra line of padding didn't
check to see if there were in fact any lines before trying to access the
last line.
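
A minimal illustration of the guard (illustrative):

    lines = []   # empty input
    # Before the fix, lines[-1] raised IndexError on empty input.
    if lines and not lines[-1].strip():
        lines.append("")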

Fixes [YOCTO #9972].

(From OE-Core rev: 92a73e870478ddb2a2d137e3fff28828809bec2e)

(From OE-Core rev: 5ce14441f02894e68881807138e8f45074900ba2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
b108f2a6de recipetool: create: fix handling of github URLs
For a while now, Github hasn't been advertising a specific repository
URL since cloning the web URL with git works. Armed with this knowledge
and fully expecting people to just paste the github URL, we need to
handle this situation specially. If it looks like a github URL to the
root of a repository then treat it as a git repository instead of a
normal https URL to be fetched by the wget fetcher.
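
A rough sketch of the special-casing (regex and rewrite rule assumed, not
recipetool's exact code):

    import re

    def mangle(url):
        m = re.match(r'https?://github\.com/([^/]+)/([^/]+?)(?:\.git)?/?$', url)
        if m:
            return 'git://github.com/%s/%s;protocol=https' % m.groups()
        return url  # anything else stays a plain https URL for wget

    # mangle('https://github.com/user/proj') -> 'git://github.com/user/proj;protocol=https'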

(From OE-Core rev: 7998dc3597657229507e5c140fceef1e485ac402)

(From OE-Core rev: fc8d9266fd0e1733bc7caf4dddb05209b9ad7e9e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
2fcc8d6e52 devtool: reset: allow reset to work if the recipe file has been deleted
We were attempting to open the recipe file unconditionally here - we
need to account for the possibility that the recipe file has been
deleted or moved away by the user.
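
A minimal sketch of the guard (path hypothetical):

    import os
    recipefile = "workspace/recipes/foo/foo.bb"
    if os.path.exists(recipefile):
        with open(recipefile) as f:
            contents = f.read()
    # otherwise carry on with the reset instead of raising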

(From OE-Core rev: 47822a2aff56fd338c16b5ad756feda9f395a8a1)

(From OE-Core rev: 6fb1bb71b92d47eda48d24d3c0440b5219ac1fcd)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Paul Eggleton
c3c25ac53d devtool: update-recipe: fix --initial-rev option
In OE-Core revision 7baf57ad896112cf2258b3e2c2a1f8b756fb39bc I changed
the default update-recipe behaviour to only update patches for commits
that were changed; unfortunately I failed to handle the --initial-rev
option which was broken after that point. Rework how the initial
revision is passed in so that it now operates correctly.

(From OE-Core rev: b2ca2523cc9e51a4759b4420b07b0b67b3f5ac43)

(From OE-Core rev: d62aa298b80af78bc89f6e64736ce7383c3fa2de)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Zheng Ruoqin
7343438092 bind: fix two CVEs
Add two CVE patches from upstream
git: https://www.isc.org/git/

1. CVE-2016-2775.patch
2. CVE-2016-2776.patch

(From OE-Core rev: 5f4588d675e400f13bb6001df04790c867a95230)

(From OE-Core rev: ecc0a8ba077305c51804fd7bc287758b43420a76)

Signed-off-by: zhengruoqin <zhengrq.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:13 +00:00
Saul Wold
8f5becc3ab archiver: fix gcc-source handling
The source archiver was not handling the gcc-source target correctly. Since
gcc-source uses the work-shared directory, we don't want to unpack and patch it
twice, just as the comments say, but the code to check for the gcc-source
target was missing.

[YOCTO #10265]

(From OE-Core rev: bbac0699ceadb7a25a60643fb23dffce8b4d23d0)

(From OE-Core rev: 7c83d20fe48064df2200f4aa9e7c7d772b69f574)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:12 +00:00
Pascal Bach
732dd581f3 glibc: fix CVE-2016-1234, CVE-2016-3075, CVE-2016-5417
Only relevant for krogoth since version 2.24+ (master, morty) is not affected.

(From OE-Core rev: 88be4b40bacc7c8a08fb76fc220f491deb2c1c3a)

Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-08 23:47:12 +00:00
Scott Rifenbark
40f4a6d075 bsp-guide: Updated the yocto-bsp create selections in the example.
(From yocto-docs rev: 3008f226da2466e3ecaf8bdbc458b4df58d1a618)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-13 23:16:56 +01:00
Scott Rifenbark
88b7f1a1e2 yocto-project-qs: Fixed Minnow MAX build example
Fixes [YOCTO #9667]

The actual command in the example to build the image for
Minnow MAX should be 'core-image-base'.  I changed it
accordingly.

(From yocto-docs rev: ea8c9eaa069a44807800a7143f2a4be40707cc74)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-13 23:16:56 +01:00
Scott Rifenbark
8e2ab57852 yocto-project-qs: Altered MinnowBoard MAX example
Fixes [YOCTO #9667]

The example that built the image for the MinnowBoard MAX was
building core-image-minimal.  This was not ideal.  I have fixed
it so that several types of images are suggested as examples, with
a reference to the Images chapter in the ref-manual.  The actual
command now builds core-image-base.

(From yocto-docs rev: feb4c1ae79fa15ef03dfba3c629f8da8bbd58e24)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-13 23:16:56 +01:00
Scott Rifenbark
204b2bae4a bsp-guide: Fixed the yocto-bsp create example output
Fixes [YOCTO #10385]

The output for the yocto-bsp create example uses 4.1 as the
default kernel when it should be 4.4.  I updated the example
output to reflect reality for the Krogoth release.

(From yocto-docs rev: 9c2eea8693e439accdee6091484072aa54a5d02e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-11 08:51:13 +01:00
Christophe Chapuis
e93596fe74 binutils: fix AR issue when opkg is unpacking IPKs containing empty entries
* this patch is backported from 2.26.1 which is already in oe-core/master
  since this patch:
  commit 37e8b6ecf9f9163d7b5b3becdc2feba57df4838f
  Author: Khem Raj <raj.khem@gmail.com>
  Date:   Thu Jul 7 11:08:29 2016 -0700
  Subject: binutils: Upgrade to 2.26.1

  -SRCREV = "71fa566a9cf2597b60a58c1d7c148bab637454a6"
  +SRCREV = "c29838e7f484e0b5714b02e7feb9a88d3a045dd2"

* verified that the patch exists in this SRCREV range:
  ~/projects/binutils $ git log --oneline 71fa566a9cf2597b60a58c1d7c148bab637454a6..c29838e7f484e0b5714b02e7feb9a88d3a045dd2
  ...
  343a405 Allow zero length archive elements
  ...
  so it isn't needed in the master branch

(From OE-Core rev: a8f44dff13481feaa97e494a3aeafb5b63d40f3f)

Signed-off-by: Christophe Chapuis <chris.chapuis@gmail.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 14:09:42 +01:00
Armin Kuster
56a27c9aad python3: Security fix CVE-2016-1000110
(From OE-Core rev: 744eb37c8abf4c30a0c462580541bf195a987a56)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:30 +01:00
Armin Kuster
4b27738c5e python: Security fix CVE-2016-1000110
(From OE-Core rev: d3f0d6834416b3ee0e09f7b6a3ae09839fc16376)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:30 +01:00
Mingli Yu
529bbe2cc2 perl: fix CVE-2016-1238
Backport patch to fix CVE-2016-1238 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/cee96d52c39b1e7b36e1c62d38bcd8d86e9a41ab

(From OE-Core rev: 7d06ffcbcd0c71dc6dc9efde02bf0cd8d7c7d7e3)

(From OE-Core rev: 3f22b7ee01b4ce8592401db59c7ca4a7f3f88ede)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:30 +01:00
Joshua Lock
82641d700d multilib_header: avoid sstate checksum issues for -nativesdk recipes
Much as with -native recipes, as addressed in commit
b15730caf0, arch specific variables
like MIPSPKGSFX_ABI were affecting -nativesdk sstate checksums for
recipes like nativesdk-glibc-initial.

Disable multilib_header for nativesdk as we don't use multilibs in
this scenario.

[YOCTO #10320]

(From OE-Core rev: f1c7b4f16dc9a7e5155108641fed8b3d98c931f3)

(From OE-Core rev: 8faaa040d205ac07417255d3c4a452b43e47c956)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:30 +01:00
Armin Kuster
118f7a2247 tzdata: update to 2016g
LICENSE md5sum changed due to rewording of some text not related to the license.
see 8c143a2b65

  Changes to future time stamps

    Turkey switched from EET/EEST (+02/+03) to permanent +03,
    effective 2016-09-07.  (Thanks to Burak AYDIN.)  Use "+03" rather
    than an invented abbreviation for the new time.

    New leap second 2016-12-31 23:59:60 UTC as per IERS Bulletin C 52.
    (Thanks to Tim Parenti.)

  Changes to past time stamps

    For America/Los_Angeles, spring-forward transition times have been
    corrected from 02:00 to 02:01 in 1948, and from 02:00 to 01:00 in
    1950-1966.

    For zones using Soviet time on 1919-07-01, transitions to UT-based
    time were at 00:00 UT, not at 02:00 local time.  The affected
    zones are Europe/Kirov, Europe/Moscow, Europe/Samara, and
    Europe/Ulyanovsk.  (Thanks to Alexander Belopolsky.)

  Changes to past and future time zone abbreviations

    The Factory zone now uses the time zone abbreviation -00 instead
    of a long English-language string, as -00 is now the normal way to
    represent an undefined time zone.

    Several zones in Antarctica and the former Soviet Union, along
    with zones intended for ships at sea that cannot use POSIX TZ
    strings, now use numeric time zone abbreviations instead of
    invented or obsolete alphanumeric abbreviations.  The affected
    zones are Antarctica/Casey, Antarctica/Davis,
    Antarctica/DumontDUrville, Antarctica/Mawson, Antarctica/Rothera,
    Antarctica/Syowa, Antarctica/Troll, Antarctica/Vostok,
    Asia/Anadyr, Asia/Ashgabat, Asia/Baku, Asia/Bishkek, Asia/Chita,
    Asia/Dushanbe, Asia/Irkutsk, Asia/Kamchatka, Asia/Khandyga,
    Asia/Krasnoyarsk, Asia/Magadan, Asia/Omsk, Asia/Sakhalin,
    Asia/Samarkand, Asia/Srednekolymsk, Asia/Tashkent, Asia/Tbilisi,
    Asia/Ust-Nera, Asia/Vladivostok, Asia/Yakutsk, Asia/Yekaterinburg,
    Asia/Yerevan, Etc/GMT-14, Etc/GMT-13, Etc/GMT-12, Etc/GMT-11,
    Etc/GMT-10, Etc/GMT-9, Etc/GMT-8, Etc/GMT-7, Etc/GMT-6, Etc/GMT-5,
    Etc/GMT-4, Etc/GMT-3, Etc/GMT-2, Etc/GMT-1, Etc/GMT+1, Etc/GMT+2,
    Etc/GMT+3, Etc/GMT+4, Etc/GMT+5, Etc/GMT+6, Etc/GMT+7, Etc/GMT+8,
    Etc/GMT+9, Etc/GMT+10, Etc/GMT+11, Etc/GMT+12, Europe/Kaliningrad,
    Europe/Minsk, Europe/Samara, Europe/Volgograd, and
    Indian/Kerguelen.  For Europe/Moscow the invented abbreviation MSM
    was replaced by +05, whereas MSK and MSD were kept as they are not
    our invention and are widely used.

  Changes to zone names

    Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
    (Thanks to David Massoud.)

(From OE-Core rev: d1341aeda6d9fa5d7f13afabadae60a6fc295b87)

(From OE-Core rev: 73d5a84c3eaa32ee9c066bc80847f57d3724293c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:29 +01:00
Armin Kuster
5b24e5b39b tzcode-native: Update to 2016g
LICENSE file checksum changed due to a verbiage change.

  Changes to code

    zic no longer generates binary files containing POSIX TZ-like
    strings that disagree with the local time type after the last
    explicit transition in the data.  This fixes a bug with
    Africa/Casablanca and Africa/El_Aaiun in some year-2037 time
    stamps on the reference platform.  (Thanks to Alexander Belopolsky
    for reporting the bug and suggesting a way forward.)

    If the installed localtime and/or posixrules files are symbolic
    links, zic now keeps them symbolic links when updating them, for
    compatibility with platforms like OpenSUSE where other programs
    configure these files as symlinks.

    zic now avoids hard linking to symbolic links, avoids some
    unnecessary mkdir and stat system calls, and uses shorter file
    names internally.

    zdump has a new -i option to generate transitions in a
    more-compact but still human-readable format.  This option is
    experimental, and the output format may change in future versions.
    (Thanks to Jon Skeet for suggesting that an option was needed,
    and thanks to Tim Parenti and Chris Rovick for further comments.)

  Changes to build procedure

    An experimental distribution format is available, in addition
    to the traditional format which will continue to be distributed.
    The new format is a tarball tzdb-VERSION.tar.lz with signature
    file tzdb-VERSION.tar.lz.asc.  It unpacks to a top-level directory
    tzdb-VERSION containing the code and data of the traditional
    two-tarball format, along with extra data that may be useful.
    (Thanks to Antonio Diaz Diaz, Oscar van Vlijmen, and many others
    for comments about the experimental format.)

    The release version number is now more accurate in the usual case
    where releases are built from a Git repository.  For example, if
    23 commits and some working-file changes have been made since
    release 2016g, the version number is now something like
    '2016g-23-g50556e3-dirty' instead of the misleading '2016g'.
    Official releases uses the same version number format as before,
    e.g., '2016g'.  To support the more-accurate version number, its
    specification has moved from a line in the Makefile to a new
    source file 'version'.

    The experimental distribution contains a file to2050.tzs that
    contains what should be the output of 'zdump -i -c 2050' on
    primary zones.  If this file is available, 'make check' now checks
    that zdump generates this output.

    'make check_web' now works on Fedora-like distributions.

  Changes to documentation and commentary

    tzfile.5 now documents the new restriction on POSIX TZ-like
    strings that is now implemented by zic.

    Comments now cite URLs for some 1917-1921 Russian DST decrees.
    (Thanks to Alexander Belopolsky.)

    tz-link.htm mentions JuliaTime (thanks to Curtis Vogt) and Time4J
    (thanks to Meno Hochschild) and ThreeTen-Extra, and its
    description of Java 8 has been brought up to date (thanks to
    Stephen Colebourne).  Its description of local time on Mars has
    been updated to match current practice, and URLs have been updated
    and some obsolete ones removed.

(From OE-Core rev: 19c365b23c3b835dcb5595aba598f35bf16a6d81)

(From OE-Core rev: e125775a1acdcb183d470d4d4e1c360c918e8d0a)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:29 +01:00
Davis, Michael
a78dddb624 pulseaudio: Disable unit tests
Pulseaudio unit tests create a dependency on 'check' that is not in the recipe.
Since the unit tests are not used, they are disabled to eliminate a build race
condition.
Backported from master commit 92cfdb2ba7e04e2b70986c6569f500dd2a48b5d1

(From OE-Core rev: 3bb87439e8458cff898a4e120dd65a9e32d7197b)

Signed-off-by: Michael Davis <michael.davis@essvote.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:29 +01:00
Richard Purdie
3b3cdfd71a pigz: Update SRC_URI
Upstream have released a new tarball and removed the old one. Revert to
the Yocto Project source mirror instead, preserving the upstream version
check.

(From OE-Core rev: da3f47842a511c4622e4e66075e386e7d623a855)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-05 10:41:34 +01:00
Richard Purdie
ed4ed5313b useradd: Fix infinite build loop
http://git.openembedded.org/openembedded-core-contrib/commit/?id=642c6cf0b6a0371de476513162bd0cefa9c438b3
introduces a problem: if the USERADD_PARAM variable has trailing
whitespace, the code loops infinitely, causing build hangs.

Add a similar sed expression to $remaining to avoid this.
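
For illustration only, a minimal Python sketch of the idea behind the fix (the
real code is shell driven by sed, and these names are hypothetical): the
remainder must be stripped of whitespace so the loop's emptiness test can ever
become true.

    # Hypothetical sketch, not the bbclass shell code: process ';'-separated
    # useradd commands, stripping whitespace so a trailing-blank tail cannot
    # keep the loop alive.
    def run_cmds(params):
        remaining = params.strip()
        while remaining:
            cmd, _, rest = remaining.partition(";")
            if cmd.strip():
                print("useradd", cmd.strip())
            remaining = rest.strip()   # mirrors the added sed whitespace strip

    run_cmds("-u 100 alice; -u 101 bob;   ")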

(From OE-Core rev: d6241e4c94a0a72acfc57e96a59918c0b2146d65)

(From OE-Core rev: 0900fed3fb6eec62e9e25f6d03af934f9776d105)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-29 11:00:51 +01:00
Maxin B. John
de056577ce libarchive: respect disable-acl configuration option
Update configure.ac to properly handle --disable-acl option

[YOCTO #9668]

(From OE-Core rev: 84fe3f29f2bdaf98c9beefdfede143084fba093b)

(From OE-Core rev: 687d3b8d54aa3190bbbbc94ae2f91303fccf7c8d)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 22:18:51 +01:00
Khem Raj
2ea93e2b1d useradd.bbclass: Strip trailing ';' in cmd params
When there is more than one package in a recipe requiring useradd
services, the commands are concatenated and a ';' is inserted just after
each of the users being added by the packages. A situation arises when
this is controlled by PACKAGECONFIG: we add a ';' separator in the
USERADD_PARAM value itself for each packageconfig, since we do not know
which one will be picked, and we end up in a situation where the final
string returned from get_all_cmd_params() appears as

a; ; b; c;

The logic which runs these commands then triggers with ';' as the
separator, but in this case it fails after executing useradd 'a' because
the next command it calls is just whitespace.

This is highlighted by the systemd patch to add more users as needed
by systemd 229 components.
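
A hedged Python sketch of the cleanup (the input values are made up; only
get_all_cmd_params is named in this commit): strip trailing ';' and drop
whitespace-only segments before joining, so a string like "a; ; b; c;" cannot
be produced.

    # Hypothetical sketch of combining per-package useradd parameters.
    def get_all_cmd_params(per_package_params):
        cleaned = []
        for p in per_package_params:
            p = p.strip().rstrip(";").strip()   # strip trailing ';' and blanks
            if p:                               # skip empty/whitespace segments
                cleaned.append(p)
        return "; ".join(cleaned)

    print(get_all_cmd_params(["a;", " ", "b;", "c;"]))   # -> a; b; c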

(From OE-Core rev: e8d4356c38e3c2aacd6dc49231c73bcb7d597308)

(From OE-Core rev: 4f69a4be79e17ef009351c447694e46b5cb517c2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
2b330e5439 openssl: Security fix CVE-2016-6306
affects openssl < 1.0.1i

(From OE-Core rev: 378e58a93127cbf7c330aa1ae4df9a96681bc410)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
e08094e604 openssl: Security fix CVE-2016-6304
affects openssl < 1.0.1i

(From OE-Core rev: ae1db7aea891978e42e5205d2ffc93c16703134c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
5f97311702 openssl: Security fix CVE-2016-6303
affects openssl < 1.0.1i

(From OE-Core rev: bb812836c2c8d89da54d905b65487a9f1acd5f3c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
7026b2b05a openssl: Security fix CVE-2016-6302
affects openssl < 1.0.1i

(From OE-Core rev: 6d26328bd1d950ddc5ca1cda47da4b8f3d432a1e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
8e5e92193a openssl: Security fix CVE-2016-2182
affects openssl < 1.0.1i

(From OE-Core rev: 4be4162d5a03af6a20adc2314575e4d0baa5337a)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Martin Jansa
06ed5c5a10 useradd: use bindir_native for pseudo PATH
* useradd/userdel functions will fail for recipes which override their target prefix
  (e.g. to /opt/foo), because they will try to use pseudo from native-sysroot/opt/foo/bin/pseudo

(From OE-Core rev: 96189e71a86c0f4833e8e51d678208fd908bfe30)

(From OE-Core rev: fe20ce64de7a3d8bcd21bb1fc2cfd65563b82767)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
9fa0bc4500 openssl: Security fix CVE-2016-2181
affects openssl < 1.0.1i

(From OE-Core rev: 401f3ccd509d012c4b048eb9fcb5d0f4ab5cc7d2)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
82017f2367 openssl: Security fix CVE-2016-2180
affects openssl < 1.0.1i

(From OE-Core rev: 94b44f40fb52f642eeab1211bd5fc57ceba29f7e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
e1e5b18a5e openssl: Security fix CVE-2016-2179
affects openssl < 1.0.1i

(From OE-Core rev: 8eb58cf801a26ec17dfc67bae2881f0fc03ea49b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Armin Kuster
9995a7a144 openssl: Security fix CVE-2016-2178
affects openssl < 1.0.2i

(From OE-Core rev: 2752dba61da730ccd914b7720490754a476d1024)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:57 +01:00
Dengke Du
9fd6b093a4 cracklib: Apply patch to fix CVE-2016-6318
Fix CVE-2016-6318

Backport from cracklib upstream:

47e5dec521

(From OE-Core rev: bc7691c47f21a7d7549788fe0370c3080fc4dff5)

(From OE-Core rev: 64757265e0122314036e80aa1440c29654c052c0)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:56 +01:00
Zhixiong Chi
b7bb83a4bb wpa_supplicant: Security Advisory-CVE-2016-4477
Add CVE-2016-4477 patch for avoiding \n and \r characters in passphrase
parameters, which allows remote attackers to cause a denial of service
(daemon outage) via a crafted WPS operation.
Patches came from http://w1.fi/security/2016-1/

(From OE-Core rev: d4d4ed5f31c687b2b2b716ff0fb8ca6c7aa29853)

(From OE-Core rev: 9db41b45beae7224ba928f9267046f1b6a8288a0)

Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:56 +01:00
Zhixiong Chi
45bc60015c wpa_supplicant: Security Advisory-CVE-2016-4476
Add CVE-2016-4476 patch for avoiding \n and \r characters in passphrase
parameters, which allows remote attackers to cause a denial of service
(daemon outage) via a crafted WPS operation.
Patches came from http://w1.fi/security/2016-1/

(From OE-Core rev: ed610b68f7e19644c89d7131e34c990a02403c62)

(From OE-Core rev: 6ef620c717c43a29f51ccd298c84070552bdfe52)

Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 09:05:56 +01:00
Richard Purdie
e6c1d03d3d oeqa/buildiptables: Switch from netfilter.org to yoctoproject.org mirror
We've had some upstream mirror instability so use our own mirror for the
iptables sources to ensure this doesn't affect the test results.

(From OE-Core rev: 25f6af8895d5f5c6dcedde0a21285d63522769c8)

(From OE-Core rev: c3110b9a360571f308123b23f7c99500362b4987)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-24 09:27:51 +01:00
Alejandro Hernandez
d2ca721d31 python3: Fixes several python3 dependency problems
This patch adds the packages python3-signal, python3-enum and python3-selectors,
and also fixes python3-subprocess, which in turn fixes the installation of
python3-modules.

[YOCTO #10276]

(From OE-Core rev: 8c0f2775bcc25f460d7a0b38031690fa10a0f11d)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:39:36 +01:00
Scott Rifenbark
260ff60f93 documentation: Changes to support a 2.1.2 krogoth release.
Updated the poky.ent file to have the 2.1.2 variables.

Updated the manual revision tables to use 2.1.2 and October (a guess)

Updated the mega-manual.sed file so mega-manual links would resolve

(From yocto-docs rev: edf0777e7aa1fc2b41691791284c29d75dc94357)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:25 +01:00
Jussi Kukkonen
71291ed53e tiff: Update download URL
remotesensing.org domain has been taken over by someone unrelated.
There does not seem to be an up-to-date tiff homepage, but
osgeo.org is a reliable download site.

(From OE-Core rev: f544e1d10e9dc0f750efdb45a78ce9d5c9603070)

(From OE-Core rev: ee2b4b537233172cfc62779bc2397eac598d87e6)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Mingli Yu
5b3af2abd7 perl: fix CVE-2015-8607
Backport patch to fix CVE-2015-8607 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/0b6f93036de171c12ba95d415e264d9cf7f4e1fd

(From OE-Core rev: e2289647ace9ef96e6a7e4aae201fd9149e56678)

(From OE-Core rev: 7978432bb5bcf11e3baa78cd1a9051f472338a00)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Mingli Yu
70c4134e4b perl: fix CVE-2016-6185
Backport patch to fix CVE-2016-6185 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/08e3451d7

(From OE-Core rev: 81e550d0c23c9842b85207cdfa73bbe9102e01fb)

(From OE-Core rev: 05202a9328c92e006ff8c349cef9c059e74ac10b)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
He Zhe
90dd677528 perl: Correct perl path for ptest
Substitute /usr/local with ${bindir}

(From OE-Core rev: bc372d65bc395290e1b7132908a3b943e1b73144)

(From OE-Core rev: 74ded01feab9d0ba2b837e015d40d15a78fec544)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Ioan-Adrian Ratiu
6db9299d9e perl-native: backport libnm link fix
Pre-5.25.0 perl by default tries to link against an antiquated libnm ("new
math") library that has not been used since the early 1990s. After 2014,
another libnm appeared, for NetworkManager, causing build failures.

(From OE-Core rev: 97d2ba227044571408151f84cfe611e1a72dd816)

(From OE-Core rev: 60e0374240c2121485dc91892a693cd6ac2eae24)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Kai Kang
2561b58ac8 perl: fix CVE-2016-2381
Backport patch to fix CVE-2016-2381 from perl upstream:

http://perl5.git.perl.org/perl.git/commitdiff/ae37b791a73a9e78dedb89fb2429d2628cf58076

(From OE-Core rev: 07ca8a0131f43e9cc2f720e1cdbcb7ba7c074886)

(From OE-Core rev: 9f90044241cfe7910e707d97c966ee7d88883c26)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Bill Randle
9e14b83fa4 perl: fix several perl test failures
Several ExtUtils-MakeMaker tests fail when cross-compiled and run on
the target machine. Backport an upstream patch to fix the issues. Also
update the customized.dat hash file for the files modified by this patch
and other existing patches so the porting/customized.t test passes.

[YOCTO #8656]

(From OE-Core rev: bf1160a62d758b0148856482cb7b3f6fed63a0c2)

(From OE-Core rev: f8548ffd9e2b57ba2eb91ed9372ed4b45fe946db)

Signed-off-by: Bill Randle <william.c.randle@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Bill Randle
a8ac03fce1 perl: some perl tests require libssp
Add libssp to the list of dependencies when building with perl-ptest
as some tests require it.

[YOCTO #8656]

(From OE-Core rev: 9ea1d6474c5cd3546d1cad7c0f02a1ee8b3c76bb)

(From OE-Core rev: e0f6cba32a1682ac48196ae5ecad26275b9ce72b)

Signed-off-by: Bill Randle <william.c.randle@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Bill Randle
8b9b998258 perl: set proper perl subversion number in config files
During the upgrade from Perl 5.22.0 to 5.22.1 in commit
f4c9908eae1ae3dcc38877abe2d5fbeb46851dd4 the config.sh file was hand edited
to change the subversion numbers. However, the edit was not entirely
correct. As a result the Perl version test failed. Set the correct
version strings.

[YOCTO #8656]

(From OE-Core rev: 6e06fec1ca71979e361d8a6e35ef4ec442e71881)

(From OE-Core rev: 3f828924d2e4c2ac8423e40a693c4bca19b514f7)

Signed-off-by: Bill Randle <william.c.randle@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
76aa0c3d5d qemu: Security fix for CVE-2016-5403
affects qemu < 2.7.0-rc0

(From OE-Core rev: c53820180cdccd97de1f314078570fac1ff16052)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
11c8c8aa15 qemu: Security fix for CVE-2016-4002
affects qemu < 2.6.0

(From OE-Core rev: 4c6493e90c7102a5bfa8aba4c00b112d083e91b8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
5a8a6a753f qemu: Security fix CVE-2016-6351
affects qemu < 2.6.0

(From OE-Core rev: 72ee7cac11523a56b99282c03199b5b84326edf5)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
aa4b7b2257 qemu: Security fix CVE-2016-4439
affects qemu < 2.6.0

(From OE-Core rev: b5c787631cd35fa5b3f10391c883ae7a3717690f)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
ea62893915 qemu: Security Fix CVE-2016-3712
affects qemu < 2.6.0

(From OE-Core rev: ed78691a46a3c928297ae166e92fabdffa9e53c9)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
990b8e7919 qemu: Security Fix CVE-2016-3710
affects Qemu < 2.6.0

(From OE-Core rev: aa366a5cb5c4ed84537381d71dd5e66514c575be)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Armin Kuster
db8258864e util-linux: Security fix for CVE-2016-5011
affects util-linux < 2.28.2

(From OE-Core rev: 72a8636e3cfdfef8d95fee4af721dd7acaa89ffc)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:06 +01:00
Sona Sarmadi
58538b0703 dropbear: upgrade to 2016.72
The upgrade addresses CVE-2016-3116:

- Validate X11 forwarding input. Could allow bypass of
  authorized_keys command= restrictions,
  found by github.com/tintinweb.
  Thanks to Damien Miller for a patch. CVE-2016-3116

References:
https://matt.ucc.asn.au/dropbear/CHANGES
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3116

(From OE-Core rev: 5ebac39d1d6dcf041e05002c0b8bf18bfb38e6d3)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Armin Kuster
96fe15caf6 wget: Security fix CVE-2016-4971
affects wget < 1.18.0

(From OE-Core rev: f4ea85d9c33a18f9e18e789a3399cf2d5c4f8164)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Armin Kuster
b6e4966874 openssh: Security fix CVE-2015-8325
openssh <  7.2p2

(From OE-Core rev: 94325689e52cd86faf732d0cc01a29d193e6abfe)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Armin Kuster
a837c6be8f openssh: Security fix CVE-2016-5615
openssh < 7.3

(From OE-Core rev: 800bd6e734837a16dfe0f2f0e6591f7a1b37a593)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Armin Kuster
414aad04b6 openssh: Security fix CVE-2016-6210
affects openssh < 7.3

(From OE-Core rev: 3bc2ea285637894d158d951ed721c54c1f1af4c3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Juro Bystricky
8a7607f470 busybox: Avoid race building libbb
When building busybox, an occasional error was observed.
The error is consistently the same:

libbb/appletlib.c:164:13: error: 'NUM_APPLETS' undeclared (first use in this function)
  while (i < NUM_APPLETS) {

The reason is that the include file where NUM_APPLETS is defined is not yet generated (or is being modified)
at the time libbb/appletlib.c is compiled.
The attached patchset fixes the problem by ensuring libbb is compiled as the last directory.

[YOCTO #10116]

(From OE-Core rev: a866a05e2c7d090a77aa6e95339c93e3592703a6)

(From OE-Core rev: 6c94afadaa3e035bb58755985a9e193cae5e9b34)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Dengke Du
cce2867828 busybox: fix "sed n (flushes pattern space, terminates early)" testcase failure
This is a known busybox upstream bug. When the busybox sed sub-command 'n'
hits the file's EOF, it prints an extra character that has already been
printed, whereas GNU sed would not print it.

In the busybox source code, editors/sed.c:
------------------------------------------------------------------------
    case 'n':
            if (!G.be_quiet)
                    sed_puts(pattern_space, last_gets_char);
            if (next_line) {
                    free(pattern_space);
                    pattern_space = next_line;
                    last_gets_char = next_gets_char;
                    next_line = get_next_line(&next_gets_char, &last_puts_char, last_gets_char);
                    substituted = 0;
                    linenum++;
                    break;
            }
            /* fall through */

    /* Quit.  End of script, end of input. */
    case 'q':
            /* Exit the outer while loop */
            free(next_line);
            next_line = NULL;
            goto discard_commands;
------------------------------------------------------------------------
When reading at the end of the file, 'next_line' is null, so execution falls
through to "case 'q'" and jumps to discard_commands, which prints the old
pattern space that has already been printed.

So, to comply with GNU sed, in case 'n' I added an "else" at the end of the
second "if" ("goto again;") and sent it to the busybox upstream; the busybox
maintainer adopted it, making some small changes to the patch, which can be
seen at:

His reply:

	http://lists.busybox.net/pipermail/busybox/2016-September/084613.html

The new patch on busybox master branch:

	https://git.busybox.net/busybox/commit/?id=76d72376e0244a5cafd4880cdc623e37d86a75e4

(From OE-Core rev: 5a680c267454d7c135c4bfe4e551a780f38a5087)

(From OE-Core rev: efcd439977d111b10bd2c74ff3bc4fa30d8b394d)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Jérémy Rosen
0458275013 rpm: manually cleanup sysck
Version 5.4.1 of rpm was not properly distcleaned before release, which
causes problems when cross-compiling.

The previous version of this recipe called 'make distclean', but that would
trigger a call to ./configure, which would fail when no gcc is available
and make the whole do_configure fail further down the line.

This patch removes the files manually from the recipe.

(From OE-Core rev: 6c9f61233f64356291a0c42761a833f3b151114c)

(From OE-Core rev: 66dd4d3abb708376fbfbf37cab1ef1f2dee2049b)

Signed-off-by: Jérémy Rosen <jeremy.rosen@smile.fr>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Zhixiong Chi
6f60d91adc rpm: ensure rpm2cpio call rpm relocation code
We need to call rpmcliInit to ensure the rpm relocation code is called when
we allow rpm2cpio to be relocatable. The adjusted path used to find the macro
files was being built into the binary; this path was valid for the machine it
was built on and some of our other build machines, but invalid on some others,
and was not being properly overridden at runtime.

When we export the wrsdk, source the SDK, and then execute
'rpm2cpio xxx.rpm | cpio -t', we get the following error:
we will get the following error :
"rpm-5.4.14/rpmdb/dbconfig.c:493:
db3New: Assertion `dbOpts != ((void *)0) && *dbOpts != '\0'' failed."

(From OE-Core rev: aea2bf5c8101ac0bb27776a5614be345835c4a03)

(From OE-Core rev: b55e1de5b7371e06ec999fdf588052b4babbc3d2)

Signed-off-by: Zhixiong Chi <Zhixiong.Chi@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Robert Yang
642890f5d0 rpm: make --nosignature work
OE-core uses rpm's --nosignature, but it never worked:
self._invoke_smart('config --set rpm-check-signatures=false')

Now fix it with:
* Define SUPPORT_NOSIGNATURES to 1 in system.h
* !QVA_ISSET(qva->qva_flags, SIGNATURE) -> QVA_ISSET(qva->qva_flags, SIGNATURE),
  otherwise, using --nosignature would still read the database and verify
  signatures, which is not expected.

This can fix some race issues: for example, when more than one process
queries an rpm file with "rpm -qp --nosignature", they may hang because
of races (the processes try to take an RW/RD lock on the database, but
they shouldn't read the database at all, since -qp and --nosignature
are used).

(From OE-Core rev: 038c09d6ab9581030efdc16aa1b96972970eeaab)

(From OE-Core rev: 6a09190c7b7b316c9988b7e5e279bd124f331b17)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Markus Lehtonen
5368cfee9e python-smartpm: use md5 as the digest for rpm_sys channel
Use an md5 sum instead of mtime as the "digest" method for the rpm_sys channel.
The digest is used to determine if the channel has been updated. It was
found that mtime is not a reliable digest: on some systems the mtime
of the rpm db does not get updated after every transaction if transactions
(smart install / remove commands) are fired in quick succession. As a
consequence, the smartpm cache and the rpm db get out of sync.

[YOCTO #10244]
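
A minimal sketch of the idea (the database path and function name are
assumptions, not smartpm's actual code): hash the rpm database contents so the
digest changes on every transaction, unlike mtime, which may not advance
between quick back-to-back transactions.

    import hashlib

    # Hypothetical sketch: content hash of the rpm database as channel digest.
    def channel_digest(db_path="/var/lib/rpm/Packages"):
        h = hashlib.md5()
        with open(db_path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()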

(From OE-Core rev: e7267b4e78461e71a1175f93e2eb5e90272c2b47)

(From OE-Core rev: c126a48a38e4f9c57f48b9ef77537cfd98901fb3)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Mariano Lopez
e588da43b0 python-smartpm_git.bb: Add patch for debugging random errors
This adds a patch to debug random errors seen on the
autobuilders; it won't solve the errors, but it will give us
a better idea of what is happening.

[YOCTO #8383]

(From OE-Core rev: c52a7e910a3a52a7455a2409d9ade449bbbd66d4)

(From OE-Core rev: 8d46dc71cead3779f00537e0cace577767304f75)

Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
mingli.yu@windriver.com
c32c7522e5 python-smartpm: add support to check signatures
RPMv5 has removed support for _RPMVSF_NOSIGNATURES; the flag can be
replaced with the flag set:
"RPMVSF_NODSAHEADER|RPMVSF_NORSAHEADER|RPMVSF_NODSA|RPMVSF_NORSA"

(From OE-Core rev: 5c0c1b8a64643ad7130b17b5dfce9cecffa6d962)

(From OE-Core rev: 8edaf4e9592877a4cb48c2f5c896c11a129a5404)

Signed-off-by: Haiqing Bai <Haiqing.Bai@windriver.com>
Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Richard Purdie
62696defc0 python-smartpm: Avoid locale issue with bitbake python3
(From OE-Core rev: fa2ca7660e8f3279736624aa2493b4ca952ae466)

(From OE-Core rev: 6c756fe2a61843050debd06d7194e6441c26cb20)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Khem Raj
deca0d3736 xserver-xf86-config: pre-load int10 and exa modules
musl doesn't handle the lazy loading that xorg uses, therefore
load the needed modules explicitly.

[YOCTO #10169]

(From OE-Core rev: e279c9a30f0df400b06a47a487967a734854714b)

(From OE-Core rev: 13fd49fd719d7e59ea347241934ccb991264f14f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Alexander Kanavin
a220c3a1a9 arch-mips.inc: Disable QEMU usermode usage when building with n32 ABI
QEMU usermode doesn't support n32 binaries, erroring with "Invalid
ELF image for this architecture".

(From OE-Core rev: 66aa39a959bd41f7063fe64a9225eb9fd6c3293b)

(From OE-Core rev: 013dfa3e9f14f50a3d1efb5e98a45ce1e579abcf)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Alexander Kanavin
ef6ff739c7 gobject-introspection.bbclass: disable introspection for -native and -nativesdk recipes
It is not necessary for those targets, adds to the build time, and pulls
in the unneeded qemu-native dependency.

(From OE-Core rev: be18364edd5cd2c664f68120063a1e147563faab)

(From OE-Core rev: 4dbe39ee56ff888190b1a110496bc0fb6c400d9a)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Thomas Witt
d9369d1ea0 cmake.bbclass: call cmake with a relative path
CMake wants a relative path for CMAKE_INSTALL_*DIR; an absolute path
breaks cross-compilation. This fact is documented in the following
ticket: https://cmake.org/Bug/view.php?id=14367

$sysconfdir and $localstatedir are not relative to $prefix, so they are
still set as absolute paths. With this change, the ${PROJECT}Targets.cmake
files that are generated by CMake's "export" function will contain relative
paths instead of absolute ones.

(From OE-Core rev: c03b32bd71dbe04f2f239556fea0b53215e403d7)

(From OE-Core rev: 3d37394f8f279d127db85784cf01056d27c19b36)

Signed-off-by: Thomas Witt <Thomas.Witt@bmw.de>
Signed-off-by: Clemens Lang <clemens.lang@bmw-carit.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:05 +01:00
Maxin B. John
17e4586d6e useradd_base: avoid unintended expansion for useradd parameters
Currently, a dollar sign in useradd parameters requires three prepended
backslash characters to avoid unintended expansion; before Krogoth it
used to require just one. Restore that behaviour.

[YOCTO #10062]

(From OE-Core rev: 9e43a73c7ad576666d53c8c9e0283bc6bb9087a8)

(From OE-Core rev: 42a0d59d5923fb43882d8e60f6973b45b263e262)

Signed-off-by: Niko Mauno <niko.mauno@vaisala.com>
Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Sona Sarmadi
6175bd0930 curl: security fix for CVE-2016-7141
Affected versions: libcurl 7.19.6 to and including 7.50.1
Not affected versions: libcurl >= 7.50.2

Reference to upstream patch:
https://curl.haxx.se/CVE-2016-7141.patch

(From OE-Core rev: fb8f291d9ea2ebc011403f72cb91af372a795091)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Sona Sarmadi
016df260e5 sudo: CVE-2015-8239
Fixes race condition when checking digests in sudoers.

Reference:
http://seclists.org/oss-sec/2015/q4/327

Reference to upstream fixes:
https://www.sudo.ws/repos/sudo/raw-rev/397722cdd7ec
https://www.sudo.ws/repos/sudo/raw-rev/0cd3cc8fa195

(From OE-Core rev: 3564999bd987b08188e2e0eead59a49bebbc5e32)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Reinette Chatre
5d781f41ff binutils: advance SRCREV to obtain versioned symbols
Libraries needing versioned symbols, for example mysql, are not
supported by the current version of binutils in krogoth.

When the mysql library from MariaDB is compiled with the current
version of binutils, we encounter errors at runtime, as seen
below where php linked against mysql tries to run:

php: relocation error: php: symbol mysql_server_init, version
 libmysqlclient_16 not defined in file libmysqlclient.so.18
 with link time reference

The above error appears even though the symbols exist in the library:

   245: 000000000001ecc0     0 FUNC    GLOBAL DEFAULT   13 mysql_server_init@@libmysqlclient_16
   279: 000000000001ecc0   297 FUNC    GLOBAL DEFAULT   13 mysql_server_init@@libmysqlclient_18

The problem results from a bug in binutils that has already been
fixed upstream as well as on the 2.26 and 2.27 branches. We advance
the SRCREV on the 2.26 branch used in the krogoth release to pick up the fix.

Details about bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19698

(From OE-Core rev: 2d35281de8eeeb23343478aa2c87ea0f2aa7ba06)

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Markus Lehtonen
d3ee5489c9 base.bbclass: wipe ${S} before unpacking source
Make sure that we have a pristine source tree after do_unpack.

[YOCTO #9064]
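
A hedged sketch of what "wiping" amounts to (not the actual base.bbclass code;
names are hypothetical): remove and recreate ${S} so nothing stale survives
into do_unpack.

    import os
    import shutil

    # Hypothetical sketch: guarantee a pristine source tree before unpacking.
    def wipe_source_dir(s_dir):
        if os.path.exists(s_dir):
            shutil.rmtree(s_dir)    # remove leftovers from earlier unpacks
        os.makedirs(s_dir)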

(From OE-Core rev: eccae514b71394ffaed8fc45dea7942152a334a1)

(From OE-Core rev: 696dd4607766a07fcdbb7e6bfc07f3b815bc9d5c)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Jeremy Puhlman
9a1694e242 bitbake.conf: set READELF for cross compilation
In the case of using an external toolchain that supports multilib
compilation with a single binary, TARGET_PREFIX is the same for both the main
and multilib ABIs. Without READELF exported, python3 falls back to
either ${BUILD_SYS}-readelf or the plain host readelf. Exporting the cross
readelf fixes the build issue.

checking LDLIBRARY... libpython$(LDVERSION).so
checking for i586-montavistamllib32-linux-ranlib...
x86_64-montavista-linux-ranlib
checking for i586-montavistamllib32-linux-ar...
x86_64-montavista-linux-ar
checking for i586-montavistamllib32-linux-readelf... no
checking for readelf... readelf
configure: WARNING: using cross tools not prefixed with host triplet

(From OE-Core rev: 3442ee423813d547be7899a25ea31efe719e662f)

(From OE-Core rev: e24b5fe3f04cbb5953ec82f9e4d040f6600012b3)

Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Jeremy Puhlman
3cc3ff6244 Fix random python backtrace in multilib handling code.
newval is not defined in all cases. Set it to None and check whether it has been set.

  File "/local/foo/builds/x86/layers/openembedded-core/meta/classes/multilib_global.bbclass",
  line 90, in preferred_ml_updates(d=<bb.data_smart.DataSmart object at 0xf6fd528c>):
        if not d.getVar(newname, False):
    >       d.setVar(newname, localdata.expand(newval))
        # Avoid future variable key expansion
UnboundLocalError: local variable 'newval' referenced before assignment
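
The fix, sketched minimally (the condition and value computation are
hypothetical stand-ins for the real multilib branches; only 'newval' comes
from the traceback):

    # Sketch of the fix: initialize newval so it exists on every code path.
    def preferred_update(condition, compute):
        newval = None                 # previously undefined on some paths
        if condition:
            newval = compute()
        if newval is not None:        # guard added together with the default
            return newval
        return None

    print(preferred_update(False, lambda: "value"))  # None, no UnboundLocalError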

(From OE-Core rev: 25ebd3bbc1f9f4b1b6147d98dd43690c3bf03ee7)

(From OE-Core rev: 81e6c67db85b5e4864aa11f6504a8bef59be8609)

Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Ross Burton
16f046f38f cml1: fix tasks after default [dirs] changed
These tasks relied upon [dirs] being ${B} by default.  As the functions are not
simple, add back [dirs] so they work again.

[ YOCTO #10027 ]

(From OE-Core rev: 614d976ee97d6386c37afb54add5b83741ca401e)

(From OE-Core rev: e29faba0b27ee6237dcd022d9519eddc7cdcc441)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Jacob Kroon
7639be6851 bitbake.conf/toolchain-scripts.bbclass: Remove debug prefix mappings in SDK
CFLAGS/CXXFLAGS in the SDK environment script adds debug-prefix mappings
that include staging area/work directories. Remove them since the SDK
shouldn't be aware of them.

(From OE-Core rev: 7918e73e9c5fe8c8c1c1d341eaa42f2f7d3ddb69)

(From OE-Core rev: e52b98077e94e7071e70de28ed95092aad74d3ac)

Signed-off-by: Jacob Kroon <jacob.kroon@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Khem Raj
a10c9109e2 gdb: Cache gnu gettext config vars for musl builds
intl is used in gdb as well, and its configure is run during
do_compile. So we need to add these cached variables to the
extra oe_make arguments.

(From OE-Core rev: 60de4d6c717c6a5131b02de29234d53a6ca1b993)

(From OE-Core rev: e33aaed01b1b26d8ea22fc87afe436a93b64a790)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Alejandro Hernandez
6ac72e8be2 initramfs-live-boot: Make sure we kill udev before switching root when live booting
When live booting, we need to make sure the running udev processes are killed
to avoid unexpected behavior. We do this just before switching root;
once we do, a new udev process will be spawned from init and will take care
of whatever work was still missing.

[YOCTO #9520]

(From OE-Core rev: e88d9e56952414e6214804f9b450c7106d04318d)

(From OE-Core rev: e5190cdcf4efe5e80967bded13ef8e530811b0ec)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Jackie Huang
c594ff73ab e2fsprogs: Fix missing check for permission denied.
If the path to "ROOT_SYSCONFDIR /mke2fs.conf" has a permission denied problem,
then the get_dirlist() call will return EACCES. But the code in profile_init
will treat that as a fatal error and all executions will fail with:
      Couldn't init profile successfully (error: 13).

But the problem should not really be visible for the target package, as the path
then will be "/etc/mke2fs.conf", and it is not likely that a user has no
permission to read /etc.
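
The pattern in a hedged Python sketch (profile_init itself is C code; this
only illustrates treating permission-denied on an optional config file as
non-fatal):

    import errno

    # Hypothetical sketch: an unreadable optional config should not be fatal.
    def read_optional_conf(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            if e.errno in (errno.EACCES, errno.ENOENT):
                return ""     # skip silently: the file is optional
            raise             # anything else stays a real error

    read_optional_conf("/etc/mke2fs.conf")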

(From OE-Core rev: 9d7c32a88e0670a09e5e1097ff8bca58e9a7943f)

Fixup bb for Krogoth.

(From OE-Core rev: 49086f40c8068ed504d301ef8f56528fd813e10f)

Signed-off-by: Jian Liu <jian.liu@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Yi Zhao
b9e99832b9 tiff: Security fix CVE-2016-5323
CVE-2016-5323 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5323
http://bugzilla.maptools.org/show_bug.cgi?id=2559

Patch from:
2f79856097

(From OE-Core rev: 4ad1220e0a7f9ca9096860f4f9ae7017b36e29e4)

(From OE-Core rev: e066ba81ac7aecd3d9dfa1cb5d89acb6dc073e8f)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Yi Zhao
440e3cd2c2 tiff: Security fix CVE-2016-5321
CVE-2016-5321 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5321
http://bugzilla.maptools.org/show_bug.cgi?id=2558

Patch from:
d9783e4a14

(From OE-Core rev: 4a167cfb6ad79bbe2a2ff7f7b43c4a162ca42a4d)

(From OE-Core rev: ff5d0abf31394d332c5db06a2d3ef337b1f8db9d)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Yi Zhao
977dd47c69 tiff: Security fix CVE-2016-3186
CVE-2016-3186 libtiff: buffer overflow in the readextension function in
gif2tiff.c allows remote attackers to cause a denial of service via a
crafted GIF file

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3186
https://bugzilla.redhat.com/show_bug.cgi?id=1319503

Patch from:
https://bugzilla.redhat.com/attachment.cgi?id=1144235&action=diff

(From OE-Core rev: 3d818fc862b1d85252443fefa2222262542a10ae)

(From OE-Core rev: bebb2683ddeda2bef25eca3077c366c93c0a81b4)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Armin Kuster
2b029e56f9 tiff: Security fix CVE-2015-8784
CVE-2015-8784 libtiff: out-of-bound write in NeXTDecode()

External Reference:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-8784

(From OE-Core rev: 36097da9679ab2ce3c4044cd8ed64e5577e3f63e)

(From OE-Core rev: a1839427c5626367beb6bf59d900904dedb6bf03)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Armin Kuster
b6f4d24fbc tiff: Security fix CVE-2015-8781
CVE-2015-8781 libtiff: out-of-bounds writes for invalid images

External Reference:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-8781

(From OE-Core rev: 9e97ff5582fab9f157ecd970c7c3559265210131)

(From OE-Core rev: 18d8f81c16cbf165183f5deda71fef0763386a21)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Richard Purdie
ab4f42608a busybox: Add parallel make fix
We're seeing regular parallel make failures in applet headers in busybox.
This adds a patch to try and avoid the issue, building upon a fix already
backported from upstream. The patch has been sent to upstream.

[YOCTO #10116]

(From OE-Core rev: 199cef0e8a50b20d0ee6fefd1d4cf3372eba7728)

(From OE-Core rev: e3cca9da7e7a7f10db708f39097e1d8700f8ba2d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:04 +01:00
Richard Purdie
23aabca217 busybox: Backport makefile fix from upstream
This at least partially addresses one of the build races we've seen
on the autobuilder in busybox. It's a straightforward backport from
upstream.

(From OE-Core rev: 8599059164ad0eb908fd1177044af8bc9a9881e4)

(From OE-Core rev: 542a182af6503ac5d5ddea4bf307ea38ddaeeb50)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Stefan Agner
d9d046c28a busybox: Fix busybox-init on non-tty consoles
When using non-tty consoles (e.g. VirtIO console /dev/hvc0) the
current init system fails with:
process '/sbin/getty 115200 hvc0' (pid 545) exited. Scheduling for restart.
can't open /dev/ttyhvc0: No such file or directory

The first field needs to be a valid device. The BusyBox inittab example
explains as follows:
"<id>: WARNING: This field has a non-traditional meaning for BusyBox init!

The id field is used by BusyBox init to specify the controlling tty for
the specified process to run on.  The contents of this field are
appended to "/dev/" and used as-is."

(From OE-Core rev: a53393082f331a613cb3eb973a07bab22cefcde8)

(From OE-Core rev: 3c5097574e24a3923b093d8ef92506411dc8df08)

Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Henry Bruce
b6bb27c4c9 npm: npm.bbclass now adds nodejs to RDEPENDS
We expect that any package that uses the npm bbclass
will have a runtime dependency on node.js

(From OE-Core rev: 769fae0b74d7c7992aa593907f446fab98ef5128)

(From OE-Core rev: a2d9d36818bbc7773ed4295c286fc53fe7c31345)

Signed-off-by: Henry Bruce <henry.bruce@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Tanu Kaskinen
0271b3ab00 pulseaudio: fix crash when disconnecting bluetooth devices
[YOCTO #10018]

Add a patch that makes the bluetooth code create the HSP/HFP card
profile only once. The old behaviour of creating the profile twice
was not compatible with 0001-card-add-pa_card_profile.ports.patch.

This fix is not needed for master, because master no longer has
0001-card-add-pa_card_profile.ports.patch.

(From OE-Core rev: e416c32f6059a5d4cb47809186c2feaaef7ff4ba)

Signed-off-by: Tanu Kaskinen <tanuk@iki.fi>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Stephano Cetola
4cbb398d85 systemd: allow adding users as a rootfs postprocess cmd
Adding all the users / groups for systemd is only available for read-only
file systems. This change allows users to add them to read / write file
systems as well by specifying:

ROOTFS_POSTPROCESS_COMMAND += "systemd_create_users"

Also, add "--shell /sbin/nologin" to each user's add params.

[ YOCTO #9497 ]

(From OE-Core rev: 98a4c642444a524f547f5d978a28814d20c12354)

(From OE-Core rev: 9e040927957dd06b5d1a7974a355e21a8e36ade4)

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 79be110c1f)
Signed-off-by: Kristian Amlie <kristian.amlie@mender.io>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Khem Raj
66a4366e8f systemd: Create missing sysusers offline
Some system users which are needed by systemd components were missing;
create these users, gated by the relevant PACKAGECONFIG options.

(From OE-Core rev: d18957925c6c073b7194e3a233efea24e436f74e)

(From OE-Core rev: 901a6dbe420eb3f76503871ca3ccfe544b9b3b57)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fd36a447d0)
Signed-off-by: Kristian Amlie <kristian.amlie@mender.io>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Jonathan Liu
b64fa0af89 meta/classes: fix bb.build.FuncFailed typos
(From OE-Core rev: 32fb246f7288199c74794f7736da4b32a08a756f)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Khem Raj
8f300880c4 python{3}-numpy: Predefine of sizeof off_t on mips/mipsel/ppc
Fixes below errors as seen on musl

| In file included from numpy/core/include/numpy/ndarraytypes.h:4:0,
|                  from numpy/core/include/numpy/ndarrayobject.h:18,
|                  from numpy/core/include/numpy/arrayobject.h:4,
|                  from numpy/core/src/multiarray/compiled_base.c:7:
| numpy/core/include/numpy/npy_common.h:167:10: error: #error Unsupported size for type off_t
|          #error Unsupported size for type off_t
|           ^~~~~
| In file included from numpy/core/include/numpy/ndarraytypes.h:4:0,
|                  from numpy/core/include/numpy/ndarrayobject.h:18,
|                  from numpy/core/include/numpy/arrayobject.h:4,
|                  from numpy/core/src/multiarray/compiled_base.c:7:
| numpy/core/include/numpy/npy_common.h:167:10: error: #error Unsupported size for type off_t
|          #error Unsupported size for type off_t
|           ^~~~~

(From OE-Core rev: 6d8cc72e7f83b9819ff1bbdb72ca61f98de403a4)

(From OE-Core rev: 0697278232521db7f640f5d32ff3b707d2aaea6e)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Pascal Bach
28344dfed4 gcc, qemuppc: Explicitly disable forcing SPE flags for 4.9
This ports the missing changes from commit: 7a51776a830167e43cbd185505f62f328704e271
from 5.3 to 4.9 so that qemuppc can be compiled.

(From OE-Core rev: e625a25c473948d8c97eae5be9914f608f6a95bf)

Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 15:27:03 +01:00
Ross Burton
8c69f7d56c bitbake: lib/bb/tests/fetch: remove URL that doesn't exist anymore
The CUPS ipptool URL we were checking now redirects to github where the tarball
isn't present, so remove it from the test suite.

(Bitbake rev: e64564bcaa7331f505baa5209fef1f50dfda1469)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-13 16:19:46 +01:00
Maxin B. John
aad7166704 curl: security fix for CVE-2016-5421
Affected versions: libcurl 7.32.0 to and including 7.50.0

(From OE-Core rev: 2a9f4823483b6f5decc6d504858f06f66ab9e06c)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:29 +01:00
Maxin B. John
6980d4fa2f curl: security fix for CVE-2016-5420
Affected versions: libcurl 7.1 to and including 7.50.0

(From OE-Core rev: cc567d8fb9eca630cd21d40ece99babcc5b7d045)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:29 +01:00
Maxin B. John
094a36886f curl: security fix for CVE-2016-5419
Affected versions: libcurl 7.1 to and including 7.50.0

(From OE-Core rev: 0b56a2f6174a44495f8a58dc0864c161ffd37b80)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:28 +01:00
Alexander Egorenkov
7e11efef59 bitbake: toaster: Fix adding of bitbake variables containing ':'
This fix is a backport from toaster-next.

Krogoth Toaster is unable to add a variable containing ':'
and fails with the following error message:

error on request:
too many values to unpack
Traceback (most recent call last):
 File "bitbake/lib/toaster/toastergui/views.py", line 2171, in
xhr_configvaredit
  variable, value = t.split(":")
ValueError: too many values to unpack.

[YOCTO #10170]
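
A hedged sketch of the underlying Python issue and the usual remedy (the
example value is made up, and whether the backport uses split or partition is
not shown here): limit the split to the first ':' so values containing ':'
survive intact.

    t = "SSTATE_MIRRORS:file://.* http://example.com/sstate/PATH"
    # variable, value = t.split(":")     # ValueError: too many values to unpack
    variable, value = t.split(":", 1)    # split only on the first ':'
    print(variable, "->", value)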

(Bitbake rev: bee144eeed6c08ec2829533e82f94405058ce453)

Signed-off-by: Alexander Egorenkov <Alexander.Egorenkov@vector.com>
Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 20:37:44 +01:00
Derek Straka
8854de1ffd python3: update manifest RDEPENDS for importlib and compression packages
zipfile.py has dependencies on importlib, threading, and shell
importlib has a dependency on lang
operator and contextlib added to the lang package instead of falling into misc

(From OE-Core rev: 8bbfe9bd229e3f795577eb5df1cd5104651e2ba2)

Signed-off-by: Derek Straka <derek@asterius.io>
(cherry picked from commit 769ad8e114fda1fe112d3747408edbeb7b066a85)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 20:37:44 +01:00
Fabio Berton
cefa06d985 python-3.5-manifest: Add argparse module
Adding the argparse module from Python's standard library. This allows using
argparse without installing all python-misc modules. For compatibility,
add python3-argparse as an RDEPENDS of python3-misc.

(From OE-Core rev: 6acbda5ac9c4edbcabbe11227db1655fbc8d904c)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit f2b96001e074d26f5eb8711c2217a695fb02de4c)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 20:37:44 +01:00
Fabio Berton
ecb5183b9a python-3.5-manifest: Rename Queue module to queue
The Queue module has been renamed to queue in Python 3.

(From OE-Core rev: 9681e957fbf3370a6905b54e42dac17fa976db70)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit e19a430da2ef60b2c6cf6a67210ec1a7b292c8ca)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 20:37:44 +01:00
Fabio Berton
2bb93e3567 python3-native: Extend python3-native rproviders
Add the following modules to RPROVIDES:

  - python3-email-native
  - python3-io-native
  - python3-json-native
  - python3-lang-native
  - python3-misc-native
  - python3-netclient-native
  - python3-netserver-native
  - python3-numbers-native
  - python3-pkgutil-native
  - python3-pprint-native
  - python3-re-native
  - python3-shell-native
  - python3-subprocess-native
  - python3-threading-native
  - python3-unittest-native

(From OE-Core rev: 1b807313f3e2d841922189bc7777a6d10bc83dcb)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 1a62ffd108e6aa7b7e5d0a81819550e8a7afeb60)
2016-08-31 20:37:43 +01:00
Fabio Berton
2a17af9652 python3-native: Change code style for rprovides
Use a more readable code style for RPROVIDES and sort recipes
alphabetically.

(From OE-Core rev: 344bb143ce73cd6ea70286bcdbc8aa702391a3e5)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 21130e2afc4762ad84c86e377146b99224d16032)
2016-08-31 20:37:43 +01:00
Fabio Berton
3831cdc1b1 yocto-uninative: Update to 1.0.1 tarball
The 1.0.1 uninative tarball includes the change for glibc to use the
host locale data, which is required for Python 3 to work properly.

(From OE-Core rev: 4ac90c58032e1097abefc14bfc5029db0a893aa9)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 20:37:43 +01:00
Tom Hochstein
e01993c3d5 mesa-demos: Fix OpenGL ES configurability
The most recent patch 0011-drop-demos-dependant-on-obsolete-MESA_screen_surface.patch
incorrectly removed the configuration constructs that allowed the
package to be configured without OpenGL ES support.

(From OE-Core rev: 824c1206ace9a0d8183c8eeb5b7c3cb67935c191)

Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-31 13:04:05 +01:00
Khem Raj
7d70e67479 lzop: Fix build with gcc-6
(From OE-Core rev: 384ca1c459d28ed2e1b4290e05e88cf4aef2dc6a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:15 +01:00
Khem Raj
fc75bea445 musl: Fix mips regressions in 1.1.15
Bobby Bingham (2):
      remove or1k version of sem.h
      remove obsolete gitignore rules

Rich Felker (4):
      remove obsolete and unused gethostbyaddr implementation
      fix asctime day/month names not to vary by locale
      fix regression in tcsetattr on all mips archs
      revert unrelated change that slipped into last commit

(From OE-Core rev: bd7b23c63a9beb6118bbdfe1dd1564e2735c0159)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:54:34 +01:00
Ross Burton
59ef3c315b glibc: use the host locale archive in nativesdk builds
The nativesdk libc when used by buildtools has a hard requirement on supporting
a UTF-8 locale because Python 3 needs a UTF-8 locale.  However we currently only
ship the C locale, which means that when Python attempts to look up the user's
locale (for example, en_NZ.UTF-8) in the locale archive under its prefix, it
fails and falls back to C.  This then results in Python using ASCII instead of
UTF-8 for file encoding, and bitbake breaks.

The obvious solution would be to ship all locales, but this would add
approximately 250MB to the size of the buildtools tarball (which is currently
around 30MB).  Generating a binary locale archive reduces this down to 100MB,
but this is still a drastic increase in footprint.  If we ship a subset of
locales in the tarball then there will be users whose locale isn't in the
tarball, and they'll have to change their locale to an "approved" one, which
isn't the best of messages to send to new users.

The alternative is to tell the nativesdk libc that the locale archive isn't
under its own prefix but is in fact at /usr/lib/locale/locale-archive, so the
buildtools libc uses the host locale archive. The locale archive format appears
to be at least fairly stable: our glibc 2.24 can read the locale archive
generated by glibc 2.17 (CentOS 7).

[ YOCTO #9775 ]

(From OE-Core rev: d36a2314a8b25a37a8e4ea0b33ce5197e44fedeb)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:29:49 +01:00
bavery
eef3fb99d0 base-files: restrict resize to run on serial consoles only in profile
We don't need/want to run resize on an ssh connection. It's useless and
it breaks the Eclipse SSH debug connection. So, we added a check.

[YOCTO #9362]

(From OE-Core rev: c97a232272b18bbc2a102fd3ab305b862bb3b954)

Signed-off-by: bavery <brian.avery@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:29:49 +01:00
Scott Rifenbark
12eb72ee3b documentation: Updated manual revision list tables for August
The date for the 2.1.1 release has pushed into August now.
Updated all the manual's release dates in the revision history
tables as needed.

(From yocto-docs rev: ccd7930ca3fdeec87003c2d3861ebd491c7c6d18)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
a8377d1073 ref-manual: Fixed typo in the "Shared State" section.
Fixes [YOCTO #9823]

The do_deploy[sstate-inputdirs] flag had been written incorrectly as
do_deploy[sstate-inputsdirs].
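
For reference, a sketch of the corrected flag in context, modeled on
deploy-style tasks (the values shown are illustrative):

  do_deploy[sstate-inputdirs]  = "${DEPLOYDIR}"
  do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"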

(From yocto-docs rev: e4e6cde59b81ec66af4d01b41d89f5ab9a10571a)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
52e13fb007 ref-manual: Review edits to the PR variable in glossary.
Fixes [YOCTO #9843]

Some minor rewordings and removal of a stray comma.

(From yocto-docs rev: 9983619766bdb9d1a50948e219617aeef3170524)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
9f0eaae229 ref-manual: Updated the RDEPENDS variable description in the glossary
Fixes [YOCTO #9380]

Updated the shlibdeps description for this variable to cover
automatically added version restrictions.
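
A hedged illustration of such a version restriction in recipe syntax
(hypothetical package name):

  RDEPENDS_${PN} = "libfoo (>= 1.2)"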

(From yocto-docs rev: 40f3f7b483c8c2f3faae9161c62084d1d691bf32)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
cf181cdb52 ref-manual: Updated the PR variable description.
Fixes [YOCTO #9843]

The variable description was very brief.  These changes added some
substance to the description and how the OpenEmbedded build system
uses the variable.

(From yocto-docs rev: 7603eee7f3d31edaf5a01d3e0deedb8dc53a66b4)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
6f9ef13d0a dev-manual: Review edits to the package installation section
Fixes [YOCTO #9672]

A couple of typos here needed fixing.  Also, a missing statement in
the JSON example.

(From yocto-docs rev: b35a68262574c4b562b198fd3d3ef710f3b90190)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
4c36d5209e documentation: Updated manual revision tables for July 2016 date
YP release 2.1.1 moved from the June timeframe to July.  Updated
the manual revision tables.

(From yocto-docs rev: 09f228e7228146685af56dc341ca8fbd81e63282)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
e177680fa0 ref-manual: Updated the flag descriptions for shared state details
Fixes [YOCTO #9823]

I added more details to the explanations of how shared state is
implemented.  Included a bulleted list of the various statements
of code to help explain flags and settings.

(From yocto-docs rev: 2b9db6faa0109b9001c07516c874e9935bf743e8)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
365f85179d dev-manual: Edits to the package feed creation section.
Updated the introduction of the trio of variables used for package
feed naming in the "Build Considerations" section.

Fixes [YOCTO #1882]

(From yocto-docs rev: ec0003799935ad9981905a1f8cb72a1748967ca0)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:19 +01:00
Scott Rifenbark
8ffab431a2 ref-manual: Updated the DISTRO_FEATURES description of Bluez5
Edits to explain that by default, DISTRO_FEATURES backfills bluetooth
support with Bluez5. If the user wants to use the Bluez4 feature
instead, they need to add bluez5 to DISTRO_FEATURES_BACKFILL_CONSIDERED.
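
A minimal configuration sketch of what this means in practice (distro
or local configuration; the bluez4 support itself has to come from a
layer that provides it):

  DISTRO_FEATURES_BACKFILL_CONSIDERED = "bluez5"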

(From yocto-docs rev: f46331bf0de77941114ffb223f979987d281ed57)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
ebed0191f9 ref-manual: Updated SSTATE_MIRRORS examples to match reality
Fixes [YOCTO #9773]

Updated two examples that set SSTATE_MIRRORS so that they match the
changes made by YOCTO #3220.
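
For orientation, a hedged example of the post-change form (hypothetical
URLs; PATH is expanded by the fetcher):

  SSTATE_MIRRORS ?= "\
  file://.* http://sstate.example.com/PATH \n \
  file://.* file:///some/local/dir/sstate/PATH"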

(From yocto-docs rev: 6236e4dee686f1a6436d2ad0fc46441c802b3eb7)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
a779b36e9c dev-manual: Updated Runtime Testing for Package Installation
Fixes [YOCTO #9672]

Updated the "Exporting Tests" section to reflect the proper
local.conf settings.

Added a new section "Installing Packages in the DUT Without the
Package Manager" that describes how to use a JSON file to accomplish
package installation on a Device Under Test without a package
manager.

(From yocto-docs rev: d46f2449d01913b794572a9cf8de07d812616d2e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
ea438b421d dev-manual: Updated the method to set SimpleHTTPServer for testing
Fixes [YOCTO #1882]

Re-did the steps to set this server up.

(From yocto-docs rev: dd51855e97a9fda308564a9e000c2b8ed333e23e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
45f2a20349 ref-manual: Fixed a typo for installing "python3-git"
Fixes [YOCTO #9712]

(From yocto-docs rev: 533412bc482f09ace57345733cb1f9494bb4b34c)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
853db300f5 dev-manual: Applied edits to the package feed section.
Fixes [YOCTO #1882]

(From yocto-docs rev: ffa3d03fe20f8ba38d1ac508aa208415baa9caf2)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
f07fedb2fb ref-manual: Updated the UPSTREAM_CHECK_* variables.
Fixes [YOCTO #9671]

(From yocto-docs rev: 8bd3f4d487bdc2929a42563eb376dc28fc33358b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
1969871269 ref-manual: Review edits to the UPSTREAM_CHECK_* variables.
Applied some review comments.

Fixes [YOCTO #9671]

(From yocto-docs rev: f1630b792063cbfe1cae4994d63ff7031d4dfabf)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
a8279122b9 dev-manual: Updated Host Server Machine Setup for package feeds
Removed the extra server instructions and just left the ones
for SimpleHTTPServer.

Fixes [YOCTO #1882]

(From yocto-docs rev: 50a1323a44c645426fb4b77f07d4e3280931a9ac)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
9adc11d4ac ref-manual: Added note about installing Git-Python package
The buildhistory-diff tool requires the Git-Python package.
I added a note indicating this.

Fixes [YOCTO #9712]

(From yocto-docs rev: 61814503f5656b241646d43c208c6bcaf530a282)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:18 +01:00
Scott Rifenbark
b5a67a2f7b ref-manual: Updates to the UPSTREAM_CHECK_* variables
I applied some grammar edits and re-wordings as directed by
technical reviews.

Fixes [YOCTO #9671]

(From yocto-docs rev: b494b67aa5694967af70854c1c780c42f7d378af)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
bd47f3f3e6 dev-manual: Review edits applied to the package feed build considerations.
(From yocto-docs rev: 817e64500e39a20682c618a54fc45db965e85232)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
1d7983106c ref-manual: Edits to UPSTREAM_CHECK_* variables.
Fixes [YOCTO #9671]

Applied some review comments to these three variables.
Edits to be sure to qualify that the variables apply only to recipes
that inherit the distrodata class.

(From yocto-docs rev: bb9a9866733e92d5c79bdc6b3b3c930468c0d616)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
4cf38836ac ref-manual: Added descriptions for three UPSTREAM* variables.
Fixes [YOCTO #9671]

Put in descriptions for the following variables:

 * UPSTREAM_CHECK_GITTAGREGEX_pn
 * UPSTREAM_CHECK_REGEX_pn
 * UPSTREAM_CHECK_URI_pn
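
A hedged sketch of typical usage at the recipe level (hypothetical
recipe and patterns; the _pn-<recipe> form is for setting the same
variables from configuration files):

  UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/m/mypkg/"
  UPSTREAM_CHECK_REGEX = "mypkg_(?P<pver>\d+(\.\d+)+)"
  UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>\d+(\.\d+)+)"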

(From yocto-docs rev: 5eb6d241dbe862cc84f697b419c11223e1b5d191)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
ffb615a50b dev-manual: Updated Package Feed Creation sections
Fixes [YOCTO #1882]

Edited the sections in the "Working with Packages" section
beginning with the "Build Considerations" section with text
received from Daniela Placencia.

(From yocto-docs rev: 1ebd6a805699fc962a43a8f744194ab6e65b733c)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
1931dfc1cb ref-manual: Updated the INHIBIT_PACKAGE_STRIP variable
Fixes [YOCTO #9553]

Added detail to this variable description.

(From yocto-docs rev: 2be60cd54cc8ca55a25c3ec9f9af0231fe09d5a7)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
2be23abe85 ref-manual: Added BlueZ version 5 feature to distro feature section.
(From yocto-docs rev: 2529a8d31cb28f4290b657c4871700fef2320c07)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
9891a867ef sdk-manual: Fixed three broken links to sections within manual.
(From yocto-docs rev: 25eb664cf20c08014f2ad6cf61ffe07b76fb23df)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
e1f49c6068 sdk-manual: Updated configure.ac file in helloworld example.
The file was named 'configure.in' and was slightly different than
what it needed to be in order to work.  The file needs to be named
'configure.ac' and have slightly different contents.  Fixed both.

(From yocto-docs rev: ea2aa991e8072ac8d371afdcbb72daf34065d5fb)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:17 +01:00
Scott Rifenbark
046fd3cb83 documentation: Set up for the 2.1.1 YP Release
* poky.ent variables updated.
* Manual revision tables entries added "June 2016" date
* mega-manual.sed string "2.1" globally changed to "2.1.1"

(From yocto-docs rev: 59ffde8e39df96cbc41dc294e8623b94b217a0a4)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:16 +01:00
Scott Rifenbark
913b4e5910 kernel-dev: Fix the locations of .config and source directory
The locations of the kernel .config file and source directory
moved a couple releases ago.  Updated the documentation
accordingly.

Also added a note explaining how to check the expansion of
variables, which serves a couple of purposes:

 * For curious readers, shows them how to understand where
   these variables come from and how they are used.

 * For suspicious readers, shows them how they can verify that
   the variables in the documentation are actually correct.

Author: Tom Zanussi <tom.zanussi@linux.intel.com>
(From yocto-docs rev: db6287fd0bf7dd47635f42b1b10814b9b6db438f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:16 +01:00
Scott Rifenbark
046f1e6b4c profile-manual: Added cross-reference links to INHIBIT_PACKAGE_STRIP
I added some reference links to this variable in the ref-manual
glossary.

(From yocto-docs rev: 8ed1505874b4815a61e123f5c650a4901d2b59a8)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:16 +01:00
Scott Rifenbark
e5353a9158 ref-manual: Fixed *[doc] string for INHIBIT_PACKAGE_DEBUG_SPLIT
The string was a copy paste error.  It was using the string
for INHIBIT_PACKAGE_STRIP.

(From yocto-docs rev: 20a649c21272240b67314cc20fd026c43839ee2d)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:42:16 +01:00
Richard Purdie
f5da2a5913 build-appliance-image: Update to krogoth head revision
(From OE-Core rev: 1dc9ce406497d6e996a40afc53293d9a576c8314)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 15:48:24 +01:00
Maxin B. John
e244da150b libproxy: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org
instead, and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream
release checking continues to work.
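
The resulting recipe pattern, sketched with a hypothetical snapshot
timestamp (checksums omitted):

  SRC_URI = "http://snapshot.debian.org/archive/debian/20160101T000000Z/pool/main/libp/libproxy/libproxy_${PV}.orig.tar.gz"
  UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/libp/libproxy/"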

[YOCTO #10040]

(From OE-Core rev: 85ab50390edd3c0de632386da71ccc9256d4d4c5)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:40 +01:00
Maxin B. John
c4d6b277f2 libaio: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org
instead, and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream
release checking continues to work.

[YOCTO #10040]

(From OE-Core rev: d0955fbabaa6324ebf2100d443c11ab41b74b429)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:40 +01:00
Maxin B. John
e89b6b84d8 blktool: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org
instead, and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream
release checking continues to work.

[YOCTO #10040]

(From OE-Core rev: 43181af5a85b073c9b09a8a0ba912d51815a83de)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:40 +01:00
Maxin B. John
325515a685 linuxdoc-tools: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian. So, move all of SRC_URI
to the .bb so it can use snapshot.debian.org instead, and set
UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking continues
to work.

[YOCTO #10040]

(From OE-Core rev: ad033ed04f3894ad723d11a0bfd29b94a468add7)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
c2d93dcf42 docbook-xml-dtd4: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian. So, move all of SRC_URI
to the .bb so it can use snapshot.debian.org instead, and set
UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking continues
to work.

[YOCTO #10040]

(From OE-Core rev: 464fceaa5afe5cca67efe46d5cd5e13e40a8f7f1)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
116ee14fe0 netbase: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian. So, move all of SRC_URI
to the .bb so it can use snapshot.debian.org instead, and set
UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking continues
to work.

[YOCTO #10040]

(From OE-Core rev: 55e7a0e1c829de1294f8b96a01de64334d5b464c)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
b869068751 serf: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian. So, move all of SRC_URI
to the .bb so it can use snapshot.debian.org instead, and set
UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking continues
to work.

[YOCTO #10040]

(From OE-Core rev: 114ac0213c0f80ac4192bd7ab7b1a5c974a965e8)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
4eeaae772f mailx: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org
instead, and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream
release checking continues to work.

[YOCTO #10040]

(From OE-Core rev: 57deb12858aee9437390c2ac5784dd1c273ab39c)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
4dc76844a6 ossp-uuid: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian. So, move all of SRC_URI
to the .bb so it can use snapshot.debian.org instead, and set
UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking continues
to work.

[YOCTO #10040]

(From OE-Core rev: a98c257ce6136712668a791a6dff2338c50b4138)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Maxin B. John
d85ccb3daf apmd: use snapshot.debian.org for SRC_URI
Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will
only contain releases that are currently in Debian.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org
instead, and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream
release checking continues to work.

v2:
        use ${BPN} instead of ${PN} in SRC_URI for multilib builds
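
The reason, sketched: with multilib, PN becomes e.g. lib32-apmd while
BPN stays apmd, so the mirror path must be built from BPN (hypothetical
snapshot timestamp):

  SRC_URI = "http://snapshot.debian.org/archive/debian/20160101T000000Z/pool/main/a/${BPN}/${BPN}_${PV}.orig.tar.gz"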

[YOCTO #10040]

(From OE-Core rev: a03f087fd49288539bb6a63a52bf907f1bcdc4d6)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Tim Orling
4a98ef84b9 at: use snapshot.debian.org for SRC_URI
[YOCTO #10005] Krogoth-next checkuri failures

Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will only
contain releases that are currently in Debian, so currently doesn't contain
3.1.18 as unstable has moved on to 3.1.20.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org instead,
and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking
continues to work.

(From OE-Core rev: e3ff0aa75c3169b19ef90f50b63914f4036790d0)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Tim Orling
b6a3c9c298 dpkg: use snapshot.debian.org for SRC_URI
[YOCTO #10005] Krogoth-next checkuri failures

From Ross Burton (commit 1d39e4c145)

Using ${DEBIAN_MIRROR} for SRC_URI doesn't work very well as that will only
contain releases that are currently in Debian, so currently doesn't contain
1.18.4 as unstable has moved on to 1.18.9.

So, move all of SRC_URI to the .bb so it can use snapshot.debian.org instead,
and set UPSTREAM_CHECK_URI to ${DEBIAN_MIRROR} so upstream release checking
continues to work.

(From OE-Core rev: 85378ebe19730cc42587bf1e5e5e15b3deda638b)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Armin Kuster
fbe10c86e8 foomatic-filters: Security fixes CVE-2015-8327
CVE-2015-8327 cups-filters: foomatic-rip did not consider the back tick as an illegal shell escape character

(From OE-Core rev: 512825509cfc1fb9d78fa3722bb4f077904e957a)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:39 +01:00
Armin Kuster
c96f149679 foomatic-filters: Security fix CVE-2015-8560
CVE-2015-8560 cups-filters: foomatic-rip did not consider semicolon as illegal shell escape character

(From OE-Core rev: c8b0b69a28bb4a6d88a6c2ecf2b89144b21ffe6d)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-01 12:14:38 +01:00
Ross Burton
f73006031e oeqa/recipetool: update recipe test to pass SHA
(From OE-Core rev: 71dd4c05c41e8b363dc1ecac1f5105d316ee82dc)

(From OE-Core rev: c0375bd9e3a25c605f07381ae7cbe83febb5ce56)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 14:10:51 +01:00
Khem Raj
c3f90184c7 grub: Fix build with gcc-6
Backport patch which silences the following error:

'../../grub-2.00/grub-core/'`gfxmenu/model.c
../../grub-2.00/grub-core/gettext/gettext.c:37:36: error: storage size of 'main_context' isn't known
 static struct grub_gettext_context main_context, secondary_context;
                                    ^~~~~~~~~~~~
make[3]: *** [gettext/gettext_module-gettext.o] Error 1

(From OE-Core rev: 4efac9861ab59d696bdc81ea59497febfa2d0dc8)

(From OE-Core rev: c1ad29a96dc38da87290b024c8b5a502baeea5e9)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:44:17 +01:00
Ross Burton
ec0de3b71e oeqa/devtool: update recipe test as libmatchbox changed
(From OE-Core rev: b36712eef14c20007e0adb01cc7d4bce9e7926bb)

(From OE-Core rev: dbf7a797b22bef8ccfcc4df7b76736619bf13418)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:41:43 +01:00
Tim Orling
54086de158 nss: fix build for gcc-6
[YOCTO #9897] (Fedora-24 host is gcc-6)

(From OE-Core rev: 1882abd101d211e5ab3f1a0a77580395778e6301)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Armin Kuster
6dca3c67c3 tzcode-native: update to 2016f
The 2016f changes are in the zone data, not the code.

(From OE-Core rev: 29377fa91a5f679909d582317c2b53d1f2e5da88)

(From OE-Core rev: b4c4ba05f52904cceb792a6d4863ffab1f471359)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Armin Kuster
88023056fe tzdata: update to 2016f
Changes affecting future time stamps

    The Egyptian government changed its mind on short notice, and
    Africa/Cairo will not introduce DST starting 2016-07-07 after all.
    (Thanks to Mina Samuel.)

    Asia/Novosibirsk switches from +06 to +07 on 2016-07-24 at 02:00.
    (Thanks to Stepan Golosunov.)

  Changes to past and future time stamps

    Asia/Novokuznetsk and Asia/Novosibirsk now use numeric time zone
    abbreviations instead of invented ones.

  Changes affecting past time stamps

    Europe/Minsk's 1992-03-29 spring-forward transition was at 02:00 not 00:00.
    (Thanks to Stepan Golosunov.)

(From OE-Core rev: dc80bf9b092a76f758d01474619cd9db46a1070d)

(From OE-Core rev: 777d93c0b0368828e1c1fe59f7d5908ba980698d)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Tim Orling
37b3e44b9d gcc-5.3: fix build for gcc-6
Backport upstream patch.
It had been applied to 4.9, but not 5.3.

[YOCTO #9897] (Fedora-24)

(From OE-Core rev: 41756d499f1c5ed57bcb7e3e8ab768ec020086f6)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Daniel McGregor
cb7787af8a openjade-native: work around bug exposed by GCC 6
Simply turn off the optimization that is causing this breakage. I had
originally used -fno-lifetime-dse, but -fno-tree-dse works at least
going back as far as gcc 4.8.

This isn't a real fix, but it allows openjade to work enough to complete
a build.

(From OE-Core rev: 39e7dd90878325158c143dfec8234d563b841b86)

(From OE-Core rev: 901c179680629f49ac3c05c336b2fe752a87ea2b)

Signed-off-by: Daniel McGregor <daniel.mcgregor@vecima.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Dan McGregor
3ee0f6afc8 binutils: disable werror on native build
It's disabled on cross builds, and it's needed for gcc 6.

(From OE-Core rev: ce1b37e29dc89b67dc698e856007b59faa16c4df)

(From OE-Core rev: 640235620061c1b7155e1504702e5c26b5ecfdaa)

Signed-off-by: Dan McGregor <dan.mcgregor@usask.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Khem Raj
b3acdca9b6 glib-2.0: Ignore useless warning found with gcc-6
../../glib-2.46.2/glib/gdate.c:2497:7: error: format not a string literal, format string not checked [-Werror=format-nonliteral]
       tmplen = strftime (tmpbuf, tmpbufsize, locale_format, &tm);
       ^~~~~~

| ../../../../../../../../workspace/sources/glib-2.0/glib/tests/gdatetime.c: In function 'test_strftime':
| ../../../../../../../../workspace/sources/glib-2.0/glib/tests/gdatetime.c:1338:3: error: '%c' yields only last 2 digits of year in some locales [-Werror=format-y2k]
|    "a%a A%A b%b B%B c%c C%C d%d e%e F%F g%g G%G h%h H%H I%I j%j m%m M%M " \

Additionally fix the problem seen where write() return code is ignored

(From OE-Core rev: 3fdecff96dd7516605ec9248b2a39de4db81306f)

(From OE-Core rev: 76271b5710e8d02d4ca0559cbf72c149f9beb4e2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:38 +01:00
Khem Raj
7204ed57ed rpm: Fix build with gcc6
(From OE-Core rev: e9c86d85460f45011bd978e1495a2b802d733020)

(From OE-Core rev: d60a2ce4b5169d8e903981f492304dadd2a205fb)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Tim Orling
7db217e7ac elfutils: Fix build for gcc-6
Backport patch from upstream.

[YOCTO #9897] (Fedora-24)

(From OE-Core rev: 619eff37f41dacbc35ea480559ce393cc3f2c17b)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Khem Raj
553ffcb941 elfutils-0.148: Fix build with gcc6
(From OE-Core rev: c2668171f5d76bfea085ecf2fa7dfe1e42df1e63)

(From OE-Core rev: ea6afc2eeee7cc647c7ca64da97fa5321edc6766)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Tim Orling
ba4d4372b1 pkgconfig: Fix build with gcc-6
Our patch in master was on top of the 0.29.1 update,
commit b83a808fcb.

Apply it to the krogoth stable 0.29 version instead.

[YOCTO #9897] (Fedora-24)

(From OE-Core rev: 5b50a9948bbd4e5c1a56183defe4c150a85dcb15)

Signed-off-by: Tim Orling <timothy.t.orling@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Ross Burton
4f2716218f binutils: backport fix for TLSDESC relocations with no TLS segment on aarch64
As exposed by WebKit on aarch64 hosts, which causes binutils to throw an
internal error.

[ YOCTO #9509 ]

(From OE-Core rev: a6c75ed55b7ef809bd7d4e69365ea5fb0d88d02e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Khem Raj
08e0391d9c musl: Update to v1.1.15 release
here is shortlog of changes
http://git.musl-libc.org/cgit/musl/commit/?id=faf69b9a73d09fafcbe4fd3007b8d8724293d8e1

(From OE-Core rev: 3164db2a2f16eedfed3bcd2413321e7473900637)

(From OE-Core rev: 6e7a9fd67a982f81a72a928709f145d61186e320)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Dominic Sacré
5ddf1463d3 dropbear: Remove incorrect SFTPSERVER_PATH from CFLAGS
Openssh now installs the sftp-server binary as /usr/libexec/sftp-server,
whereas the dropbear recipe assumes a different path.
Dropbear uses the correct path by default, so it's no longer necessary
to override SFTPSERVER_PATH via CFLAGS.

This fixes SFTP access to systems using dropbear as the SSH server.

(From OE-Core rev: df798bca330583103b2301678236cc841cc861dd)

(From OE-Core rev: e9bbced4da1f13951abdd298590a3577f377866e)

Signed-off-by: Dominic Sacré <dominic.sacre@gmx.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Khem Raj
5f4369eb2a musl: Upgrade to tip
Rich Felker (4):
      fix undefined pointer arithmetic in CMSG_NXTHDR macro
      fix a64l undefined behavior on ILP32 archs, wrong results on LP64 archs
      avoid padding gaps in struct sockaddr_storage
      remove comments on copyright status from UTF-8 implementation files

Szabolcs Nagy (8):
      fix the use of uninitialized value in regcomp
      add preadv2 and pwritev2 syscall numbers for linux v4.6
      add SO_CNX_ADVICE to sys/socket.h, new in linux v4.6
      add ETH_P_MACSEC netinet/if_ether.h, new in linux v4.6
      update siginfo struct for linux v4.6
      add CLONE_NEWCGROUP clone flag, new in linux v4.6
      add new tcp_info fields from linux v4.6
      update sys/socket.h to linux v4.6

(From OE-Core rev: d81bb8c6362d59a124bbe9b3a60cb259733b120d)

(From OE-Core rev: fc73e73e9a879909edf2f129790d26d4e883b3c2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Khem Raj
20aae4e5ef musl: Update to latest tip
Bobby Bingham (3):
      x32: remove arch-specific syscall remapping
      x32: eliminate __X32_SYSCALL_BIT constant
      deduplicate __NR_* and SYS_* syscall number definitions

(From OE-Core rev: 6993e88cccbfe2f990e4ea9bd7cc186d59e5a84b)

(From OE-Core rev: 11b36c1a2672c0a6240a934144828c2529a6e0a3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Khem Raj
d4280db033 musl: Upgrade to tip of tree
The COPYRIGHT file was changed to clarify the MIT license,
which resulted in a checksum change; see

http://git.musl-libc.org/cgit/musl/commit/?id=f0a61399330bae42beeb27d6ecd05570b3382a60

below are changes in upgrade

Andrew Kelley (1):
      fix incorrect protocol name and number for egp

Bobby Bingham (1):
      add powerpc64 port

LeMay, Michael (1):
      fix redundant processing of --build flag in configure script

Petr Vaněk (1):
      remove dead store in res_msend

Rich Felker (10):
      fix undefined pointer comparison in stdio-internal __toread
      fix regression disabling use of pause instruction for x86 a_spin
      fix read past end of haystack buffer for short needles in memmem
      add support for mips and mips64 r6 isa
      add mips n32 port (ILP32 ABI for mips64)
      fix thread structure/dtv-pointer corruption on powerpc
      fix FILE buffer underflow in ungetwc
      update COPYRIGHT file to clarify that permissions apply for all files
      follow standard configure behavior for cross compile prefix
      fix spurious trailing whitespace in powerpc & powerpc64 bits/errno.h

(From OE-Core rev: 21d8d60b2bfb205dcb5d304119d4dbd627db7163)

(From OE-Core rev: d867cc39394c3b0bdd2286b90344f222138ae36e)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Armin Kuster
d79f5a98f7 glibc: Security fix for CVE-2016-4429
Master will have a fix after the pending update.

(From OE-Core rev: c14f2ba7ae1ddef3dc7bb837454e51469bead948)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:37 +01:00
Armin Kuster
22198f07af glibc: Security fix for CVE-2016-3706
Master not affected.

(From OE-Core rev: 6c5aaa3150e6cf74219e5bcf4819365ae3628102)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:36 +01:00
tom.zanussi@linux.intel.com
03d9d8e7d3 systemtap: Add missing memory flag to fix stap module compilation
The 4.4 kernel removed some memory flag definitions, which causes
module compilation errors, rendering systemtap essentially useless in
krogoth.

The problem is fixed in systemtap 3.0 and therefore in master, but as
mentioned in Systemtap BZ1285348, the fix for older versions is this
patch.

(From OE-Core rev: 7c27f257286dfca745a956bae15c1f4ed505343f)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:36 +01:00
Armin Kuster
229e3e4e5f ghostscript: update SRC_URI
ERROR: Function failed: Fetcher failure for URL: 'http://downloads.ghostscript.com/public/ghostscript-9.18.tar.gz'. URL http://downloads.ghostscript.com/public/ghostscript-9.18.tar.gz doesn't work

(From OE-Core rev: 7aa7d0c54f9d8f1b27a0cf855da685459bdbcc93)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:33:36 +01:00
Belen Barros Pena
6ea7b46ef6 toaster: toasterconf.json Remove master release
With the move to python3 completed in master, Toaster 2.1 no longer
builds the master branch. This patch removes the master release from the
Yocto Project toaster configuration file so that the master branch is
not listed as an option to select when creating a project.

(From meta-yocto rev: 25a91ee63bad4771d0c867c04d13b6fcdf6a5417)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-22 14:12:34 +01:00
Richard Purdie
98c57bb512 build-appliance-image: Update to krogoth head revision
(From OE-Core rev: dd330056ace289c8a9c5d77b6bb6e860b9f0913e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:28:04 +01:00
Richard Purdie
ae849a348c poky.conf: Bump version for 2.1.1 krogoth release
(From meta-yocto rev: 19c53669baf39ef793b3fb8f0e01345e450f1f78)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:27:58 +01:00
Richard Purdie
95b2e086cb build-appliance-image: Update to krogoth head revision
(From OE-Core rev: 6d3751ff5d1ee0b34b24a1572b89a2c46f1b8d19)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:27:58 +01:00
Ed Bartosh
eea30774b4 wic: rawcopy: make source filenames unique
The rawcopy plugin copies source files to the build folder before using
them to assemble the result image. After assembling the image wic
renames the source files to <image>.p<partition number>. If the same
source file is used in multiple partitions wic breaks trying to rename
a file that no longer exists.

Added a <line number> suffix to the files when copying them to the
build dir. This should make filenames unique even if the same source
file is used for multiple partitions.

[YOCTO #9826]

(From OE-Core rev: 6f7afd6f76c40e1b050e40bc4965cb5000df7088)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
Anuj Mittal
250212eee6 gcc: make sure header path is set correctly
We're setting the native header paths in do_configure_prepend,
and don't need to set them again here.

This results in gcc-target not being able to locate the headers
and not being able to detect glibc version, which in turn
results in SSP support not getting detected even though it's available
in libc.

(From OE-Core rev: 463909e876a66555d5df628591bace8cea0a6b0c)

Signed-off-by: Anuj Mittal <anujx.mittal@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 85630aa894278e7818c867179dc19ca2fbd994fc)
Signed-off-by: Anuj Mittal <anujx.mittal@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
André Draszik
e6917603e2 feature-arm-vfp.inc: fix overzealous ARMPKGSFX_FPU modification
Since commit 972b4fc (feature-arm-neon.inc: restore vfpv3-d16 support)
we're replacing _all_ dashes (-) in ARMPKGSFX_FPU, which is causing
problems for all legitimate uses of the dash as TUNE_PKGARCH doesn't
have the right value anymore:

E.g. on raspberrypi2:

ERROR:  OE-core's config sanity checker detected a potential misconfiguration.
    Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
    Following is the list of potential problems / advisories:

    Error, the PACKAGE_ARCHS variable (all any noarch armv5hf-vfp armv5thf-vfp
armv5ehf-vfp armv5tehf-vfp armv6hf-vfp armv6thf-vfp armv7ahf-vfp
armv7at2hf-vfp armv7vehf-vfp armv7vet2hf-vfp armv7vehf-neon armv7vet2hf-neon
armv7vehf-neon-vfpv4 armv7vet2hf-neon-vfpv4 cortexa7hf-vfp cortexa7hf-neon
cortexa7hf-neon-vfpv4 cortexa7t2hf-vfp cortexa7t2hf-neon
cortexa7t2hf-neon-vfpv4 raspberrypi3) for DEFAULTTUNE (cortexa7thf-neon-vfpv4)
does not contain TUNE_PKGARCH (cortexa7hf-neonvfpv4).

Fix this by being more explicit about what we're modifying.

Reported-by: Khem Raj <raj.khem@gmail.com>
(From OE-Core rev: 2c4ae03834be3f4449487a2c7c40829d94051d99)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
Khem Raj
320dacf891 gcc-5: Fix hang with mmusl option on cmdline
When using the -m32 -mmusl options in this order, gcc hangs while
parsing the options in decode_cmdline_options_to_array(). The reason is
that we had broken the chain when adding the mmusl option: the order of
specifying the libc was not kept, so gcc was unable to construct the
array correctly and ended up in a parse hang.

We fix the options to specify the order properly.
We fix the options to specify the order properly.

(From OE-Core rev: b6f1b26db8a1da2aae9557eeb8aae5beb7af1a06)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
Ed Bartosh
11ca5f99a7 devshell.bbclass: fix double unbuffering
stdout is already unbuffered in bitbake code. Attempting to
unbuffer it again in devshell.bbclass causes this crash when
running devpyshell:
  File "scripts/oepydevshell-internal.py", line 29, in <module>
      pty = open(sys.argv[1], "w+b", 0)
  IOError: [Errno 13] Permission denied: '/dev/pts/6'

(From OE-Core rev: 90a12e07ee22df900fa740c6c2f1efe41e93b9f4)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
Armin Kuster
cd0afe151c Revert "openssl: prevent ABI break from earlier krogoth releases"
This patch should not have been back ported.

This reverts commit 18b0a78f439ce26ea475537cc20ebbc1d091920c.

(From OE-Core rev: 08f85da10b3a7fc6165f163fd0f23784a2c9c8e4)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-06 17:25:48 +01:00
Zhenhua Luo
f7b994b752 image.bbclass: do exact match for rootfs type
Do an exact match for the rootfs type, instead of a pattern match, to
avoid unexpected build errors due to building redundant rootfs types.

E.g. when building ext2.gz.u-boot, both .gz.u-boot and .u-boot are matched
and the following build error appears, even though .u-boot is not needed:
| mkimage: Can't open .../core-image-minimal-<machine>-<yyyymmddhhmmss>.rootfs.ext2.gz: No such file or directory

(From OE-Core rev: 46bc438374de74af76d288520c6252c9b7840767)

(From OE-Core rev: 1d0ea655e266e7c5acc9c282fa91406fbe9bfb85)

Signed-off-by: Zhenhua Luo <zhenhua.luo@nxp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:37:19 +01:00
Leonardo Sandoval
720ae18403 scripts/lib/bsp/kernel.py: force patching when the machine branch is re-used
When a branch is re-used, the kernel tools turn off any patch pushing
unless 'mark patching' is explicitly set.

[YOCTO #9120]

(From meta-yocto rev: 427f5473722e15e288cbce251a9ce18989c23548)

(From meta-yocto rev: e98cce42b8454545874a68979af70ca1813a7ad2)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Leonardo Sandoval
ae832446d9 bitbake: fetch2: Safer check for BB_ORIGENV datastore
The BB_ORIGENV value in the datastore can be NoneType, thus raising an
AttributeError exception when calling the getVar method. To avoid this,
a check is done before accessing it.

[YOCTO #9567]

(Bitbake rev: f368f5ae64a1681873f3d81f3cb8fb38650367b0)

(Bitbake rev: 25859009b710cb35ac8f9ee9eb3a7305f9e13402)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Peter Kjellerstedt
ba29029581 useradd-staticids.bbclass: Allow missing UIDs/GIDs to generate warnings
Previously when USERADD_ERROR_DYNAMIC was set to "1", an exception was
raised if no numeric UID/GID could be determined for a user/group. Now
it is possible to set it to either "error", which results in the old
behavior, or "warn" in which case a warning is issued instead.

For backwards compatibility reasons, it is still possible to set
USERADD_ERROR_DYNAMIC to "1" and get an exception in case of failure.

(From OE-Core rev: 58c82f79efee8e68fa63b96a32f54660afb15769)

(From OE-Core rev: 5a37852e4ab3a7438cab372b288663535ecdfee1)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3037e0df9b)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Peter Kjellerstedt
a4d74c100d useradd-staticids.bbclass: Restore failure on missing UIDs/GIDs
A regression was introduced with commit 3149319a whereby setting
USERADD_ERROR_DYNAMIC no longer resulted in an error for users and
groups that were missing numeric UIDs and GIDs but were not mentioned
at all in any passwd or groups file.

[YOCTO #9777]

(From OE-Core rev: adc0f830a695c417b4d282fa580c5231e1f0afbe)

(From OE-Core rev: b64316f34a45dcf7a31e0486e51799fcd6b0ed2d)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c99750d17e)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Peter Kjellerstedt
b8e749ddd6 documentation.conf: Add information about USERADD variables
(From OE-Core rev: 6064ef3f3f9e03b2bafb5e55f02fac9b17901615)

(From OE-Core rev: 1526c8ebfcada2cb3a8b6122a3cbb51a22c94d2a)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4ed711a2b3)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Ed Bartosh
6240f7092e pseudo: remove rpath from libpseudo.so
Setting rpath causes a clash of the host and sdk libc and makes
pseudo crash with a relocation error: libpthread.so.0:
    symbol __libc_vfork, version GLIBC_PRIVATE not defined
    in file libc.so.6 with link time reference

Removing rpath fixes this as it makes pseudo use only the host
pthread and libc.

[YOCTO #9761]

(From OE-Core rev: be5c943e82a21d3ef2dfaaa5b41b6a2814f2fb19)

(From OE-Core rev: d2d2b63abeb38635dcb83d94583d3b5770150bfa)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 8f7f8f7cfa)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Matthew Campbell
b2a6a89a29 openssh: fix init script restart with read-only-rootfs
restart in the init script uses the check_config() function which doesn't have
the $SSHD_OPTS passed through. This causes it to check the wrong config (and
fail) when read-only-rootfs is enabled.

(From OE-Core rev: cb6f78072deb8b8c22baf5c31c3bd19d7e0af236)

(From OE-Core rev: ad5a14484b780ea5d48d35dac0de8062c53077de)

Signed-off-by: Matthew Campbell <mcampbell@izotope.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 772ba8d865)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Andre McCurdy
39d2072ae9 binutils: configure with --enable-deterministic-archives
Causes ar to use zero for timestamps and uids/gids by default when
creating static archives, which helps make builds deterministic.

  https://bugzilla.redhat.com/show_bug.cgi?id=1124342
  https://wiki.debian.org/ReproducibleBuilds/TimestampsInStaticLibraries
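
A hedged sketch of passing the option in recipe syntax (illustrative;
the actual recipe may wire it in differently):

  EXTRA_OECONF += "--enable-deterministic-archives"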

(From OE-Core rev: df0d525c02780b5a0bd7a177a249c55f41797476)

(From OE-Core rev: 6564ab0ff6be2a2a697798ee99106e1bc3208a94)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Ross Burton
8bdaefd8bd bitbake.conf: don't set CCACHE_DIR to $HOME by default
If the user hasn't inherited ccache.bbclass then CCACHE_DIR is set to $HOME.

This was to work around a bug (#2554) for some users where if ccache < 3.1.10
(released 2014-10-19) was installed and enabled by default (i.e. /usr/bin/gcc is
a symlink to ccache) and ccache.bbclass wasn't being inherited then autogen
would fail to build because it sets $HOME to /dev/null during the build and
ccache (prior to 3.1.10) would always create CCACHE_DIR even if it was disabled.
As the default is $HOME/.ccache, this results in ccache attempting to create
/dev/null/.ccache.

However there was a mistake in this assignment of CCACHE_DIR - it should be
$HOME/.ccache - as ccache will do cleanup inside CCACHE_DIR which will result in
it deleting $HOME/tmp.  In the future when we can assume that everyone has
ccache 3.1.10 onwards this assignment can be deleted, but as of now we still
support OpenSUSE 13.2 which ships with 3.1.9 so fix the assignment to be
$HOME/.ccache.
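
Roughly, the corrected default assignment (a sketch; the real
bitbake.conf line may differ in detail):

  export CCACHE_DIR ?= "${@os.getenv('HOME')}/.ccache"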

[ YOCTO #9798 ]

(From OE-Core rev: 15eaf9cb1fa19036fe4442905876dae94070b04d)

(From OE-Core rev: 8bcfed5a5d8c53a481028ef6e55008670cfbe8dc)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Otavio Salvador
74f34dc4d2 initramfs-framework: base: Ensures /run/lock is available
Depending on the module we use, the /run/lock directory may be
required. This creates it as part of the initial setup and thus makes it
available for every sub-module.

(From OE-Core rev: 1cf288a0514ae9365fe55a0ff90b5abe35042cef)

(From OE-Core rev: ac26089702a634654530114bbbf151bc0fde5711)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Otavio Salvador
e91e5324d0 initramfs-framework: mdev: Add a runtime dependency on busybox-mdev
The mdev support relies on the mdev support inside busybox, which thus
builds the busybox-mdev package. Adding the runtime dependency ensures
its installation fails if mdev support is disabled.

(From OE-Core rev: 48dbdc0317db6836cfeba083844910c15d5beb77)

(From OE-Core rev: a32a7743003fb4b90b0dca7440235eceee787c00)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:02 +01:00
Armin Kuster
99f695ac99 tzdata: update to 2016e
Changes affecting future time stamps

Africa/Cairo observes DST in 2016 from July 7 to the end of October.
Guess October 27 and 24:00 transitions. (Thanks to Steffen Thorsen.)
For future years, guess April's last Thursday to October's last
Thursday except for Ramadan.

Changes affecting past time stamps

Locations while uninhabited now use '-00', not 'zzz', as a
placeholder time zone abbreviation.  This is inspired by Internet
RFC 3339 and is more consistent with numeric time zone
abbreviations already used elsewhere.  The change affects several
arctic and antarctic locations, e.g., America/Cambridge_Bay before
1920 and Antarctica/Troll before 2005.

Asia/Baku's 1992-09-27 transition from +04 (DST) to +04 (non-DST) was
at 03:00, not 23:00 the previous day.  (Thanks to Michael Deckers.)

(From OE-Core rev: ddcf128e76ed0678ce42416531f4ecb309c57439)

(From OE-Core rev: 202e0784f258281f04bda814c83239d4e5543291)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Armin Kuster
51e4dabf70 tzcode: update to 2016e
V2: typo in title (jet lagged)
Changes to code

zic now outputs a dummy transition at time 2**31 - 1 in zones
whose POSIX-style TZ strings contain a '<'.  This mostly works
around Qt bug 53071 <https://bugreports.qt.io/browse/QTBUG-53071>.
(Thanks to Zhanibek Adilbekov for reporting the Qt bug.)

Changes affecting documentation and commentary

tz-link.htm says why governments should give plenty of notice for
time zone or DST changes, and refers to Matt Johnson's blog post.
tz-link.htm mentions Tzdata for Elixir.  (Thanks to Matt Johnson.)

(From OE-Core rev: 5f3340e5c966f4233e0cd4ec468b20a1fd5a7346)

(From OE-Core rev: cf79454942bec75dbd830d09d35a70d5cd155772)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Hongxu Jia
ff7c814661 libxml2: upgrade to 2.9.4
- Drop configure.ac-fix-cross-compiling-warning.patch,
  libxml2 2.9.4 has fixed it

(From OE-Core rev: 323c7cec65603476994dde196f4c2c151d0e0d31)

Updated stable for these reasons: it includes the following security fixes:
CVE-2016-1762
CVE-2016-3705
CVE-2016-1834
CVE-2016-4483
CVE-2016-1840
CVE-2016-1838
CVE-2016-1839
CVE-2016-1836
CVE-2016-4449
CVE-2016-1837
CVE-2016-1835
CVE-2016-1833
CVE-2016-3627

Plus many bug fixes; see http://xmlsoft.org/news.html for details.

(From OE-Core rev: 1576cb4ac24340cda504ee9807b465f8428138f0)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Khem Raj
ac84a1ce15 gcc-runtime, libgcc: Symlink c++ header and startup files in target_triplet for SDK use
We build SDKs such that gcc-cross-canadian is built for only one
target *-*-linux and then use -muclibc or -mmusl to let it compile
code for other libc variants. This works fine when libc = glibc, but
it does not work for C++ programs when libc != glibc, since the C++
headers are installed under ${includedir}/c++/${BINV}/${TARGET_SYS}.
That is fine when gcc-runtime and gcc-cross-canadian use the same
--target options, as the g++ include dir then resolves to the right
triplet, but it fails with musl/uclibc: gcc looks for the glibc-based
triplet while gcc-runtime installs the headers under the musl/uclibc
triplet.

This patch symlinks the musl/uclibc triplet to the glibc triplet when
libc != glibc, which fixes SDKs for musl/uclibc.
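
One plausible shape of the symlinking in do_install, with the musl triplet
name invented purely for illustration:

    # make the musl triplet resolve to the glibc triplet's c++ headers
    cd ${D}${includedir}/c++/${BINV}
    ln -s ${TARGET_SYS} arm-linux-musleabi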

(From OE-Core rev: 610c48be139b046860a234baccf13d1e6fafe2b4)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Khem Raj
2bf1e70e3d musl: Create symlinks for stub libraries
Some libraries, e.g. libm.so, need to be created so that SDKs built on
distros which disable static libraries still have the stubs; since the
default linker script requires -lm, this helps in compiling applications
with the SDK.

There are .a equivalents for these libraries, but they do not land in
SDKs when static libs are disabled distro-wide.
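
A sketch of the idea: with musl, libm's functionality lives in libc.so, so
a stub link can satisfy the default -lm (paths illustrative):

    ln -sf libc.so ${D}${libdir}/libm.so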

(From OE-Core rev: 0f4dfb6ce041e8ba4bc67de956512cfb6ac225c9)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Bruce Ashfield
ec2d08375b linux-yocto/4.1: fix musb compilation error
We had a partial musb change merged into the 4.1 tree, which resulted in:

  | kernel-source/drivers/usb/musb/musb_dsps.c:
  In function 'dsps_create_musb_pdev':
  | kernel-source/drivers/usb/musb/musb_dsps.c:750:8:
  error: 'struct musb_hdrc_config' has no member named 'maximum_speed'
  |   config->maximum_speed = usb_get_maximum_speed(&parent->dev);
  |         ^~

By backporting commit:

  9b7537642cb6a [usb: musb: set the controller speed based on the config setting]

We get our missing structure field, and we can once again build musb.

[YOCTO: #9680]

(From OE-Core rev: b746223787a0195c3a4d16523003c62ec0ac8451)

(From OE-Core rev: b6b0a40e5c9ffe1a2150b36cb2a447a1361d474b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>

fixup as meta hash was not updated to latest
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Bruce Ashfield
217448b911 linux-yocto/4.4: integrate v4.4.11
Updating to the korg stable release.

(From OE-Core rev: bb4ead9b7b1400c37a72d148d9775bdf4210ec37)

(From OE-Core rev: f24cb853eeab542b8f779ee050349051f9cc5541)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Bruce Ashfield
662840a9ac linux-yocto/4.4: beaglebone: build in the usb controller drivers
Merging the following meta data change:

[
    In the current code, we build the drivers for the USB controller as
    modules. But for some image types, such as minimal or full-cmdline,
    these driver modules are not installed into the rootfs by default.
    This makes using USB quite inconvenient, so make them all built-in.

    Reported-and-suggested-by: hiims <h@101.org.il>
    Signed-off-by: Kevin Hao <kexin.hao@windriver.com>
    Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
]

(From OE-Core rev: cf5004a37f120043815bb9ee4ae065c1877f404a)

(From OE-Core rev: f26b38c21d63e63b0f3a5f63cc8c164d94d46ece)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Bruce Ashfield
1e583b1eb8 linux-yocto/4.1: v4.1.24 and gcc6 powerpc fixes
Bumping to the v4.1.24 -stable release, and backporting a ppc
gcc6 fix from the 4.4 kernel.

(From OE-Core rev: aee5a879032df0c1642f17408b70a33d06df972a)

(From OE-Core rev: cf5ec8c55f2eb8b632c1106c612f7f1500c97e6d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Martin Jansa
682cb00f04 linux-yocto-rt, core-image-rt*: Explicitly skip when PREFERRED_PROVIDER_virtual/kernel isn't set to linux-yocto-rt
* just like linux-yocto-dev is doing
* fixes following errors in world builds:
  ERROR: Nothing PROVIDES 'linux-yocto-rt' (but /home/jenkins/oe/world/shr-core/openembedded-core/meta/recipes-rt/images/core-image-rt-sdk.bb DEPENDS on or otherwise requires it)
  ERROR: linux-yocto-rt was skipped: PREFERRED_PROVIDER_virtual/kernel set to linux-yocto, not linux-yocto-rt
  ERROR: linux-yocto-rt was skipped: PREFERRED_PROVIDER_virtual/kernel set to linux-yocto, not linux-yocto-rt
  ERROR: Required build target 'core-image-rt-sdk' has no buildable providers.
  Missing or unbuildable dependency chain was: ['core-image-rt-sdk', 'linux-yocto-rt']

  ERROR: Nothing PROVIDES 'linux-yocto-rt' (but /home/jenkins/oe/world/shr-core/openembedded-core/meta/recipes-rt/images/core-image-rt.bb DEPENDS on or otherwise requires it)
  ERROR: linux-yocto-rt was skipped: PREFERRED_PROVIDER_virtual/kernel set to linux-yocto, not linux-yocto-rt
  ERROR: linux-yocto-rt was skipped: PREFERRED_PROVIDER_virtual/kernel set to linux-yocto, not linux-yocto-rt
  ERROR: Required build target 'core-image-rt' has no buildable providers.
  Missing or unbuildable dependency chain was: ['core-image-rt', 'linux-yocto-rt']
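
A sketch of the skip logic in the linux-yocto-dev style, assuming the
anonymous-python idiom of this era:

    python () {
        if d.getVar("PREFERRED_PROVIDER_virtual/kernel", True) != "linux-yocto-rt":
            raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
    }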

(From OE-Core rev: 048c901fc32a1fd9a6c4b6f68f618101dfdf94ad)

(From OE-Core rev: 6ff8b98b6f176503671c651bacecef90dd9f4d89)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Bruce Ashfield
726a2bf3bd linux-yocto/4.4: gcc6 build fixes (powerpc and mips)
Khem provided fixes to fix gcc6 build issues, these are safe for
all gcc versions, so we integrate them directly.

(From OE-Core rev: f1c75b93a4e11425e595c5ce043fbb0276a41931)

(From OE-Core rev: 4c3a91e1b82a4aedb1884c3413d2f18e530c61be)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:01 +01:00
Robert Yang
5658888f11 eudev: remove eudev-hwdb from RRECOMMENDS_eudev
The eudev-hwdb package needs 12M after install; this made small images
like core-image-minimal much bigger than before, and may also hurt
devices which use udev, so remove it from RRECOMMENDS_eudev by default.

(From OE-Core rev: dfb2dc45943d64f3d6da84c0d7b99ac5254fc738)

(From OE-Core rev: 99e2a4351804e77d7f5863aa2d99e2c0ed3839e9)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
tom.zanussi@linux.intel.com
6a268a6cf1 linux-yocto-rt/4.4: Update KBRANCH
standard/preempt-rt was replaced by standard/preempt-rt/base in
linux-yocto-4.4.git, so KBRANCH needs to be updated accordingly.
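
The corresponding assignment is a one-liner along these lines:

    KBRANCH = "standard/preempt-rt/base"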

(From OE-Core rev: 2c11968fff42d46726028177a59662b2012bb46a)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Renato Caldas
b2ab8f4321 perl: reorder tar arguments in do_install_ptest()
On some distributions tar requires the FILE argument to come last, and
the existing order caused the subsequent --exclude options to be dropped.
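
An illustration of the reordering (the exclude pattern is invented for the
example):

    # before: some tars silently ignore --exclude given after FILE
    tar -cf - ./t --exclude='*.orig'
    # after: excludes first, FILE last
    tar -cf - --exclude='*.orig' ./t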

Fixes [YOCTO #9673].

(From OE-Core rev: aef455c655f610eada6899d9f59caf0bdda11795)

Signed-off-by: Renato Caldas <rm.santos.caldas@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Davis, Michael
76a4804f2b syslinux.bbclass: Added configurable SYSLINUX_ALLOWOPTIONS variable
The new variable allows for images to be created without an
editable boot line in syslinux.  Default behavior remains unchanged.
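
A hypothetical usage, assuming the variable feeds syslinux's ALLOWOPTIONS
directive:

    # 0 disallows editing the boot command line at the syslinux prompt
    SYSLINUX_ALLOWOPTIONS = "0"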

Backport from master (935578c139a260c18e437419be82d7fd7e8be81a)

(From OE-Core rev: 9bbacbe563c1c7dd4761b30da1c10e247aa49cd8)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Anders Darander
71ee363046 lib/oe/rootfs: Fix DEBUGFS generation, without openssl
Commit 20ea6d274bb0a9a5addb111f32793de49b907865 fixed debugfs generation
for opkg-based images that include openssl.

However, that broke the generation of opkg-based images that lack
openssl: the error is a Python stack trace showing that shutil.copytree
tries to copy a non-existent directory.

This relates to [YOCTO #9040].

(From OE-Core rev: 6289046a86a64cb2f9d314d1fd99d9ef5ee4f991)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f6b0b260ce18a30d04edfb0afb7942b9f9a5480b)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Ross Burton
28066b3a21 zip: update SRC_URI
The infozip FTP server appears to have been taken down, so change the SRC_URI to
point at their SourceForge project.

Also as the SRC_URI can't be generated from the version and there is no other
user of the .inc, merge the .bb and .inc together.

[ YOCTO #9655 ]

(From OE-Core rev: 5cb1e0ec46e4fde1c15aeb6812eaaece4840ac1c)

removed fix-security-format.patch changes

(From OE-Core rev: 24c0b9913eb4431703c882d8f2cb18a08c18204d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Aníbal Limón
4b57c55182 classes/base: get_lic_checksum_file_list: improve validation of URLs
When a URL with a scheme other than the supported file:// is specified,
the function returns an empty path, causing an exception without
notifying the user that the URL is malformed.

[YOCTO #9211]

(From OE-Core rev: 6c28251d3d187b60ceb534055dbd8b4fffd06429)

(From OE-Core rev: 81c1327c33e4e9cfcb0f264c19f71e9144c852d6)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Dengke Du
073c0ba55e coreutils: fix for native and nativesdk
The do_install_append is used for moving/renaming binaries for
ALTERNATIVE handling, but it breaks the native build: for example there
is no ln, only ln.coreutils, which makes coreutils-native unusable.
This patch fixes the problem.

(From OE-Core rev: 1b5b831d1bbb92760ce01b38347cf0bcaa1bb59f)

(From OE-Core rev: 14bcfa16e33c09ce9898bd58872e4fdf56ed8325)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Robert Yang
727504235c gnu-efi: set COMPATIBLE_HOST_armv4 to null
It doesn't build with armv4:
lib1funcs.S: Assembler messages:
Assembler messages:
gnu-efi-3.0.3/lib/arm/lib1funcs.S:140: Error: selected processor does not support `clz r3,r1' in ARM mode
gnu-efi-3.0.3/lib/arm/div64.S:95: Error: selected processor does not support `clz r2,r4' in ARM mode
gnu-efi-3.0.3/lib/arm/lib1funcs.S:140: Error: selected processor does not support `clz r2,r0' in ARM mode
[snip]
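
The fix is a one-line recipe change of this shape:

    COMPATIBLE_HOST_armv4 = 'null'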

(From OE-Core rev: a3e958fae0cd6349a03fececcaa3d880c73b9298)

(From OE-Core rev: 7ae869c4aa9153e53a8e033f87d68668c4bb0c69)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Robert Yang
6c56ed7b02 cogl-1.0: set COMPATIBLE_HOST_armv4 to null
It doesn't build with armv4:
cogl-texture-deprecated.c  -fPIC -DPIC -o deprecated/.libs/cogl-texture-deprecated.o
{standard input}: Assembler messages:
{standard input}:831: Error: selected processor does not support `clz r3,r0' in ARM mode
make[4]: *** [deprecated/cogl-fixed.lo] Error 1
[snip]

(From OE-Core rev: 858dc0b21e2b65b90c115411c678ae8ca80134e5)

(From OE-Core rev: 7c011a9e0f3a07bb12813022c548b24254886e6d)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Ross Burton
306cd99e98 openssh: change URI to http:
The OpenBSD FTP server isn't accepting connections from wget, which
breaks fetches. Luckily they also run an HTTP server on the same host.

[ YOCTO #9628 ]

(From OE-Core rev: 8b10f0af3c434145b460fd5d7a9f394dc1284260)

(From OE-Core rev: 511f3ba2b66aa61cf8212f95df762b8de1eaa92d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:36:00 +01:00
Ross Burton
f2b952fe99 unzip: update SRC_URI
The infozip FTP server appears to have been taken down, so change the SRC_URI to
point at their SourceForge project.

[ YOCTO #9655 ]

(From OE-Core rev: 879b2c5ee2ae39d6c1ae9d44ab243d8c7b7874b4)

(From OE-Core rev: 945919ce01385b2ef48dd17b472e806a30b21d13)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
George McCollister
b17a009b65 wic: fix path parsing, use last occurrence
If the path contains 'scripts' more than once, the first occurrence will
incorrectly be used. Use rfind instead of find to pick the last occurrence.
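
The difference in a nutshell (plain Python, path invented):

    path = "/home/user/scripts/poky/scripts/lib/wic"
    path[:path.find('scripts')]    # '/home/user/' - wrong, first match
    path[:path.rfind('scripts')]   # '/home/user/scripts/poky/' - last match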

(From OE-Core rev: f30c486c17060d2f21618612804a692512ad6a57)

(From OE-Core rev: d34a0fd910babe233d89ad9c1e9d61dcec1c4b63)

Signed-off-by: George McCollister <george.mccollister@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Raymond Tan
426eb13fa9 mkefidisk.sh: mount images as read-only
Mount the hddimg and rootfs.img as read-only when creating the bootable
image on the medium. Otherwise the md5 checksum values of the hddimg
will be altered, which might cause issues for users who reuse the
hddimg.
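
Illustratively, the loop mounts gain a read-only flag (names invented):

    mount -o ro,loop core-image.hddimg /tmp/hddimg-mnt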

(From OE-Core rev: a1391c8a603f0ed972ee0bcc8c74999f5f43be43)

(From OE-Core rev: 97c447ba39a6c81f13f02b7abd43138c538285e6)

Signed-off-by: Raymond Tan <raymond.tan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Klauer, Daniel
4007b48cf0 python-smartpm: Fix channel command --remove-all option (again)
SmartPM's --remove-all option was unusable, because the fix from
commit 03266e89a6 was lost in commit 5fc580fc44. Thus, add a new
patch to fix --remove-all.

It seems like the previous fix was lost by mistake:
Upstream merged the *old* version of the patch (smartpm 406541f569),
and when SmartPM in oe-core was upgraded to the new upstream release,
the --remove-all fix from the *new* patch was not carried over.

(From OE-Core rev: ba2adda60dd34b6a8feba413e3207dd8e4580294)

(From OE-Core rev: df76bd9ff6289d2b561d8f79a39bc90ba3c6a488)

Signed-off-by: Daniel Klauer <daniel.klauer@gin.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Armin Kuster
e506807f54 python-numpy: fix build failure with python-matplotlib
Fix for aarch64, mips64 and ppc64

numpy/core/include/numpy/npy_common.h:149:10: error:
|          #error Unsupported size for type off_t

(From OE-Core rev: dff54b8affad38ffcd5f80308f4c3a265dc2dbae)

(From OE-Core rev: 3b57e9afedc39e473763ac26b7ee014788a915dc)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Joshua Lock
5a70719762 openssl: prevent ABI break from earlier krogoth releases
The backported upgrade to 1.0.2h included an updated GNU LD
version-script which results in an ABI change. In order to try and
respect ABI for existing binaries built against fido this commit
partially reverts the version-script to maintain the existing ABI
and instead only add the new symbols required by 1.0.2h.

Suggested-by: Martin Jansa <martin.jansa@gmail.com>
(From OE-Core rev: 480db6be99f9a53d8657b31b846f0079ee1a124f)

(From OE-Core rev: 4d1cb0646eafca44fae5321f48c6114a32fbf164)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Ross Burton
7901b12541 bitbake.conf: add default for IMAGE_FSTYPES_DEBUGFS
If debug filesystem generation is enabled but this isn't assigned then the
generation code throws exceptions.
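
A sketch of such a default, with tar.gz assumed as the fallback type:

    IMAGE_FSTYPES_DEBUGFS ?= "tar.gz"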

(From OE-Core rev: 0a1b02fab0e2604cd55ea6f45d764a864599213a)

(From OE-Core rev: c622eaff01383b2f18d243d10b2d2dd4393ef6f1)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Peter Kjellerstedt
3674b577f6 metadata_scm.bbclass: Do not assume ${COREBASE} is a Git repo
The functions base_detect_revision() and base_detect_branch() try to
extract SCM meta information from the path returned by
base_get_scmbasepath(), which currently returns ${COREBASE}. However,
making the assumption that ${COREBASE} contains SCM meta information
can be false. It is true for Poky, but not necessarily other
environments. A better option is to look for the SCM meta information
based on the meta layer.

Since this works as expected for Git but not SVN, the call to
base_get_metadata_svn_revision() from base_detect_revision() was also
removed. This is not expected to affect anyone (partly based on the
comment in base_get_metadata_svn_revision()).

(From OE-Core rev: 53fd0a4a37023642a770a9fbf3cd5511d3c82af7)

(From OE-Core rev: 59b7a5b64c19afc342ca72ccee99cdcfb818e341)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Dengke Du
9f0741a613 lttng-tools: filter random filename of ptest output
When running the ptest of lttng-tools, it produces many random filenames
when the tests pass; this output confuses QA analysis, so filter the
ptest output when tests pass and add up the passed and failed tests.

NOTE:The tests invoked the run.sh twice, so it output like this:
...
FAIL:...
unit_tests statistics
total pass: 133 tests passed!
total fail: 5 tests failed!
...
FAIL:...
fast_regression statistics
total pass: 1904 tests passed!
total fail: 202 tests failed!

(From OE-Core rev: 29a8c45be2862be02afe2ebbc5c026a42f351990)

(From OE-Core rev: 2c936f186f3b44e92fb8bd01b0bceb87feec63a4)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
André Draszik
31351ce146 feature-arm-neon.inc: restore vfpv3-d16 support
Commit 6661718 (feature-arm-{neon,vfp}.inc: refactor and fix issues)
effectively changed the gcc -mfpu= option from -mfpu=vfpv3-d16 to
-mfpu=vfpv3d16, which gcc doesn't understand.

Restore the original value.

After doing that, we also need to adjust ARMPKGSFX_FPU, which should
contain the same value without the dash ('-'), as it is used that way
throughout.

(From OE-Core rev: 972b4fc459258572eeaad8af91e48ee9f0acade7)

(From OE-Core rev: c95b89f65dc7b13c4973e3fd6cdaed331d161219)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Randy Witt
1c89eae86b populate_sdk_ext: Change lockedsigs task mismatch to a warning
Users are highly likely to get signatures that don't match in an
extensible SDK. This doesn't necessarily happen with oe-core, so we can
set the mismatch to an error during testing if we like.

However, for the case where users are creating their own SDKs, we don't
need an error halting their progress; locked-sigs will still function as
it should.

(From OE-Core rev: 6ba86d847275126bf435f144e7d029d10e7ab17d)

(From OE-Core rev: 0822edc390eea27f68bc257531d84959e3cc1efe)

Signed-off-by: Randy Witt <randy.e.witt@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Leonardo Sandoval
95441efe6e populate_sdk_ext.bbclass : Show logfile in case the SDK EXT installation failed
To avoid lots of output in the SDK EXT installation phase, the system
redirects it to a logfile ($target_sdk_dir/preparing_build_system.log),
but in case of error the contents should be shown so that debugging is
faster.

[YOCTO #9576]

(From OE-Core rev: 227d2cbf9e0b8c35fa6644e3d72e0699db9607fa)

(From OE-Core rev: 502442403e3cdab11e34d355610b07ae4a6db7bb)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:59 +01:00
Ian Reinhart Geiser
5465517c85 classes/image_vm: allow different filesystems to be used for VM images.
This allows filesystems like btrfs to be used instead of just ext4.
The default value of ext4 is kept so there is no functional change
unless VM_ROOTFS_TYPE is set in the inheriting recipe.
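
For example, an inheriting recipe could select btrfs like this:

    VM_ROOTFS_TYPE = "btrfs"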

(From OE-Core rev: df0b217f3df2c36a32e5c4afaec36a28bfc77bbb)

(From OE-Core rev: 6ae2c1a2301eceb52523e48f06b5748b3e59451d)

Signed-off-by: Ian Reinhart Geiser <geiseri@geekcentral.pub>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Ross Burton
d38658b784 image_types: fix image/compression dependency collection
As compressions can be chained (i.e. cpio.bz2.md5sum) we need to walk the fstype
list to collect the dependencies from each step.

(From OE-Core rev: 05c59ed987cdddc00e9e217032a69197e40a8448)

(From OE-Core rev: b1869e336b937f9c0f41eac781f2a75897e93d30)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Ismo Puustinen
abae515395 libpcre: Fix CVE-2016-3191
Fix workspace overflow for (*ACCEPT) with deeply nested parentheses.

The patch is from libpcre version control at
http://vcs.pcre.org/pcre?view=revision&revision=1631 with the ChangeLog
part removed. Original author is Philip Hazel.

(From OE-Core rev: 386534f968f4da376ba7778b5d436bad4ce8355b)

(From OE-Core rev: 4d3dad3329c8a9c9bb5254bb329031e9d2dafd7b)

Signed-off-by: Ismo Puustinen <ismo.puustinen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Armin Kuster
55b9718bd8 librsvg: Security fixes via update to 2.40.15
CVE-2016-4347 librsvg2: DoS parsing SVGs with circular definitions in certain rsvg_cairo_*() functions

CVE-2016-4348 librsvg2: DoS parsing SVGs with circular definitions _rsvg_css_normalize_font_size() function

(From OE-Core rev: 76f061c91fd00370e33bfc3d45ff98d8b3f63c41)

(From OE-Core rev: c5a78cd4e3c0673d358305ea1ad663cf087b44b1)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Yuqing Zhu
97753ee3ad alsa-lib: Fix incorrect appl pointer when mmap_commit() returns error.
The appl pointer needs to be updated only when snd_pcm_mmap_commit()
returns successfully; otherwise it should not be updated. This fixes
avail_update() returning an incorrect result when an error is returned.

(From OE-Core rev: fcd7e439497174256a5c467532aad402f4d19ca1)

(From OE-Core rev: 4ddef11c6a0f0a2d2ff0d4e556c0bbb3d5999f83)

Signed-off-by: Yuqing Zhu <carol.zhu@nxp.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
André Draszik
fa602e6cc9 gdb: fix QA warning (uClibc)
WARNING: QA Issue: gdb rdepends on libiconv, but it isn't a build dependency? [build-deps]

We already have virtual/libiconv which is set appropriately
in all environments, so let's use it to fix the issue.

(From OE-Core rev: 255699aeb9275d609e7c03ead69ac902456674dd)

(From OE-Core rev: 6510f9252fdfe21b9fe629a3d9a6a5f525316053)

Signed-off-by: André Draszik <adraszik@tycoint.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Diego Rondini
2806bed309 base-files: add some safety checks in profile
Add some safety checks when sourcing files in /etc/profile.d/, in particular:
- source only *.sh files, not every file. This is the practice in use in both
  Fedora and Debian/Ubuntu (see
  https://help.ubuntu.com/community/EnvironmentVariables#A.2Fetc.2Fprofile.d.2F.2A.sh);
- check the input is actually a file and is readable, as in the sketch below.
  This check is especially important if profile.d is empty, as "*.sh" will
  only get expanded if profile.d is not empty. Previously, if profile.d was
  present but empty, "/etc/profile.d/*" was sourced, causing errors on login
  and breaking things such as X startup.
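
A minimal sketch of such a guarded loop, assuming POSIX sh:

    for i in /etc/profile.d/*.sh; do
        # with an empty profile.d the glob stays literal and fails -f
        if [ -f "$i" ] && [ -r "$i" ]; then
            . "$i"
        fi
    done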

(From OE-Core rev: 8961bc4b71723477a3b4a837a1d9c25c1b860b9e)

(From OE-Core rev: fde37b91284953cedc50bc32d22aac65a65afde1)

Signed-off-by: Diego Rondini <diego.ml@zoho.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Ross Burton
896b2a0a33 bitbake.conf: change APACHE_MIRROR to point at archive.apache.org
The official download servers www.[country].apache.org only host the latest
release, so the URL is only valid when the recipe is fully up to date.

In the general case this isn't a problem as our mirror list includes
archive.apache.org, but the upstream URI checking (the checkuri task) fails as
that explicitly doesn't use the mirrors.

(From OE-Core rev: ddd003805782e1fcfc3d59d9b0a1277cf3d1fae9)

(From OE-Core rev: bc657f9c310a247047d52253f7b62061be5d8404)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Ross Burton
e576607c2d mesa: add PACKAGECONFIG for gbm
gbm is an optional library and some environments (for example, mesa-gl where
there are separate drivers that provide libgbm) may not want to build it.

(From OE-Core rev: bb5265a31587e4a4d4df4d42f343054d6c224e24)

(From OE-Core rev: 40e03c0d5051f0208778792f9b113c35c5a1ef64)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Robert Yang
4fe89e0acd libxsettings-client: fix COPYING file
Fixed:
* Move the code that copies the COPYING file from do_configure_append()
  to do_patch[postfuncs], since the license-checksum check moved from
  do_package_qa to do_populate_lic.
* Add xsettings-client.c and xsettings-common.c to LIC_FILES_CHKSUM.
* Update comments.

(From OE-Core rev: 89332686ac6c756672cbf67c2df70c5150efa998)

(From OE-Core rev: 6eb173a6f4e67a9426dd19307a65dde6f3bf8974)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Ross Burton
ff5b6b7eb1 dbus-test: install executables not libtool wrapper scripts
All of the binaries are linked with libtool now, so install the binaries and not
the wrapper scripts.

Also remove dbus-1.init from SRC_URI as dbus-test doesn't use it.

[ YOCTO #9528 ]

(From OE-Core rev: a4b5076b2c06cafff0ce764955d0aa7c334c7a8e)

(From OE-Core rev: b4db000519da45cc4e911a43dedaa5bd20a8624e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:58 +01:00
Awais Belal
83731d04b3 mesa-demos: remove demos using obsolete screen surface
The mesa surface EGL_MESA_screen_surface was obsoleted
and then dropped from mesa some time ago. Drop demos
depending on this.

(From OE-Core rev: 061c53c86e483c65f5cd350d6587dbae53c4ee75)

(From OE-Core rev: 31e121789f6fd98751122a48446c435f49b4c7c6)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Alexander Kanavin
5cf3b562c3 arch-powerpc64.inc: disable the use of qemu usermode on ppc64
It simply does not work at all:
https://lists.yoctoproject.org/pipermail/yocto/2016-April/029698.html

(From OE-Core rev: d044743cdc415745e68f3e26a3a7e2c94caecd93)

(From OE-Core rev: c507e83c33a35b4ba28557da74dd2f6441657b6f)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Ross Burton
613a74fcc8 eudev: add PACKAGECONFIG for hwdb
Some users may not want the hwdb at all, so add a PACKAGECONFIG option to
disable building it entirely.
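
A plausible shape for the option (the configure flags are assumed):

    PACKAGECONFIG[hwdb] = "--enable-hwdb,--disable-hwdb"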

(From OE-Core rev: 7006d3084bd4d6aab2ca64d052df3a014abaf813)

(From OE-Core rev: 87606439e7eadcdcbea510b3facf8754ed7d0220)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Maxin B. John
4dd2808856 libxml2: fix dependencies and QA Issues
Fix the following QA warnings:

WARNING: libxml2-2.9.3-r0 do_package_qa: QA Issue: libxml2 rdepends on
libiconv, but it isn't a build dependency, missing libiconv in DEPENDS
or PACKAGECONFIG? [build-deps]

WARNING: libxml2-2.9.3-r0 do_package_qa: QA Issue: libxml2-python
rdepends on libiconv, but it isn't a build dependency, missing libiconv
in DEPENDS or PACKAGECONFIG? [build-deps]

(From OE-Core rev: 3d97a40cffb780cda4d4acf6d87371427912228b)

(From OE-Core rev: 66ee51986db68e1bcd7d8e2b5e91dcdbcb0e6d84)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Maxin B. John
f43a689b3c bash: fix dependencies and QA Issue
Fix the following QA warning:

WARNING: bash-4.3.30-r0 do_package_qa: QA Issue: bash rdepends on libiconv,
but it isn't a build dependency, missing libiconv in DEPENDS
or PACKAGECONFIG? [build-deps]

(From OE-Core rev: 5c6b10c7c37d9ca216d56c1667dce29998a2f525)

(From OE-Core rev: 0c398456a7421433ba2d04f23653e33dd089de3f)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Maxin B. John
9f968375b4 popt: fix dependencies and QA Issue
Fix the following QA warning:

WARNING: popt-1.16-r3 do_package_qa: QA Issue: popt rdepends on
libiconv, but it isn't a build dependency, missing libiconv in DEPENDS
or PACKAGECONFIG? [build-deps]

(From OE-Core rev: 08aeb5a9e0067e2e9e0fba8614409102e5a0a00e)

(From OE-Core rev: df05fa063c6d0b41156c8af9b46cf894176500e6)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Ross Burton
b5fcf4ec1b oeqa/selftest/buildoptions: remove buildhistory signature test
This test is a subset of the new sstate_noop_samesigs test, and less helpful
when it breaks, so remove it.

(From OE-Core rev: 7157261014e1dcbe9a57e7504dbb0ab2a53aa4d8)

(From OE-Core rev: da040dab3b1e15821b1a57a3c4c8c352b15e7fea)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Ross Burton
386c7c6ff5 mesa-gl: add missing MESA_CRYPTO to PACKAGECONFIG
Otherwise the build can fail or there is a floating dependency on whatever SSL
library Mesa can find.

(From OE-Core rev: 8ce5d90044bd371d132312e85197ee262855ad29)

(From OE-Core rev: 341182d9e897def5fa956f5a413b4034bf18b68a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Tristan Van Berkom
c29636c369 cross-localedef-native_2.22.bb: Use autotools configure
Use the autotools default configure commands and just tell autotools
where to run configure from.

This fixes the build when running on an aarch64 host, which the prebuilt
configure scripts with glibc 2.22 do not recognize.

(From OE-Core rev: 33d4c758a5d71435437dde74556d32404d91342f)

(From OE-Core rev: ae347b60406990c79fe1b89d23b175a48439274a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Robert Yang
99b89011f1 insane.bbclass: remove workdir from package_qa_check_license()
The parameter workdir is not used in package_qa_check_license()

(From OE-Core rev: 9da177c149c657dc337a1f0d241175f1496fa07d)

(From OE-Core rev: 64d69eba87394f0fbf564da7c37dc6b1d2e7ec1b)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Ross Burton
786f7eec78 qemu: remove explicit but redundant native build dependencies
qemu-native was optionally depending on libxext-native if the DISTRO_FEATURES
included x11.  This dependency was required back when we didn't build
libsdl-native and causes an undesirable relationship between DISTRO_FEATURES and
qemu-native.

As the dependency isn't required anymore, remove it.

(From OE-Core rev: f58f364b1ae97805abc5f9eb7b300617f59826b2)

(From OE-Core rev: 9558dfc37abfbdd3e66107b346b78ac31074c4dd)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:57 +01:00
Ross Burton
3d6178ed1d webkitgtk: remove gnome-common dependency
webkitgtk ported to CMake long ago, so by definition can't use gnome-common's
autoconf macros anymore.

(From OE-Core rev: 90890eca6cbefb42f1e63231c93dfe4de4dab014)

(From OE-Core rev: 06cab51af62b0924d86f994f485004ed8c77e86a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
73761bd8ac gnome-desktop3: remove redundant gnome-common dependency
The gnomebase class already depends on gnome-common-native, so there's no need
to depend on it again.

(From OE-Core rev: da33549ea6cb2082ef908480825ffcac07814c16)

(From OE-Core rev: 4a885ec3e7bcb54aadc02c690bd808ba9b6b7983)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
1ad7757cbb python-pygobject: remove redundant gnome-common dependency
The gnomebase class already depends on gnome-common-native, so there's no need
to depend on it again.

(From OE-Core rev: 13621e8ac158e1eb65a04054899f7cdec796d38f)

(From OE-Core rev: ab7ab03a3fc732c0962cbfe916dcdc82108ad10f)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Paul Eggleton
720c926271 recipetool: create: fix falling back to declared license for npm packages
Fix two problems falling back to the "license" field from package.json
when no license file is present:
1) The function that was supposed to return the license field value was
   always explicitly returning None, and this was never noticed (because
   the test cases never exercised the fallback as they provided license
   files for each module).
2) Fix the main package not falling back because it had a default of an
   empty list, which evaluates to '' instead of 'Unknown'.

(From OE-Core rev: 59381a9450949ce6b4b03adb717e950b999830f3)

(From OE-Core rev: 2d96460f2dcac4263f43ebcb7556722ce55c9918)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Paul Eggleton
e89f4e531f recipetool: create: fix picking up false npm package directories
It is possible for a Node.js module to have node_modules subdirectories
that contain no package.json file (e.g. iotivity-node has such a
directory). It appears these should simply be ignored, or else with the
way the current code works we will get errors later.

(From OE-Core rev: 8c522f1f536270e195c8c73f5c72801495e7b33b)

(From OE-Core rev: 8da9185a1c68c8274269841d0867d7d4abf426f0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Denys Dmytriyenko
7925e8942f arch-armv7ve: inherit armv7a tunes file
armv7a is a subset of armv7ve:
https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html

   -march=armv7ve is the armv7-a architecture with virtualization extensions.

By having armv7ve inherit the armv7a tune file, it's possible for e.g.
Cortex-A15 machines to include tune-cortexa15.inc and have a full range
of optimizations, while setting DEFAULTTUNE to "armv7a" to produce
binaries compatible with Cortex-A8 machines, etc.

(From OE-Core rev: 5bf5e68e540dc4e034288702094d306ebd19fef9)

(From OE-Core rev: c2267c885848b438b52b45dd45c8a217cdb661a6)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
3a1b40b685 autotools: add default for CACHED_CONFIGUREVARS
Ensure that this variable has a default value so that we don't get debug
messages that the variable couldn't be expanded.

(From OE-Core rev: 27fd1bb7969b558864463450e1837c4400a03f9c)

(From OE-Core rev: 06c3f9f53f30667854dc431344b94d46a3b23f09)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Joshua Lock
5b6e5ab134 packagegroup-core-lsb: fix whitespace in meta-qt* warnings
Without these extra space characters the messages are ill-formatted, i.e:
'The meta-qt3 layer should be added, this layer provides Qt 3.xlibraries.
Its intended use is for passing LSB tests as Qt3 isa requirement for LSB.'

Changes to:
'The meta-qt3 layer should be added, this layer provides Qt 3.x libraries.
Its intended use is for passing LSB tests as Qt3 is a requirement for LSB.'

(From OE-Core rev: f0220cd4e686c3d28d222d434f2dbd7f0b41188c)

(From OE-Core rev: e772d7cc924fafdd7a678710bca3e260bd622a01)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Stephano Cetola
613fee3563 sysvinit-inittab: restrict labels to 4 chars
The current recipe creates inittab labels based on the device node names
of TTYs used as consoles. If those names exceed the 4-character label
limit of inittab, it will break. This change takes the last 4 characters
of the device names in order to avoid any errors.
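
In POSIX shell, keeping only the last four characters can be done with
nested parameter expansion (device name invented):

    tty=ttymxc0
    label="${tty#${tty%????}}"   # yields "mxc0"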

[ YOCTO #9529 ]

(From OE-Core rev: 30acc7a6b9e6d1c42ba1df6e5a362d10b43cb4eb)

(From OE-Core rev: 3bfa60541216e1d1bd228b6d8c516d4a5736ae09)

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Stephano Cetola
7b10fd2026 toolchain-scripts: replace source built-in call
Some shells (e.g. dash) do not support the source built-in. This
replaces it with the dot operator.
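
That is, the portable spelling uses the dot:

    # 'source ./env-setup' fails on dash; this works in any POSIX shell
    . ./env-setup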

[ YOCTO #9535 ]

(From OE-Core rev: eef010bd91933d0c4b917d12e5716aa7e16b7307)

(From OE-Core rev: 7c44f2c0f6404cdb46c542f0be455a2cf4078dcb)

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
5581f5a0f6 oeqa/sstatetests: remove temporary DL_DIRs in noop_samesigs
(From OE-Core rev: a98acf4840fc4888c0f4b8998a0a3983c639ecc2)

(From OE-Core rev: 7d6460c0aff047ea2c666956d3a7a1b24d419b23)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
9825580d79 oeqa/sstatetests: add http_proxy to no-op hash test
Add two values for http_proxy to verify that changing it doesn't change any
unexpected tasks.

As this causes uninative to fail to fetch, ensure that uninative is always
disabled.

(From OE-Core rev: 7d8ffd22303a5b89cb129e804c124a2d1dedf9ab)

(From OE-Core rev: f65003cbb3cd606d0d520a0ae5ddd21363f9a1e0)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:56 +01:00
Ross Burton
0df81c8485 bluez5: enable out-of-tree builds
A patch is needed to fix a race in out-of-tree builds, and the install-ptest
logic can be simplified.

(From OE-Core rev: 471fdafb340e90a4ab2e31854f69d5204e9380bf)

(From OE-Core rev: 75fad33f495ca8a548b98054e4731940d1491d94)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ross Burton
2aae36ea43 mx: move to autotools instead of autotools-brokensep
Now that MX inherits gtk-doc we can also remove fix-build-dir.patch.

(From OE-Core rev: e8d4e80e5cc98e2e0470c85f3c08574d30d466c1)

(From OE-Core rev: d08070e6b68941a1eba495b1b8386ef8228b04f4)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ross Burton
7691471070 mx-1.0: inherit gtk-doc
(From OE-Core rev: fdc24995bcd6c4206eadbc7398ce7528b1a70773)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ross Burton
d8d2fa887d meta: add comments to explain autotools-brokensep use
(From OE-Core rev: f0ffea3e6047402f194d408a038272a8cadcde4a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ioan-Adrian Ratiu
226d54067e wic: isoimage-isohybrid: fix splash file paths
os.path.join discards the cr_workdir var contents if the second argument
is an absolute path.
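
The behavior in plain Python (values invented):

    import os.path
    os.path.join("/work/iso", "/splash.jpg")  # '/splash.jpg' - workdir lost
    os.path.join("/work/iso", "splash.jpg")   # '/work/iso/splash.jpg'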

(From OE-Core rev: dba099d77dcc66b239523a55f3ed26784f9a662a)

(From OE-Core rev: ef37c7d8e4abf896aa791ee01e52a74f24aadb99)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Alexander D. Kanevskiy
200f6c8c35 image.bbclass: don't execute compression commands multiple times
When chained conversion methods are used via COMPRESS_CMD_*, there is a
chance that some steps are executed multiple times.

[YOCTO #9482]

(From OE-Core rev: 94f61c2682e5cfd819ac84535650c3e0a654415a)

(From OE-Core rev: b12bd3c8ae266b393aedea93587acfbbc5e631cb)

Signed-off-by: Alexander D. Kanevskiy <kad@kad.name>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Robert Yang
7f9a10b861 grub_git: set COMPATIBLE_HOST_armv7a to null
It doesn't work with armv7a:
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| make[3]: *** [reboot.mod] Error 1
| make[3]: *** Waiting for unfinished jobs....
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| make[3]: *** [halt.mod] Error 1
| make[3]: *** [cat.mod] Error 1
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| build-grub-module-verifier: error: unsupported relocation 0x2b.
| make[3]: *** [disk.mod] Error 1
| make[3]: *** [gptsync.mod] Error 1
| make[3]: *** [eval.mod] Error 1
| build-grub-module-verifier: error:build-grub-module-verifier: error:  unsupported relocation 0x2bunsupported relocation 0x2b.

(From OE-Core rev: a96c3ea4fb4676a13b24b8e8d1164b31080c4f56)

(From OE-Core rev: 91c9f3d41213858847a947ab957aa4b00e6e4245)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ioan-Adrian Ratiu
b347363768 wic: isoimage-isohybrid: add grubefi configfile support
The latest wic kickstart refactoring introduced a bootloader option
"--configfile" which lets .wks files specify a custom grub.cfg for use
while booting. This is very useful for creating things like boot menus.

This change lets isoimage-isohybrid use --configfile; if this option is
not specified in a .wks file, it generates a default cfg as before.
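
A hypothetical .wks bootloader line using the option:

    bootloader --configfile="custom-grub.cfg"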

(From OE-Core rev: bf673a769514b13558ad9c785ae4da3a5adfd1e0)

(From OE-Core rev: e5e35d055b0a72f2204f9530a1ad39bc51e79217)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ross Burton
70918fbf25 busybox: don't build ar
As it's not 1978 anymore, nobody is using ar for anything apart from static
archives.  If people are using static archives, then binutils provides a far
more capable ar.

(From OE-Core rev: 664a7743a7a2dd6a5c3676c06c35b692af2907e2)

(From OE-Core rev: cd88d65d4c1f8f56ddccb95f7e75cd9f5229602c)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Dengke Du
d089de0d16 bash: fixed ptest run-builtins failed
1. Redirect the stderr output of the command exec with the -l option
   to /dev/null.
   When we run exec with -l in builtins.tests, it is a login shell, so
   it reads /etc/profile, which executes /usr/bin/resize, added by
   commit:
	 cc6360f4c4d97e0000f9d3545f381224ee99ce7d
   /usr/bin/resize is produced by busybox, whose source file resize.c
   contains:
	fprintf(stderr, ESC"7" ESC"[r" ESC"[999;999H" ESC"[6n");
   In the end it writes an escape sequence to stderr, so comparing the
   test output file /tmp/xx with builtins.right fails. Redirecting the
   stderr output to /dev/null solves the problem.

2. Ensure the target system contains the locale "en_US.UTF-8".
   run-builtins executes the source5.sub file, which contains:
	LC_ALL=en_US.UTF-8
   To provide the locale, add the following to local.conf:
	IMAGE_LINGUAS_append = " en-us"

(From OE-Core rev: 5f82f3df7d4a7d6ae9a1ea3b6bc1d620a3d6c329)

(From OE-Core rev: 7107b7832a98c311f5020513229b091be6c4f769)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Ruslan Bilovol
b3dc50e620 libunwind: backport aarch64_be support
Backport 2 patches from v1.2-rc1 tag of libunwind git repo.
These patches add aarch64_be support to this package.

(From OE-Core rev: 396353c3127b20244c4c5cc321adad7d4e48f544)

(From OE-Core rev: e4761a4e62f44847343f939577009b425816b753)

Signed-off-by: Ruslan Bilovol <rbilovol@cisco.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:55 +01:00
Andre McCurdy
64512cbab8 image.bbclass: don't emit redundant IMAGE_CMD_xxx functions
IMAGE_CMD_xxx commands are always inlined within do_image_xxx.

When IMAGE_CMD_xxx is defined as a function (e.g. IMAGE_CMD_btrfs,
IMAGE_CMD_cpio, etc), a redundant copy of the function will be emitted
by default. Remove IMAGE_CMD_xxx 'func' flags to prevent that.

(From OE-Core rev: 118c1ca4d8d62162e87caf287f96d90707ee5903)

(From OE-Core rev: c316e3624b7bc0787904110994d0a519b9ce4d87)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
Christopher Clark
8e44c6bff9 linux-firmware: break out bnx2 mips firmware and WHENCE license
commit a19cfee10c1f1762da601125c17035cf7701ce91
Author: Christopher Clark <christopher.clark6@baesystems.com>
Date:   Thu Apr 14 17:00:20 2016 -0700

    linux-firmware: break out bnx2 mips firmware and WHENCE license

    Break out the bnx2 mips firmware into an independent subpackage.

    Since the bnx2 firmware license is contained in the common WHENCE file
    also package that separately so that other firmware that is licensed
    within that file may depend upon a standalone package containing it.

    Signed-off-by: Christopher Clark <christopher.clark6@baesystems.com>

(From OE-Core rev: a73a316429b256061a7aa48bcf29c5f96df68a8c)

(From OE-Core rev: bc4a122c87b66be194deb829dcaaaa7ad0cc6e0a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
Ross Burton
2fcce54c83 package: ensure do_split_packages doesn't return duplicates
do_split_packages() constructs a list of the packages created as it
iterates through the files, so if multiple files go into the same package then
the package will be repeated in the output.

Solve this by using a set() to store the created packages so that duplicates are
ignored.

(From OE-Core rev: b251f8b212f16b16b88183cc9a959d8cfa24fe3c)

(From OE-Core rev: 1aff01ddea6db059322939af0284dac370901546)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
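[Editor's note: a self-contained sketch of the pattern — simplified, not the
real do_split_packages signature. Package names are collected in a set while
iterating the files, so several files mapping to one package yield a single
entry.]

    import os
    import re

    def split_packages(root, file_regex, output_pattern):
        packages = set()
        for f in sorted(os.listdir(root)):
            m = re.match(file_regex, f)
            if m:
                # many files can map to the same package name
                packages.add(output_pattern % m.group(1))
        return list(packages)

    # e.g. split_packages("/some/dir", r"^lib(.*)\.so$", "mylib-%s")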
Ruslan Bilovol
1b769a0774 kernel-uimage: change target image to vmlinux
Commit e69525: "kernel: Build uImage only when really
needed" hardcoded the target kernel image to zImage for
the case where uImage is generated by the OpenEmbedded
build system.

However, not all kernel architectures support the zImage
target (AArch64, for example, does not), so the kernel
build fails at this step.

So instead of building the zImage target, which may not
exist for many architectures, build the vmlinux target,
which exists for all architectures.

Since kernel-uboot.bbclass uses vmlinux anyway for
creating the image, this change has no side effects.

(From OE-Core rev: ac5d4d42a5903cbcafd7247c282df1cb98f79f08)

(From OE-Core rev: 4b85501f4713ec1b7f54f2d3728f63cda32b5164)

Signed-off-by: Ruslan Bilovol <rbilovol@cisco.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
Felipe F. Tonello
be8dfdcb5a packagegroup-core-tools-profile: Enable valgrind on ARMv7a and above
Fixes: e5f41c221356 ("task-core-tools-profile: fix valgrind for arm and
systemtap for mips")

Valgrind works on ARMv7a and above.

(From OE-Core rev: 08cbf28d70505a6564193c3df63a0c1798d5214f)

(From OE-Core rev: dde8b5d61a3e97deabe09b5888094dd148914430)

Signed-off-by: Felipe F. Tonello <eu@felipetonello.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
Jussi Kukkonen
19f89f76bb gcc-sanitizers: Depend on target gcc
Without this, the target gcc might not be in the sysroot,
leading to a configure failure.

(From OE-Core rev: 329c532db4b2124fa3f4b3ab8c4c6d6c93ca7c2f)

(From OE-Core rev: 198a992cc1e30f1d061d97595c4f08e9a0bade76)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-29 19:35:54 +01:00
Christian Ege
4376fb8517 bluez5: fixed path to bluetoothd in sysvinit script
Within the sysvinit script the path to bluetoothd is wrong. Because of
this, the init script silently terminates without any message.

(From OE-Core rev: 4bcd78028ae1000ea4cd86f4a729d4497618ae85)

Signed-off-by: Christian Ege <k4230r6@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-15 18:18:12 +01:00
Elliot Smith
8f51f6153a toasterconf.json: exclude releases Toaster can't build
Due to changes in master to support Python 3, Toaster is no
longer able to build from master.

Remove references to master and set default release to krogoth.

(From OE-Core rev: b0b91490e4ede61a302eb547da2cc65aa7da87ff)

Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-03 15:02:19 +01:00
Ross Burton
829706d3c5 bitbake: fetch2: export DBUS_SESSION_BUS_ADDRESS to support authentication agents
Some users may want to use authenticated SSH connections with credentials stored
in a keyring, such as gnome-keyring.  These typically need a DBus session bus
connection, so pass DBUS_SESSION_BUS_ADDRESS into the fetcher environment.

To avoid the user needing to set it in their local.conf (which wouldn't be
usable) or adding it to the environment-cleansing whitelist (which could
potentially impact builds), allow the variables passed to the fetchers to
come from the data store (first) or the original environment (second).

From bitbake master rev: 20ad1ea87712d042bd5d89ce1957793f7ff71da0

(Bitbake rev: 26379ff2b686313c82af87a3a35b47adbc0183be)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-02 21:14:31 +01:00
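[Editor's note: a hedged sketch of the lookup order described above.
FakeDataStore is purely illustrative and stands in for the BitBake
datastore; only the datastore-first, environment-second ordering comes
from the commit message.]

    import os

    class FakeDataStore(object):
        def __init__(self, values):
            self.values = values
        def getVar(self, name, expand=True):
            return self.values.get(name)

    def get_fetcher_env(d, names):
        env = {}
        for name in names:
            # datastore value first, original environment second
            value = d.getVar(name) or os.environ.get(name)
            if value:
                env[name] = value
        return env

    d = FakeDataStore({"http_proxy": "http://proxy.example:8080"})
    print(get_fetcher_env(d, ["DBUS_SESSION_BUS_ADDRESS", "http_proxy"]))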
Saul Wold
da4bfbef46 gdb: Backport patch to changes with AVX and MPX
The current MPX target descriptions assume that MPX is always combined
with AVX, however that's not correct.  We can have machines with MPX
and without AVX; or machines with AVX and without MPX.

This patch adds new target descriptions for machines that support
both MPX and AVX, as duplicates of the existing MPX descriptions.

The following commit will remove AVX from the MPX-only descriptions.

This commit is backported from 7.12

(From OE-Core rev: 350fd5d16888b3882b861ce955a3383e99420bd4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-22 08:43:01 +01:00
Ioan-Adrian Ratiu
eff84a76ac gcc-4.9: fix build with gcc 6
Building gcc-cross 4.9.3 with gcc 6 fails with the following error:

error: 'const char* libc_name_p(const char*, unsigned int)' redeclared inline with 'gnu_inline' attribute

This is a backport of the upstream fix.

(From OE-Core rev: 178c1253c4e50d287476436abc92781fa96ef4fc)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-18 20:50:30 +01:00
Anders Darander
e8898f0188 lib/oe/rootfs: Fix DEBUGFS generation for opkg & openssl-cnf
When enabling extra DEBUGFS image generation with opkg, the following error is
seen when openssl-cnf is included in the image.

Collected errors:
 * file_md5sum_alloc: Failed to open file /mnt/cs-builds/anders/oe-build/build-ccu/tmp/work/ccu-oe-linux-gnueabi/ccu-image/1.0-r0/rootfs/usr/lib/ssl/openssl.cnf: No such file or directory.

Lots of similar issues were fixed by an earlier commit in oe-core,
5084ed9401250ed269a49d27b303806ab173c5d5, but openssl-cnf is outside the
scope of that fix.

Followup to [YOCTO #9490]

(From OE-Core rev: 20ea6d274bb0a9a5addb111f32793de49b907865)

(From OE-Core rev: cd4ad2b8a5bd11e91e854cea6a36c7b92fb7cea8)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
Randy Witt
5d11ed7162 devtool: Fix build-sdk when pn doesn't match filename
If an image with the filename foo.bb could be built using the name "bar"
instead, then build-sdk would fail to create the derivative sdk.

This was because the code assumed that the file name matched the target,
which is not necessarily the case.

(From OE-Core rev: d58a326b6960be14b8a049253559aec9582b7d0d)

(From OE-Core rev: da9e793fd7497e63404c987d68e3b630a89fc1c2)

Signed-off-by: Randy Witt <randy.e.witt@linux.intel.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
Richard Purdie
22f8a46d2d lib/classextend: Fix determinism issue
The ordering of dependency variables needs to be deterministic to avoid task checksums
changing. Use an OrderedDict to achieve this.

(From OE-Core rev: 855a2d21503856af392ab2d54ccfa270505ba142)

(From OE-Core rev: a89e4e27ba3f4bc3d1c649b3b8ad8ddc4d227d0d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
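[Editor's note: a small self-contained illustration of the fix. On the
Python versions in use at the time, plain dict iteration order was not
guaranteed, so any checksum input built from one could vary between
parses; an OrderedDict preserves insertion order. The variable names
here are illustrative.]

    from collections import OrderedDict

    deps = OrderedDict()
    for name in ("RDEPENDS", "RRECOMMENDS", "RSUGGESTS"):
        deps[name] = "mapped-" + name  # placeholder for the real mapping

    print(" ".join(deps))  # always: RDEPENDS RRECOMMENDS RSUGGESTS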
Richard Purdie
cc2522771e update-alternatives: Fix determinism issue
getVarFlags returns a dict and there is therefore no sort order. This
means the order of the X_VARDEPS_X variables can change and hence the
task checksums can change. This can lead to rebuilds of any parts of
the system using update-alternatives and their dependees. This is a
particular issue under python v3.

Add in a sort to make the order of the variables deterministic.

(From OE-Core rev: ecd1bfed5534f83b775a6c79092c04bd13c3af0a)

(From OE-Core rev: 438b140050a9040cdfb150bd53ecfd0647ec7d97)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
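[Editor's note: a self-contained illustration of the same pattern — iterate
the flag dict in sorted key order so the string that feeds the task checksum
is identical on every run. The flag names and values are made up.]

    flags = {"link": "/usr/bin/foo", "target": "/usr/bin/foo.bar"}

    vardeps = " ".join("%s=%s" % (k, flags[k]) for k in sorted(flags))
    print(vardeps)  # stable output regardless of dict ordering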
Richard Purdie
9e01e2ee5c image: Fix IMAGE_FEATURES determinism issue
remain_features uses a dict which means the order is not deterministic. This
can lead to the task hash changing depending on the state of the memory at
parse time. This is particularly noticeable under python v3.

Since the dict is helpful in constructing the data, pass the data through
sort() so the order is always deterministic.

(From OE-Core rev: b08344e28dd33e3af5596007b11185d04fce255e)

(From OE-Core rev: 6443cdfc963045ff305779f5d2326b1d588c6efe)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
Armin Kuster
f000d11753 openssl: Security fix via update to 1.0.2h
CVE-2016-2105
CVE-2016-2106
CVE-2016-2109
CVE-2016-2176

https://www.openssl.org/news/secadv/20160503.txt

fixup openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch

drop crypto_use_bigint_in_x86-64_perl.patch as that fix is in latest.

(From OE-Core rev: c693f34f54257a8eca9fe8c5a9eee5647b7eeb0c)

(From OE-Core rev: 73daaa207754e48efef59b516ad5601129cf4bac)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 21:29:30 +01:00
Bruce Ashfield
9930fca92b linux-yocto/4.4: bump to v4.4.10
(From OE-Core rev: 4f2898f598c466fa0fde5be64ac4d6a60aae68f7)

(From OE-Core rev: 776192eea7530aa9ffd4774d37bc5cfab84c51c4)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:38 +01:00
Bruce Ashfield
387560a718 linux-yocto/4.4: beaglebone: Enable drm for omap
To enable modsetting out of the box, we must turn on DRM.

(From OE-Core rev: 8d2b635cc2491e3d88d3a98465a9c9c063b6b9b5)

(From OE-Core rev: 4ce0d71d1a5433fb47c7c21100ae10d3cc767801)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:38 +01:00
Bruce Ashfield
0e5cbe52b0 linux-yocto/4.4: update to v4.4.9
Updating to the v4.4.9 korg -stable release:

(From OE-Core rev: d8d93df3282ad0f3bd23566152db99577f27ad90)

(From OE-Core rev: 2a7260bb2d59e53528c3c7b42c50f4f9c92250fa)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
3261f47b97 linux-yocto/4.4: bump to v4.4.8
Integrating the korg -stable releases.

(From OE-Core rev: 7ec1682e94c731b0a57faf2c01efb51725455592)

(From OE-Core rev: 5688f6062dad5862ed21180f354830fdf9f78337)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
b275edd305 linux-yocto-rt/4.1: update to rt23
(From OE-Core rev: ff6e06dcf0dd3da971cde22b3ce46b63f36db089)

(From OE-Core rev: 305995d6c0379c6c3ca818fec7093e499521c052)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
0011368622 linux-yocto/4.4: bump to v4.4.8
Integrating the korg -stable releases.

(From OE-Core rev: 688ec7b424b1daa92a5ca92491468af2c1ba226f)

(From OE-Core rev: c447db8744b078a7aaea1be02772e5e9646fded1)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
0461a6d433 linux-yocto/4.4: broxton enablement and refactoring
Merging the following commits to refactor and add broxton support:

 0d73a3bf6129 bsp/intel-corei7-64: Add intel-telemetry feature
 cee29e6234c7 features: add intel-telemetry feature
 3a700d737b65 bsp/intel-common: Add broxton to supported SoCs in intel-core* BSPs
 f584a0c22a39 features: add broxton soc feature
 7c2c2bd1a6aa baytrail;valleyisland: Use designware-usb3 feature instead of config
 7216db4cc7a6 features/usb: Add usb-designware2 and 3 features
 ade182658359 cfg/sound.cfg: Add USB audio support
 18ee21d9fba8 features/i915: Add CONFIG_KMS_FB_HELPER=y
 b3fa745962c2 features/soc/skylake: Refactor and comment config fragment

(From OE-Core rev: f6d09d460d8ef4b6468abf5b7813c5eba92adab3)

(From OE-Core rev: 978ca663d45f7147d66be1d38fcaa880d0001c67)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
b1df214a43 linux-yocto/4.4: skylake configuration
Integrating the following patches for skylake features and config:

  82c2ea9f6bf intel-common: enable support for skylake in intel common bsp
  269b6a7a98e2 intel-common-drivers: enable OSS Support
  71a19d3e6dc6 intel-pinctrl: enable pinctrl driver for skylake
  281f7db8c839 features: soc: enable configurations for skylake.

(From OE-Core rev: ab94ad02c35effad6fd3a1472737d1c73f53f7b3)

(From OE-Core rev: 4c9ec7633405eaee262aa9639cdf28cc4cec9688)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
9c353b0849 linux-yocto/4.4: BXT mmc fixes + PUNIT, tubropower, and telemetry backport
Integrating the following mainline (or mainline destined) patches to
support Intel Broxton:

076cc85486fd mmc: sdhci-acpi: Set MMC_CAP_AGGRESSIVE_PM for Broxton controllers
5d9c3aba78a1 mmc: sdhci-pci: Remove redundant runtime PM calls
aa0cd9a58d54 mmc: sdhci: Fix sdhci_runtime_pm_bus_on/off()
f47597d00af0 mmc: sdhci: 64-bit DMA actually has 4-byte alignment
a052a0703aed mmc: sdhci: Fix DMA descriptor with zero data length
f9200dd4bfec mmc: sdio: Fix invalid vdd in voltage switch power cycle
7bbf49488269 mmc: sdhci: Do not BUG on invalid vdd
39fde8b630a6 tools/power turbostat: decode BXT TSC frequency via CPUID
2b4b633da512 tools/power turbostat: initial BXT support
ee708ab5b74e intel_telemetry_debugfs: Fix unused warnings in telemetry debugfs
3053465d066b intel_telemetry_pltdrv: Change verbosity control bits
4c7732ec34bf platform:x86: Add Intel Telemetry Debugfs interfaces
401915397ddc platform:x86: Add Intel telemetry platform driver
eaaee25ac936 platform/x86: Add Intel Telemetry Core Driver
44c969c62726 platform:x86 decouple telemetry driver from the optional IPC resources
a6a2ecaf9980 platform:x86: Add Intel telemetry platform device
e1f16b86eab0 intel_pmc_ipc: Avoid pending IPC1 command during legacy suspend
ae91be46eb0d intel_pmc_ipc: Fix GCR register base address and length
3e15c1b19c81 intel_pmc_ipc: update acpi resource structure for Punit
5ec614cfd985 intel_punit_ipc: add NULL check for input parameters
4c3f01b178db platform:x86: add Intel P-Unit mailbox IPC driver
4826dbaac15f usb: dwc3: pci: add ID for one more Intel Broxton platform

(From OE-Core rev: 802758b2ade24040d16ce4b692a07f97bef39331)

(From OE-Core rev: 86bab7e5eaf19d259e60db6207ef687d43475dec)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
3be5cc1cb9 linux-yocto/4.1: make ltsi content available
In the better-late-than-never category, this commit integrates the
LTSI content into linux-yocto 4.1. We were already matching LTSI on
the kernel version front, with a small gap in patches. With this
commit, we have an "ltsi" branch that is pure LTSI on the mainline
kernel, and that commit is then merged into standard/base (to make
it available to all BSPs).

(From OE-Core rev: 7071ab47ce566398b398ac3d24eb3620a0353897)

(From OE-Core rev: e874e18ef46798e683c35a0ee7082ee4b6dd8d7e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
2956e5ab34 linux-yocto/4.1: update to v4.1.22
Integrating the korg -stable releases.

(From OE-Core rev: 417b1ef4d180b7434e69e5e8dff20298788f4007)

(From OE-Core rev: 571d500d33e0c555ad689565f299d0ed20c793cc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
de9f4b6982 kernel-yocto: allow branch auditing to be suspended
When working on the yocto-bsp and kernel-lab update for yocto 1.2
we found it was impossible for an end-user BSP to isolate patches
on a branch, since with the following commit:

  [kernel-yocto: enforce SRC_URI specified branch]

any new branch would be switched to whatever was specified in the
SRC_URI, undoing the work that the yocto-bsp tool did to support
board-specific patches.

To fix this, we keep the enforcement of branch consistency enabled
by default, but introduce a variable, "KMETA_AUDIT", which, when
not set, skips the check.

There is no impact for existing users; it is only something that
other plumbing commands and tools will need to use (or care about).

[YOCTO: #9120]

(From OE-Core rev: 1d4c120edeb6e45665eafd6962a10ebb89d758eb)

(From OE-Core rev: 364a3ba6a3e92fd24be1f9898683f3ae71ac143d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:37 +01:00
Bruce Ashfield
f911299295 kern-tools: handle directories with, or without, trailing /
Robert P. J. Day reported that configuration fragments and kernel
features were not being found when organized in a particular manner:

  linux
   - $BOARD
       - mm.patch
       - mm.scc
   - ssd_sil.cfg
   - ssd_sil.patch
   - ssd_sil.scc
   - uio.cfg
   .. etc

There was a bug in the tools: the mix of subdirs was not handled
properly, and a trailing / was left on the elements *not* in the
$BOARD subdir. As a result, the configuration fragments were not
found when searching the include paths, and a configuration failure
was triggered (due to missing files).

This change tweaks the tools to always check a path both with and
without a trailing / when processing config fragments, so they can
later be found when processing the configuration of the kernel.

Reported-by: "Robert P. J. Day" <rpjday@crashcourse.ca>
(From OE-Core rev: 92ba77bea59a33b0ddbd5db36e2a1b42e8fd7190)

(From OE-Core rev: 552e0a88a5e666396f0464fa99c953b4759aa35d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
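[Editor's note: a hypothetical Python sketch of the path handling described
above — the actual kern-tools differ. Each search directory is normalised so
an entry recorded with or without a trailing / resolves to the same
candidate path.]

    import os

    def find_fragment(name, search_dirs):
        for base in search_dirs:
            # "linux/" and "linux" now produce the same candidate path
            candidate = os.path.join(base.rstrip("/"), name)
            if os.path.isfile(candidate):
                return candidate
        return None

    print(find_fragment("uio.cfg", ["linux/", "linux/BOARD/"]))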
Bruce Ashfield
67454487be linux-yocto/4.4: sched/cgroup: Fix/cleanup cgroup teardown/init
Backporting a mainline commit to address splats that have been
seen on the 4.4 kernel:

(From OE-Core rev: 52550828662cc430fe4c5273d44c4b818aa21150)

(From OE-Core rev: 361e693b727073c088c25930c9c54b9e43a2b32a)

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Bruce Ashfield
c0d988e676 linux-yocto/uvesafb: print error message when task timeout occurs
Integrating the following commit to have a more informative error
message:

    uvesafb: print error message when task timeout occurs

    The driver waits for a response from user space for a pending
    task until a timeout (UVESAFB_TIMEOUT) occurs. But the existing
    error message in later steps is a little obscure.

    This patch throws out an error message when timeout happens.

    Signed-off-by: Jianxun Zhang <jianxun.zhang@linux.intel.com>

(From OE-Core rev: 1c6ba3c57eae77adb9ae5c0a60e3a9174ef398b6)

(From OE-Core rev: 8bc749b82e5ab1563cfbda2d32c5213681427f35)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Armin Kuster
42a4637a99 gcc: Security fix CVE-2016-4490
(From OE-Core rev: 927a53784f2cdc63332628f3c7938ce78a54c23b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Armin Kuster
5b97ffa980 gcc: Security fix CVE-2016-2226
(From OE-Core rev: 3152fc813db81398bd225323f7de3d59034ed879)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Armin Kuster
93f29f536e gcc: Security fix CVE-2016-4489
(From OE-Core rev: 448e625c566d305e70321bdfbbaa39be34211704)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Armin Kuster
5a1ac4ea59 gcc: Security fix CVE-2016-4488
(From OE-Core rev: de673641ec75b20a73eda81f3e7e8a8259993a14)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Christopher Larson
3aa988ef77 gcc: obey ldflags in the link of libgcc
Explicitly obey LDFLAGS, the way it should be obeyed, rather than
relying only on --with-linker-hash-style.

(From OE-Core rev: 146f601c7ff8d7af7e3704eaec815cec51953c4f)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:53:36 +01:00
Elliot Smith
75ca532114 bitbake: toaster: fix progress bar in MySQL environment
When using MySQL, the project builds info delivered by MySQL
differs from that delivered by SQLite: the former returns text
values from the enumeration for Build outcomes, while the latter
returns the integer value. This causes the progress bar JS to
break, as it is expecting outcome strings.

Modify the recent_build() method to include an outcomeText property
for each Build object, then use this in the conditionals in the
progress bar JS.

[YOCTO #9498]

(Bitbake rev: 9ea7d3ec59c2b09ae60cf0c7f18472355bfb98d7)

Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-13 17:45:58 +01:00
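[Editor's note: a self-contained sketch of the fix — the outcome values
below are made up, not Toaster's real enumeration. An outcomeText property
is attached so the progress-bar JS can always compare strings, whichever
backend produced the value.]

    OUTCOME_TEXT = {0: "In Progress", 1: "Succeeded", 2: "Failed"}

    class Build(object):
        def __init__(self, outcome):
            self.outcome = outcome

    def recent_builds(builds):
        for build in builds:
            # integers map through the table, text passes straight through
            build.outcomeText = OUTCOME_TEXT.get(build.outcome, build.outcome)
        return builds

    for b in recent_builds([Build(1), Build("Failed")]):
        print(b.outcomeText)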
Scott Rifenbark
898a78357e ref-manual: Added GObject Introspection to 2.1 migration section.
(From yocto-docs rev: 0b9ee8da66ff81e0724465f18b0323f1216cb9fa)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:01 +01:00
Scott Rifenbark
4f483c7390 dev-manual: Added Gobject Introspection section.
(From yocto-docs rev: be442bcb971c8685f8a2c6dde92b64479a211e2e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:01 +01:00
Scott Rifenbark
bdb4b02a08 ref-manual: Added new 2.1 migration misc. Change
Lists packages removed if package-management was not in
IMAGE_FEATURES.

(From yocto-docs rev: 45768d661b800782e32b76b4fa7efa0f70cb7e47)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:01 +01:00
Scott Rifenbark
c914668db2 sdk-manual: Applied review edits throughout the manual.
Updates included minor items for wording and clarity. Review
comments from David Kinder, Stephen Ballard, and Paul Eggleton.

(From yocto-docs rev: b25e5cab60f9c1e059fadd844a3a75d9df450ebf)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
badbddadcd ref-manual: Applied 2.1 Migration section review edits.
(From yocto-docs rev: d641e8404d13aa96f23c537045d1ce165a0fe119)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
5448ecf1a2 sdk-manual: Updated the normal customization.xml file.
Needs to use the downloadable XSL files and not the static
local 1.76.1 versions.

(From yocto-docs rev: 1dfc6081ffb745e424ff5f73c708e2559466831e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
9b1af2eb0c ref-manual: Fixed a grammar consistency error
The text referred to multiple options that function the same way as
two separate options; I had two successive sentences that were
inconsistent.

(From yocto-docs rev: 291fa846dba2bfcffae9d0538eba65df71c1092b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
cf88da9ae0 ref-manual: Applied review edit comments to the 2.1 migration section.
(From yocto-docs rev: 50eb2e0bcd4afaa2c097b4fa121051920cf21053)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
f832db4bbd dev-manual: Updated the "varname" use to "VARNAME"
This makes the use of this replaceable consistent with the
migration chapter in the ref-manual.

(From yocto-docs rev: 5c2f13f505986d2efc7bfa72c79b933f5a5c5ec1)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
Scott Rifenbark
9f96ef9d98 sdk-manual: Updated eclipse customization file.
This file was still using the 1.76.1 XSL style sheets. It needs
to use the downloadable ones.

(From yocto-docs rev: 27e29bedb2d1c080a23298fc0ae23054c40971aa)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-29 07:50:00 +01:00
5741 changed files with 283189 additions and 255368 deletions

10
.gitignore vendored

@@ -18,13 +18,9 @@ hob-image-*.bb
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported
/documentation/*/eclipse/
/documentation/*/*.html
/documentation/*/*.pdf
/documentation/*/*.tgz
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.html
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.pdf
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.tgz
documentation/user-manual/user-manual.html
documentation/user-manual/user-manual.pdf
documentation/user-manual/user-manual.tgz
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*

58
README Normal file

@@ -0,0 +1,58 @@
Poky
====
Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring an X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers that extend the system's capabilities in a modular way.
As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
Where to Send Patches
=====================
As Poky is an integration repository (built using a tool called combo-layer),
patches against the various components should be sent to their respective
upstreams:
bitbake:
Git repository: http://git.openembedded.org/bitbake/
Mailing list: bitbake-devel@lists.openembedded.org
documentation:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
Mailing list: yocto@yoctoproject.org
meta-poky, meta-yocto-bsp:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
Mailing list: poky@yoctoproject.org
Everything else should be sent to the OpenEmbedded Core mailing list. If in
doubt, check the oe-core git repository for the content you intend to modify.
Before sending, be sure the patches apply cleanly to the current oe-core git
repository.
Git repository: http://git.openembedded.org/openembedded-core/
Mailing list: openembedded-core@lists.openembedded.org
Note: The scripts directory should be treated with extra care as it is a mix of
oe-core and poky-specific files.


@@ -1,26 +0,0 @@
OE-Core aims to be able to provide basic LSB compatible images. There
are some challenges for OE as LSB isn't always 100% relevant to its
target embedded and IoT audiences.
One challenge is that the LSB spec is no longer being actively
developed [https://github.com/LinuxStandardBase/lsb] and has
components which are end of life or significantly dated. OE
therefore provides compatibility with the following caveats:
* Qt4 is provided by the separate meta-qt4 layer. It's noted that Qt4
is end of life and this isn't something the core project regularly
tests any longer. Users are recommended to group together to support
maintenance of that layer. [http://git.yoctoproject.org/cgit/cgit.cgi/meta-qt4/]
* mailx has been dropped since it's no longer being developed upstream
and there are better, more modern replacements such as s-nail
(http://sdaoden.eu/code.html) or mailutils (http://mailutils.org/).
* A few perl modules that were required by LSB 4.x aren't provided:
libclass-isa, libenv, libdumpvalue, libfile-checktree,
libi18n-collate, libpod-plainer.
* libpng 1.2 isn't provided; oe-core includes the latest release of libpng
instead.
* pax (POSIX standard archive) tool is not provided.


@@ -1 +0,0 @@
meta-yocto-bsp/README.hardware

500
README.hardware Normal file

@@ -0,0 +1,500 @@
Poky Hardware README
====================
This file gives details about using Poky with the reference machines
supported out of the box. A full list of supported reference target machines
can be found by looking in the following directories:
meta/conf/machine/
meta-yocto-bsp/conf/machine/
If you are in doubt about using Poky/OpenEmbedded with your hardware, consult
the documentation for your board/device.
Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:
http://yoctoproject.org/documentation
Support for physical reference hardware has now been split out into a
meta-yocto-bsp layer which can be removed separately from other layers if not
needed.
QEMU Emulation Targets
======================
To simplify development, the build system supports building images to
work with the QEMU emulator in system emulation mode. Several architectures
are currently supported:
* ARM (qemuarm)
* x86 (qemux86)
* x86-64 (qemux86-64)
* PowerPC (qemuppc)
* MIPS (qemumips)
Use of the QEMU images is covered in the Yocto Project Reference Manual.
The appropriate MACHINE variable value corresponding to the target is given
in brackets.
Hardware Reference Boards
=========================
The following boards are supported by the meta-yocto-bsp layer:
* Texas Instruments Beaglebone (beaglebone)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
For more information see the board's section below. The appropriate MACHINE
variable value corresponding to the board is given in brackets.
Reference Board Maintenance
===========================
Send pull requests, patches, comments or questions about meta-yocto-bsps to poky@yoctoproject.org
Maintainers: Kevin Hao <kexin.hao@windriver.com>
Bruce Ashfield <bruce.ashfield@windriver.com>
Consumer Devices
================
The following consumer devices are supported by the meta-yocto-bsp layer:
* Intel x86 based PCs and devices (genericx86)
* Ubiquiti Networks EdgeRouter Lite (edgerouter)
For more information see the device's section below. The appropriate MACHINE
variable value corresponding to the device is given in brackets.
Specific Hardware Documentation
===============================
Intel x86 based PCs and devices (genericx86)
==========================================
The genericx86 MACHINE is tested on the following platforms:
Intel Xeon/Core i-Series:
+ Intel Romley Server: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Canoe Pass CRB)
+ Intel Romley Server: Ivy Bridge Xeon processor, C600 PCH (Patsburg), (Intel SDP S2R3)
+ Intel Crystal Forest Server: Sandy Bridge Xeon processor, DH89xx PCH (Cave Creek), (Stargo CRB)
+ Intel Chief River Mobile: Ivy Bridge Mobile processor, QM77 PCH (Panther Point-M), (Emerald Lake II CRB, Sabino Canyon CRB)
+ Intel Huron River Mobile: Sandy Bridge processor, QM67 PCH (Cougar Point), (Emerald Lake CRB, EVOC EC7-1817LNAR board)
+ Intel Calpella Platform: Core i7 processor, QM57 PCH (Ibex Peak-M), (Red Fort CRB, Emerson MATXM CORE-411-B)
+ Intel Nehalem/Westmere-EP Server: Xeon 56xx/55xx processors, 5520 chipset, ICH10R IOH (82801), (Hanlan Creek CRB)
+ Intel Nehalem Workstation: Xeon 56xx/55xx processors, System SC5650SCWS (Greencity CRB)
+ Intel Picket Post Server: Xeon 56xx/55xx processors (Jasper Forest), 3420 chipset (Ibex Peak), (Osage CRB)
+ Intel Storage Platform: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Oak Creek Canyon CRB)
+ Intel Shark Bay Client Platform: Haswell processor, LynxPoint PCH, (Walnut Canyon CRB, Lava Canyon CRB, Basking Ridge CRB, Flathead Creek CRB)
+ Intel Shark Bay Ultrabook Platform: Haswell ULT processor, Lynx Point-LP PCH, (WhiteTip Mountain 1 CRB)
Intel Atom platforms:
+ Intel embedded Menlow: Intel Atom Z510/530 CPU, System Controller Hub US15W (Portwell NANO-8044)
+ Intel Luna Pier: Intel Atom N4xx/D5xx series CPU (aka: Pineview-D & -M), 82801HM I/O Hub (ICH8M), (Advantech AIMB-212, Moon Creek CRB)
+ Intel Queens Bay platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Emerson NITX-315, Crown Bay CRB, Minnow Board)
+ Intel Fish River Island platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Kontron KM2M806)
+ Intel Cedar Trail platform: Intel Atom N2000 & D2000 series CPU (aka: Cedarview), NM10 Express Chipset (Norco kit BIS-6630, Cedar Rock CRB)
and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
addition to common PC input devices, busses, and so on.
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb; be sure to verify this and use the correct
device, as the following commands are run as root and are not reversible.
USB Device:
1. Build a live image. This image type consists of a simple filesystem
without a partition table, which is suitable for USB keys, and with the
default setup for the genericx86 machine, this image type is built
automatically for any image you build. For example:
$ bitbake core-image-minimal
2. Use the "dd" utility to write the image to the raw block device. For
example:
# dd if=core-image-minimal-genericx86.hddimg of=/dev/sdb
If the device fails to boot with "Boot error" displayed, or apparently
stops just after the SYSLINUX version banner, it is likely the BIOS cannot
understand the physical layout of the disk (or rather it expects a
particular layout and cannot handle anything else). There are two possible
solutions to this problem:
1. Change the BIOS USB Device setting to HDD mode. The label will vary by
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode. To write an image to a USB device that will be bootable in
USB-ZIP mode, carry out the following actions:
a. Determine the geometry of your USB device using fdisk:
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
...
Command (m for help): q
b. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 1019 124 62
Where 1019, 124 and 62 are the cylinder, head and sectors/track counts
as reported by fdisk (substitute the values reported for your device).
When the operation has finished and the access LED (if any) on the
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.
c. Copy the contents of the image to the USB-ZIP mode device:
# mkdir /tmp/image
# mkdir /tmp/usbkey
# mount -o loop core-image-minimal-genericx86.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
d. Install the syslinux boot loader:
# syslinux /dev/sdb4
e. Unmount everything:
# umount /tmp/image
# umount /tmp/usbkey
Install the boot device in the target board and configure the BIOS to boot
from it.
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Texas Instruments Beaglebone (beaglebone)
=========================================
The Beaglebone is an ARM Cortex-A8 development board with USB, Ethernet, 2D/3D
accelerated graphics, audio, serial, JTAG, and SD/MMC. The Black adds a faster
CPU, more RAM, eMMC flash and a micro HDMI port. The beaglebone MACHINE is
tested on the following platforms:
o Beaglebone Black A6
o Beaglebone A6 (the original "White" model)
The Beaglebone Black has eMMC, while the White does not. Pressing the USER/BOOT
button when powering on will temporarily change the boot order. But for the sake
of simplicity, these instructions assume you have erased the eMMC on the Black,
so its boot behavior matches that of the White and boots off of SD card. To do
this, issue the following commands from the u-boot prompt:
# mmc dev 1
# mmc erase 0 512
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
The following assumes the SD card partitions 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
2. Install the boot loaders
# cp MLO-beaglebone /media/boot/MLO
# cp u-boot-beaglebone.img /media/boot/u-boot.img
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beaglebone.tar.bz2
4. If using core-image-base or core-image-sato images, the SD card is ready
and rootfs already contains the kernel, modules and device tree (DTB)
files necessary to be booted with U-boot's default configuration, so
skip directly to step 8.
For core-image-minimal, proceed through next steps.
5. If using core-image-minimal rootfs, install the modules
# tar x -C /media/root -f modules-beaglebone.tgz
6. If using core-image-minimal rootfs, install the kernel zImage into /boot
directory of rootfs
# cp zImage-beaglebone.bin /media/root/boot/zImage
7. If using core-image-minimal rootfs, also install device tree (DTB) files
into /boot directory of rootfs
# cp zImage-am335x-bone.dtb /media/root/boot/am335x-bone.dtb
# cp zImage-am335x-boneblack.dtb /media/root/boot/am335x-boneblack.dtb
8. Unmount the SD partitions, insert the SD card into the Beaglebone, and
boot the Beaglebone
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.
(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
same board in an enclosure with accessories. In any case it is fully
compatible with the instructions given here.)
Setup instructions
------------------
You will need the following:
* NFS root setup on your workstation
* TFTP server installed on your workstation
* Straight-thru 9-conductor serial cable (DB9, M/F) connected from your
PC to UART1
* Ethernet connected to the first ethernet port on the board
--- Preparation ---
Note: if you have altered your board's ethernet MAC address(es) from the
defaults, or you need to do so because you want multiple boards on the same
network, then you will need to change the values in the dts file (patch
linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
--- Booting from NFS root ---
Load the kernel and dtb (device tree blob), and boot the system as follows:
1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
files from the tmp/deploy directory, and make them available on your TFTP
server.
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
3. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
4. Set up the environment in U-Boot:
=> setenv ipaddr <board ip>
=> setenv serverip <tftp server ip>
=> setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
5. Download the kernel and dtb, and boot:
=> tftp 1000000 uImage-mpc8315e-rdb.bin
=> tftp 2000000 uImage-mpc8315e-rdb.dtb
=> bootm 1000000 - 2000000
--- Booting from JFFS2 root ---
1. First boot the board with NFS root.
2. Erase the MTD partition which will be used as root:
$ flash_eraseall /dev/mtd3
3. Copy the JFFS2 image to the MTD partition:
$ flashcp core-image-minimal-mpc8315e-rdb.jffs2 /dev/mtd3
4. Then reboot the board and set up the environment in U-Boot:
=> setenv bootargs root=/dev/mtdblock3 rootfstype=jffs2 console=ttyS0,115200
Ubiquiti Networks EdgeRouter Lite (edgerouter)
==============================================
The EdgeRouter Lite is part of the EdgeMax series. It is a MIPS64 router
(based on the Cavium Octeon processor) with 512MB of RAM, which uses an
internal USB pendrive for storage.
Setup instructions
------------------
You will need the following:
* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
port on the device
* Ethernet connected to the first ethernet port on the board
If using NFS as part of the setup process, you will also need:
* NFS root setup on your workstation
* TFTP server installed on your workstation (if fetching the kernel from
TFTP, see below).
--- Preparation ---
Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
The following instructions are based on core-image-minimal; other
image targets should be similar.
--- Booting from NFS root / kernel via TFTP ---
Load the kernel, and boot the system as follows:
1. Get the kernel (vmlinux) file from the tmp/deploy/images/edgerouter
directory, and make them available on your TFTP server.
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyS0 -b 115200
3. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
4. Set up the environment in U-Boot:
=> setenv ipaddr <board ip>
=> setenv serverip <tftp server ip>
5. Download the kernel and boot:
=> tftp $loadaddr vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
--- Booting from USB root ---
To boot from the USB disk, you either need to remove it from the edgerouter
box and populate it from another computer, or use a previously booted NFS
image and populate from the edgerouter itself.
Type 1: Mounted USB disk
------------------------
To boot from the USB disk there are two available partitions on the factory
USB storage. The rest of this guide assumes that these partitions are left
intact. If you change the partition scheme, you must update your boot method
appropriately.
The standard partitions are:
- 1: vfat partition containing factory kernels
- 2: ext3 partition for the root filesystem.
You can place the kernel on either partition 1 or partition 2, but the rootfs
must go on partition 2 (due to its size).
Note: If you place the kernel on the ext3 partition, you must re-create the
ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
cannot read the partition otherwise.
Steps:
1. Remove the USB disk from the edgerouter and insert it into a computer
that has access to your build artifacts.
2. Copy the kernel image to the USB storage (assuming discovered as 'sdb' on
the development machine):
2a) if booting from vfat
# mount /dev/sdb1 /mnt
# cp tmp/deploy/images/edgerouter/vmlinux /mnt
# umount /mnt
2b) if booting from ext3
# mkfs.ext3 -I 128 /dev/sdb2
# mount /dev/sdb2 /mnt
# mkdir /mnt/boot
# cp tmp/deploy/images/edgerouter/vmlinux /mnt/boot
# umount /mnt
3. Extract the rootfs to the USB storage ext3 partition
# mount /dev/sdb2 /mnt
# tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /mnt
# umount /mnt
4. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
command line:
5. Load the kernel and boot:
5a) vfat boot
=> fatload usb 0:1 $loadaddr vmlinux
5b) ext3 boot
=> ext2load usb 0:2 $loadaddr boot/vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
Type 2: NFS
-----------
Note: If you place the kernel on the ext3 partition, you must re-create the
ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
cannot read the partition otherwise.
These boot instructions assume that you have recreated the ext3 filesystem with
128 byte inodes, that you have an updated u-boot, or that you are running
an image capable of making the filesystem on the board itself.
1. Boot from NFS root
2. Mount the USB disk partition 2 and then extract the contents of
tmp/deploy/core-image-XXXX.tar.bz2 into it.
Before starting, copy core-image-minimal-xxx.tar.bz2 and vmlinux into
the rootfs path on your workstation, and then:
# mount /dev/sda2 /media/sda2
# tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /media/sda2
# cp vmlinux /media/sda2/boot/vmlinux
# umount /media/sda2
# reboot
3. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
command line:
# reboot
4. Load the kernel and boot:
=> ext2load usb 0:2 $loadaddr boot/vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)


@@ -1 +0,0 @@
meta-poky/README.poky


@@ -1,15 +0,0 @@
QEMU Emulation Targets
======================
To simplify development, the build system supports building images to
work with the QEMU emulator in system emulation mode. Several architectures
are currently supported in 32 and 64 bit variants:
* ARM (qemuarm + qemuarm64)
* x86 (qemux86 + qemux86-64)
* PowerPC (qemuppc only)
* MIPS (qemumips + qemumips64)
Use of the QEMU images is covered in the Yocto Project Reference Manual.
The appropriate MACHINE variable value corresponding to the target is given
in brackets.


@@ -5,15 +5,8 @@ The following external components are distributed with this software:
* The Toaster Simple UI application is based upon the Django project template, the files of which are covered by the BSD license and are copyright (c) Django Software
Foundation and individual contributors.
* Twitter Bootstrap (including Glyphicons), redistributed under the MIT license
* Twitter Bootstrap (including Glyphicons), redistributed under the Apache License 2.0.
* jQuery is redistributed under the MIT license.
* Twitter typeahead.js redistributed under the MIT license. Note that the JS source has one small modification, so the full unminified file is currently included to make it obvious where this is.
* jsrender is redistributed under the MIT license.
* QUnit is redistributed under the MIT license.
* Font Awesome fonts redistributed under the SIL Open Font License 1.1
* simplediff is distributed under the zlib license.


@@ -1,35 +0,0 @@
Bitbake
=======
BitBake is a generic task execution engine that allows shell and Python tasks to be run
efficiently and in parallel while working within complex inter-task dependency constraints.
One of BitBake's main users, OpenEmbedded, takes this core and builds embedded Linux software
stacks using a task-oriented approach.
For information about Bitbake, see the OpenEmbedded website:
http://www.openembedded.org/
BitBake's plain-text documentation can be found under the doc directory; an
integrated HTML version is available at the Yocto Project website:
http://yoctoproject.org/documentation
Contributing
------------
Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches. Note that this documentation is
intended for OpenEmbedded (and its core) rather than BitBake patches (which
go to bitbake-devel@lists.openembedded.org), but the same general guidelines
apply. Once the commit(s) have been created, the way to send the patch is
through git-send-email. For example, to send the last commit (HEAD) on the
current branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
Mailing list:
http://lists.openembedded.org/mailman/listinfo/bitbake-devel
Source code:
http://git.openembedded.org/bitbake/


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -35,10 +35,7 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "1.39.1"
__version__ = "1.30.0"
if __name__ == "__main__":
if __version__ != bb.__version__:


@@ -1,9 +1,9 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-diffsigs
# BitBake task signature data comparison utility
#
# Copyright (C) 2012-2013, 2017 Intel Corporation
# Copyright (C) 2012-2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -22,162 +22,117 @@ import os
import sys
import warnings
import fnmatch
import argparse
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-diffsigs')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
'logging.LogRecord',
'bb.command.CommandCompleted',
'bb.command.CommandFailed'])
ret = tinfoil.run_command('findSigInfo', pn, taskname, sigs)
if ret:
while True:
event = tinfoil.wait_event(1)
if event:
if isinstance(event, bb.command.CommandCompleted):
break
elif isinstance(event, bb.command.CommandFailed):
logger.error(str(event))
sys.exit(2)
elif isinstance(event, bb.event.FindSigInfoResult):
result = event.result
elif isinstance(event, logging.LogRecord):
logger.handle(event)
else:
logger.error('No result returned from findSigInfo command')
sys.exit(2)
return result
logger = logger_create('bitbake-diffsigs')
def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False):
def find_compare_task(bbhandler, pn, taskname):
""" Find the most recent signature files for the specified PN/task and compare them """
def get_hashval(siginfo):
if siginfo.endswith('.siginfo'):
return siginfo.rpartition(':')[2].partition('_')[0]
else:
return siginfo.rpartition('.')[2]
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
else:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
# It's possible that latestfiles contain 3 elements and the first two have the same hash value.
# In this case, we delete the second element.
# The above case is actually the most common one. Because we may have sigdata file and siginfo
# file having the same hash value. Comparing such two files makes no sense.
if len(latestfiles) == 3:
hash0 = get_hashval(latestfiles[0])
hash1 = get_hashval(latestfiles[1])
if hash0 == hash1:
latestfiles.pop(1)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = find_siginfo(bbhandler, key, None, hashes)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
recout = []
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
return recout
return recout
# Recurse into signature comparison
logger.debug("Signature file (previous): %s" % latestfiles[-2])
logger.debug("Signature file (latest): %s" % latestfiles[-1])
output = bb.siggen.compare_sigfiles(latestfiles[-2], latestfiles[-1], recursecb, color=color)
if output:
print('\n'.join(output))
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print '\n'.join(output)
sys.exit(0)
parser = argparse.ArgumentParser(
description="Compares siginfo/sigdata files written out by BitBake")
parser = optparse.OptionParser(
description = "Compares siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile1 sigdatafile2
%prog sigdatafile1""")
parser.add_argument('-d', '--debug',
help='Enable debug output',
action='store_true')
parser.add_option("-t", "--task",
help = "find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
parser.add_argument('--color',
help='Colorize output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument("-t", "--task",
help="find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if options.debug:
logger.setLevel(logging.DEBUG)
color = (options.color == 'always' or (options.color == 'auto' and sys.stdout.isatty()))
options, args = parser.parse_args(sys.argv)
if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if options.sigargs:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1], color=color)
else:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], color=color)
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1])
else:
if options.sigargs:
logger.error('-s/--signature can only be used together with -t/--task')
sys.exit(1)
try:
if options.sigdatafile1 and options.sigdatafile2:
output = bb.siggen.compare_sigfiles(options.sigdatafile1, options.sigdatafile2, color=color)
elif options.sigdatafile1:
output = bb.siggen.dump_sigfile(options.sigdatafile1)
else:
logger.error('Must specify signature file(s) or -t/--task')
parser.print_help()
if len(args) == 1:
parser.print_help()
else:
import cPickle
try:
if len(args) == 2:
output = bb.siggen.dump_sigfile(sys.argv[1])
else:
output = bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
if output:
print('\n'.join(output))
if output:
print '\n'.join(output)
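For reference, typical invocations of the argparse-based interface shown
above (the recipe and task names are illustrative, and the signature and file
arguments are placeholders):
   $ bitbake-diffsigs -t busybox do_compile
   $ bitbake-diffsigs -t busybox do_compile -s <fromsig> <tosig>
   $ bitbake-diffsigs file1.sigdata file2.sigdata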


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-dumpsig
# BitBake task signature dump utility
@@ -23,72 +23,43 @@ import sys
import warnings
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-dumpsig')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_siginfo_task(bbhandler, pn, taskname):
""" Find the most recent signature file for the specified PN/task """
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-1:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
return latestfiles[0]
logger = logger_create('bitbake-dumpsig')
parser = optparse.OptionParser(
description = "Dumps siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile""")
parser.add_option("-D", "--debug",
help = "enable debug",
action = "store_true", dest="debug", default = False)
parser.add_option("-t", "--task",
help = "find the signature data file for the specified task",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
options, args = parser.parse_args(sys.argv)
if options.debug:
logger.setLevel(logging.DEBUG)
if options.taskargs:
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
file = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
logger.debug("Signature file: %s" % file)
elif len(args) == 1:
if len(args) == 1:
parser.print_help()
sys.exit(0)
else:
file = args[1]
import cPickle
try:
output = bb.siggen.dump_sigfile(args[1])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
try:
output = bb.siggen.dump_sigfile(file)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
if output:
print('\n'.join(output))
if output:
print '\n'.join(output)
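Typical invocations, matching the option parser above (the recipe, task, and
file names are illustrative):
   $ bitbake-dumpsig -t busybox do_compile
   $ bitbake-dumpsig file.sigdata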

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys,logging
import optparse


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
#
# Copyright (C) 2012 Richard Purdie
#
@@ -22,57 +22,34 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib
import unittest
try:
import bb
import layerindexlib
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.cooker",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.event",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",
"layerindexlib.tests.cooker"]
def usage():
print('usage: [BB_SKIP_NETTESTS=yes] %s [-v] [testname1 [testname2]...]' % os.path.basename(sys.argv[0]))
verbosity = 1
tests = sys.argv[1:]
if '-v' in sys.argv:
tests.remove('-v')
verbosity = 2
if tests:
if '--help' in sys.argv[1:]:
usage()
sys.exit(0)
else:
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
for t in tests:
t = '.'.join(t.split('.')[:3])
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests, verbosity=verbosity)
# Set-up logging
class StdoutStreamHandler(logging.StreamHandler):
"""Special handler so that unittest is able to capture stdout"""
def __init__(self):
# Override __init__() because we don't want to set self.stream here
logging.Handler.__init__(self)
@property
def stream(self):
# We want to dynamically write wherever sys.stdout is pointing to
return sys.stdout
handler = StdoutStreamHandler()
bb.logger.addHandler(handler)
bb.logger.setLevel(logging.DEBUG)
ENV_HELP = """\
Environment variables:
BB_SKIP_NETTESTS set to 'yes' in order to skip tests using network
connection
BB_TMPDIR_NOCLEAN set to 'yes' to preserve test tmp directories
"""
class main(unittest.main):
def _print_help(self, *args, **kwargs):
super(main, self)._print_help(*args, **kwargs)
print(ENV_HELP)
if __name__ == '__main__':
main(defaultTest=tests, buffer=True)
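Example invocations, based on the usage string and environment help above
(the test module names come from the built-in test list):
   $ bitbake-selftest -v bb.tests.fetch
   $ BB_SKIP_NETTESTS=yes bitbake-selftest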


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys
@@ -10,14 +10,7 @@ import bb
import select
import errno
import signal
import pickle
import traceback
import queue
from multiprocessing import Lock
from threading import Thread
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -37,16 +30,19 @@ if sys.argv[1].startswith("decafbadbad"):
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
import fcntl
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
#sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
logger = logging.getLogger("BitBake")
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
worker_pipe = sys.stdout.fileno()
bb.utils.nonblockingfd(worker_pipe)
# Need to guard against multiprocessing being used in child processes
@@ -66,54 +62,36 @@ if 0:
consolelog.setFormatter(conlogformat)
logger.addHandler(consolelog)
worker_queue = queue.Queue()
worker_queue = ""
def worker_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
data = "<event>" + pickle.dumps(event) + "</event>"
worker_fire_prepickled(data)
def worker_fire_prepickled(event):
global worker_queue
worker_queue.put(event)
worker_queue = worker_queue + event
worker_flush()
#
# We can end up with write contention with the cooker, it can be trying to send commands
# and we can be trying to send event data back. Therefore use a separate thread for writing
# back data to cooker.
#
worker_thread_exit = False
def worker_flush():
global worker_queue, worker_pipe
def worker_flush(worker_queue):
worker_queue_int = b""
global worker_pipe, worker_thread_exit
if not worker_queue:
return
while True:
try:
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
raise
if worker_thread_exit and worker_queue.empty() and not worker_queue_int:
return
worker_thread = Thread(target=worker_flush, args=(worker_queue,))
worker_thread.start()
try:
written = os.write(worker_pipe, worker_queue)
worker_queue = worker_queue[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
raise
def worker_child_fire(event, d):
global worker_pipe
global worker_pipe_lock
data = b"<event>" + pickle.dumps(event) + b"</event>"
data = "<event>" + pickle.dumps(event) + "</event>"
try:
worker_pipe_lock.acquire()
worker_pipe.write(data)
@@ -136,7 +114,7 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdata, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -152,10 +130,8 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
except TypeError:
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not cfg.dry_run:
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -183,8 +159,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
pipeout = os.fdopen(pipeout, 'wb', 0)
pid = os.fork()
except OSError as e:
logger.critical("fork failed: %d (%s)" % (e.errno, e.strerror))
sys.exit(1)
bb.msg.fatal("RunQueue", "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
def child():
@@ -216,58 +191,39 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
if umask:
os.umask(umask)
data.setVar("BB_WORKERCONTEXT", "1")
data.setVar("BB_TASKDEPDATA", taskdepdata)
data.setVar("BUILDNAME", workerdata["buildname"])
data.setVar("DATE", workerdata["date"])
data.setVar("TIME", workerdata["time"])
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
the_data.setVar("DATE", workerdata["date"])
the_data.setVar("TIME", workerdata["time"])
for varname, value in extraconfigdata.items():
the_data.setVar(varname, value)
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
the_data = bb_cache.loadDataFull(fn, appends)
the_data = bb.cache.Cache.loadDataFull(fn, appends, data)
the_data.setVar('BB_TASKHASH', workerdata["runq_hash"][task])
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN", True), taskname.replace("do_", "")))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
bb.utils.empty_environment()
for e, v in exports:
os.environ[e] = v
for e in fakeenv:
os.environ[e] = fakeenv[e]
the_data.setVar(e, fakeenv[e])
the_data.setVarFlag(e, 'export', "1")
task_exports = the_data.getVarFlag(taskname, 'exports')
if task_exports:
for e in task_exports.split():
the_data.setVarFlag(e, 'export', '1')
v = the_data.getVar(e)
if v is not None:
os.environ[e] = v
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
except Exception:
except Exception as exc:
if not quieterrors:
logger.critical(traceback.format_exc())
logger.critical(str(exc))
os._exit(1)
try:
if dry_run:
if cfg.dry_run:
return 0
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
@@ -284,7 +240,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
bb.utils.process_profilelog(profname)
os._exit(ret)
else:
for key, value in iter(envbackup.items()):
for key, value in envbackup.iteritems():
if value is None:
del os.environ[key]
else:
@@ -301,22 +257,22 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
def read(self):
start = len(self.queue)
try:
self.queue = self.queue + (self.input.read(102400) or b"")
self.queue = self.queue + self.input.read(102400)
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
end = len(self.queue)
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
while index != -1:
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
return (end > start)
def close(self):
@@ -332,11 +288,10 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
self.cookercfg = None
self.databuilder = None
self.data = None
self.extraconfigdata = None
self.build_pids = {}
self.build_pipes = {}
@@ -370,29 +325,27 @@ class BitbakeWorker(object):
except (OSError, IOError):
pass
if len(self.queue):
self.handle_item(b"cookerconfig", self.handle_cookercfg)
self.handle_item(b"extraconfigdata", self.handle_extraconfigdata)
self.handle_item(b"workerdata", self.handle_workerdata)
self.handle_item(b"runtask", self.handle_runtask)
self.handle_item(b"finishnow", self.handle_finishnow)
self.handle_item(b"ping", self.handle_ping)
self.handle_item(b"quit", self.handle_quit)
self.handle_item("cookerconfig", self.handle_cookercfg)
self.handle_item("workerdata", self.handle_workerdata)
self.handle_item("runtask", self.handle_runtask)
self.handle_item("finishnow", self.handle_finishnow)
self.handle_item("ping", self.handle_ping)
self.handle_item("quit", self.handle_quit)
for pipe in self.build_pipes:
if self.build_pipes[pipe].input in ready:
self.build_pipes[pipe].read()
self.build_pipes[pipe].read()
if len(self.build_pids):
while self.process_waitpid():
continue
self.process_waitpid()
worker_flush()
def handle_item(self, item, func):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
if self.queue.startswith("<" + item + ">"):
index = self.queue.find("</" + item + ">")
while index != -1:
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
index = self.queue.find("</" + item + ">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
@@ -400,22 +353,18 @@ class BitbakeWorker(object):
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
self.extraconfigdata = pickle.loads(data)
def handle_workerdata(self, data):
self.workerdata = pickle.loads(data)
bb.msg.loggerDefaultDebugLevel = self.workerdata["logdefaultdebug"]
bb.msg.loggerDefaultVerbose = self.workerdata["logdefaultverbose"]
bb.msg.loggerVerboseLogs = self.workerdata["logdefaultverboselogs"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.data.setVar("PRSERV_HOST", self.workerdata["prhost"])
def handle_ping(self, _):
workerlog_write("Handling ping\n")
logger.warning("Pong from bitbake-worker!")
logger.warn("Pong from bitbake-worker!")
def handle_quit(self, data):
workerlog_write("Handling quit\n")
@@ -425,10 +374,10 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
fn, task, taskname, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
fn, task, taskname, quieterrors, appends, taskdepdata = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.workerdata, fn, task, taskname, appends, taskdepdata, quieterrors)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
@@ -441,9 +390,9 @@ class BitbakeWorker(object):
try:
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
return False
return None
except OSError:
return False
return None
workerlog_write("Exit code of %s for pid %s\n" % (status, pid))
@@ -460,14 +409,12 @@ class BitbakeWorker(object):
self.build_pipes[pid].close()
del self.build_pipes[pid]
worker_fire_prepickled(b"<exitcode>" + pickle.dumps((task, status)) + b"</exitcode>")
return True
worker_fire_prepickled("<exitcode>" + pickle.dumps((task, status)) + "</exitcode>")
def handle_finishnow(self, _):
if self.build_pids:
logger.info("Sending SIGTERM to remaining %s tasks", len(self.build_pids))
for k, v in iter(self.build_pids.items()):
for k, v in self.build_pids.iteritems():
try:
os.kill(-k, signal.SIGTERM)
os.waitpid(-1, 0)
@@ -477,7 +424,7 @@ class BitbakeWorker(object):
self.build_pipes[pipe].read()
try:
worker = BitbakeWorker(os.fdopen(sys.stdin.fileno(), 'rb'))
worker = BitbakeWorker(sys.stdin)
if not profiling:
worker.serve()
else:
@@ -493,9 +440,8 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
worker_thread_exit = True
worker_thread.join()
while len(worker_queue):
worker_flush()
workerlog_write("exitting")
sys.exit(0)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#


@@ -1,165 +0,0 @@
#!/usr/bin/env python3
"""git-make-shallow: make the current git repository shallow
Remove the history of the specified revisions, then optionally filter the
available refs to those specified.
"""
import argparse
import collections
import errno
import itertools
import os
import subprocess
import sys
version = 1.0
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
args = process_args()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(['git', 'fsck', '--unreachable'])
def process_args():
# TODO: add argument to automatically keep local-only refs, since they
# can't be easily restored with a git fetch.
parser = argparse.ArgumentParser(description='Remove the history of the specified revisions, then optionally filter the available refs to those specified.')
parser.add_argument('--ref', '-r', metavar='REF', action='append', dest='refs', help='remove all but the specified refs (cumulative)')
parser.add_argument('--shrink', '-s', action='store_true', help='shrink the git repository by repacking and pruning')
parser.add_argument('revisions', metavar='REVISION', nargs='+', help='a git revision/commit')
if len(sys.argv) < 2:
parser.print_help()
sys.exit(2)
args = parser.parse_args()
if args.refs:
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
def check_output(cmd, input=None):
return subprocess.check_output(cmd, universal_newlines=True, input=input)
def make_shallow(shallow_file, revisions, refs):
"""Remove the history of the specified revisions."""
for rev in follow_history_intersections(revisions, refs):
print("Processing %s" % rev)
with open(shallow_file, 'a') as f:
f.write(rev + '\n')
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
refs = [r[0] for r in ref_split]
return refs
def iter_extend(iterable, length, obj=None):
"""Ensure that iterable is the specified length by extending with obj."""
return itertools.islice(itertools.chain(iterable, itertools.repeat(obj)), length)
def filter_refs(refs):
"""Remove all but the specified refs from the git repository."""
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
def follow_history_intersections(revisions, refs):
"""Determine all the points where the history of the specified revisions intersects the specified refs."""
queue = collections.deque(revisions)
seen = set()
for rev in iter_except(queue.popleft, IndexError):
if rev in seen:
continue
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
if not parents:
continue
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
queue.append(merge_base)
def iter_except(func, exception, start=None):
"""Yield a function repeatedly until it raises an exception."""
try:
if start is not None:
yield start()
while True:
yield func()
except exception:
pass
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':
main()
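An illustrative invocation (the revision and ref are examples only): make the
current repository shallow at HEAD~20, keep only the master branch, and
repack to reclaim space:
   $ git-make-shallow --shrink --ref refs/heads/master HEAD~20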

bitbake/bin/image-writer (new executable file)

@@ -0,0 +1,122 @@
#!/usr/bin/env python
# Copyright (c) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( \
os.path.abspath(__file__))), 'lib'))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
import gtk
import optparse
import pygtk
from bb.ui.crumbs.hobwidget import HobAltButton, HobButton
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog
from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog
# All the fs types bitbake supports are listed here. Needs more testing.
DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "ext4", "btrfs", "squashfs", "ubi", "vmdk"]
Title = "USB Image Writer"
class DeployWindow(gtk.Window):
def __init__(self, image_path=''):
super(DeployWindow, self).__init__()
if len(image_path) > 0:
valid = True
if not os.path.exists(image_path):
valid = False
lbl = "<b>Invalid image file path: %s.</b>\nPress <b>Select Image</b> to select an image." % image_path
else:
image_path = os.path.abspath(image_path)
extend_name = os.path.splitext(image_path)[1][1:]
if extend_name not in DEPLOYABLE_IMAGE_TYPES:
valid = False
lbl = "<b>Undeployable imge type: %s</b>\nPress <b>Select Image</b> to select an image." % extend_name
if not valid:
image_path = ''
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
self.deploy_dialog = DeployImageDialog(Title, image_path, self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR, None, standalone=True)
close_button = self.deploy_dialog.add_button("Close", gtk.RESPONSE_NO)
HobAltButton.style_button(close_button)
close_button.connect('clicked', gtk.main_quit)
write_button = self.deploy_dialog.add_button("Write USB image", gtk.RESPONSE_YES)
HobAltButton.style_button(write_button)
self.deploy_dialog.connect('select_image_clicked', self.select_image_clicked_cb)
self.deploy_dialog.connect('destroy', gtk.main_quit)
response = self.deploy_dialog.show()
def select_image_clicked_cb(self, dialog):
cwd = os.getcwd()
dialog = ImageSelectionDialog(cwd, DEPLOYABLE_IMAGE_TYPES, Title, self, gtk.FILE_CHOOSER_ACTION_SAVE )
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobAltButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nClicked the radio button to select a image."
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
# get the full path of image
image_path = os.path.join(dialog.image_folder, dialog.image_names[0])
self.deploy_dialog.set_image_text_buffer(image_path)
self.deploy_dialog.set_image_path(image_path)
dialog.destroy()
def main():
parser = optparse.OptionParser(
usage = """%prog [-h] [image_file]
%prog writes bootable images to USB devices. You can
provide the image file on the command line or select it using the GUI.""")
options, args = parser.parse_args(sys.argv)
image_file = args[1] if len(args) > 1 else ''
dw = DeployWindow(image_file)
if __name__ == '__main__':
try:
main()
gtk.main()
except Exception:
import traceback
traceback.print_exc()


@@ -17,60 +17,23 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
HELP="
Usage: source toaster start|stop [webport=<address:port>] [noweb] [nobuild] [toasterdir]
Optional arguments:
[nobuild] Setup the environment for capturing builds with toaster but disable managed builds
[noweb] Setup the environment for capturing builds with toaster but don't start the web server
[webport] Set the development server (default: localhost:8000)
[toasterdir] Set absolute path to be used as TOASTER_DIR (default: BUILDDIR/../)
"
# Usage: source toaster [start|stop]
# [webport=<port>] [noui] [noweb]
custom_extention()
{
custom_extension=$BBBASEDIR/lib/toaster/orm/fixtures/custom_toaster_append.sh
if [ -f $custom_extension ] ; then
$custom_extension $*
fi
}
databaseCheck()
{
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
export TEMPLATECONF
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; aborting\n"
return $retval
fi
return $retval
}
# Helper function to kill a background toaster development server
webserverKillAll()
{
local pidfile
if [ -f ${BUILDDIR}/.toastermain.pid ] ; then
custom_extention web_stop_postpend
else
custom_extention noweb_stop_postpend
fi
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM $pid 2>/dev/null
kill -SIGTERM -$pid 2>/dev/null
sleep 1
# Kill processes if they are still running - may happen
# in interactive shells
ps fux | grep "python.*manage.py runserver" | awk '{print $2}' | xargs kill
done
rm ${pidfile}
fi
@@ -86,13 +49,25 @@ webserverStartAll()
fi
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
# check the database
databaseCheck || return 1
if [ $retval -eq 1 ]; then
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; aborting\n"
return $retval
fi
echo "Starting webserver..."
$MANAGE runserver --noreload "$ADDR_PORT" \
$MANAGE runserver "0.0.0.0:$WEB_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
@@ -102,9 +77,7 @@ webserverStartAll()
retval=1
rm "${BUILDDIR}/.toastermain.pid"
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
custom_extention web_start_postpend $ADDR_PORT
echo "Webserver address: http://0.0.0.0:$WEB_PORT/"
fi
return $retval
@@ -118,8 +91,14 @@ stop_system()
# prevent reentry
if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
INSTOPSYSTEM=1
if [ -f ${BUILDDIR}/.toasterui.pid ]; then
kill `cat ${BUILDDIR}/.toasterui.pid` 2>/dev/null
rm ${BUILDDIR}/.toasterui.pid
fi
webserverKillAll
# unset exported variables
unset DATABASE_URL
unset TOASTER_CONF
unset TOASTER_DIR
unset BITBAKE_UI
unset BBBASEDIR
@@ -130,20 +109,14 @@ stop_system()
verify_prereq() {
# Verify Django version
reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
reqfile=$(python -c "import os; print os.path.realpath('$BBBASEDIR/toaster-requirements.txt')")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
# expand version parts to 2 digits to support 1.10.x > 1.8
# (note:helper functions hard to insert in-line)
exp=$exp'import sys,django;'
exp=$exp'version=["%02d" % int(n) for n in django.get_version().split(".")];'
exp=$exp'vmin=["%02d" % int(n) for n in "\2".split(".")];'
exp=$exp'vmax=["%02d" % int(n) for n in "\4".split(".")];'
exp=$exp'sys.exit(not (version \1 vmin and version \3 vmax))'
exp=$exp'/p'
if ! sed -n "$exp" $reqfile | python3 - ; then
exp=$exp'import sys,django;version=django.get_version().split(".");'
exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
if ! sed -n "$exp" $reqfile | python - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
echo "Please install with pip3 install -r $reqfile"
echo "Please install with pip install -r $reqfile"
return 2
fi
@@ -160,8 +133,8 @@ else
fi
export BBBASEDIR=`dirname $TOASTER`/..
MANAGE="python3 $BBBASEDIR/lib/toaster/manage.py"
OE_ROOT=`dirname $TOASTER`/../..
MANAGE=$BBBASEDIR/lib/toaster/manage.py
OEROOT=`dirname $TOASTER`/../..
# this is the configuration file we are using for toaster
# we are using the same logic that oe-setup-builddir uses
@@ -171,32 +144,47 @@ OE_ROOT=`dirname $TOASTER`/../..
# in the local layers that currently make using an arbitrary
# toasterconf.json difficult.
. $OE_ROOT/.templateconf
. $OEROOT/.templateconf
if [ -n "$TEMPLATECONF" ]; then
if [ ! -d "$TEMPLATECONF" ]; then
# Allow TEMPLATECONF=meta-xyz/conf as a shortcut
if [ -d "$OE_ROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OE_ROOT/$TEMPLATECONF"
if [ -d "$OEROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OEROOT/$TEMPLATECONF"
fi
if [ ! -d "$TEMPLATECONF" ]; then
echo >&2 "Error: '$TEMPLATECONF' must be a directory containing toasterconf.json"
return 1
fi
fi
fi
unset OE_ROOT
if [ "$TOASTER_CONF" = "" ]; then
TOASTER_CONF="$TEMPLATECONF/toasterconf.json"
export TOASTER_CONF=$(python -c "import os; print os.path.realpath('$TOASTER_CONF')")
fi
if [ ! -f $TOASTER_CONF ]; then
echo "$TOASTER_CONF configuration file not found. Set TOASTER_CONF to specify file or fix .templateconf"
return 1
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
# note: for future. in order to make this an arbitrary directory, we need to
# make sure that the toaster.sqlite file doesn't default to `pwd` like it currently does.
export TOASTER_DIR=`pwd`
WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
TOASTERDIR=`dirname $BUILDDIR`
WEB_PORT="8000"
unset CMD
for param in $*; do
case $param in
noweb )
WEBSERVER=0
;;
nobuild )
TOASTER_BUILDSERVER=0
;;
start )
CMD=$param
;;
@@ -204,27 +192,7 @@ for param in $*; do
CMD=$param
;;
webport=*)
ADDR_PORT="${param#*=}"
# Split the addr:port string
ADDR=`echo $ADDR_PORT | cut -f 1 -d ':'`
PORT=`echo $ADDR_PORT | cut -f 2 -d ':'`
# If only a port has been specified then set address to localhost.
if [ $ADDR = $PORT ] ; then
ADDR_PORT="localhost:$PORT"
fi
;;
toasterdir=*)
TOASTERDIR="${param#*=}"
;;
--help)
echo "$HELP"
return 0
;;
*)
echo "$HELP"
return 1
;;
WEB_PORT="${param#*=}"
esac
done
@@ -246,8 +214,10 @@ fi
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# note: for future. in order to make this an arbitrary directory, we need to
# make sure that the toaster.sqlite file doesn't default to `pwd`
# like it currently does.
export TOASTER_DIR=`dirname $BUILDDIR`
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
@@ -256,23 +226,30 @@ if [ "$CMD" = "start" ] ; then
return 1
fi
elif [ "$CMD" = "" ]; then
echo "No command specified"
echo "$HELP"
return 1
if [ -z "$BBSERVER" ]; then
CMD="start"
else
CMD="stop"
fi
fi
echo "The system will $CMD."
# Execute the commands
custom_extention toaster_prepend $CMD $ADDR_PORT
case $CMD in
start )
# check if addr:port is not in use
if [ "$CMD" == 'start' ]; then
if [ $WEBSERVER -gt 0 ]; then
$MANAGE checksocket "$ADDR_PORT" || return 1
fi
$MANAGE checksocket "0.0.0.0:$WEB_PORT" || return 1
fi
# kill Toaster web server if it's alive
if [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
echo "Warning: bitbake appears to be dead, but the Toaster web server is running." 1>&2
echo " Something fishy is going on." 1>&2
echo "Cleaning up the web server to start from a clean slate."
webserverKillAll
fi
# Create configuration file
@@ -280,34 +257,16 @@ case $CMD in
line='INHERIT+="toaster buildhistory"'
grep -q "$line" $conf || echo $line >> $conf
if [ $WEBSERVER -eq 0 ] ; then
# Do not update the database for "noweb" unless
# it does not yet exist
if [ ! -f "$TOASTER_DIR/toaster.sqlite" ] ; then
if ! databaseCheck; then
echo "Failed ${CMD}."
return 4
fi
fi
custom_extention noweb_start_postpend $ADDR_PORT
fi
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
$MANAGE runbuilds \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
else
echo "Toaster build server not started."
fi
export DATABASE_URL=`$MANAGE get-dburl`
$MANAGE runbuilds & echo $! >${BUILDDIR}/.runbuilds.pid
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
custom_extention toaster_postpend $CMD $ADDR_PORT
return 0
;;
stop )
@@ -315,5 +274,3 @@ case $CMD in
echo "Successful ${CMD}."
;;
esac
custom_extention toaster_postpend $CMD $ADDR_PORT


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -21,106 +21,154 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This command takes a filename as a single parameter. The filename is read
as a build eventlog, and the ToasterUI is used to process events in the file
and log data in the database
"""
# This command takes a filename as a single parameter. The filename is read
# as a build eventlog, and the ToasterUI is used to process events in the file
# and log data in the database
from __future__ import print_function
import os
import sys
import json
import pickle
import codecs
from collections import namedtuple
import sys, logging
# mangle syspath to allow easy import of modules
from os.path import join, dirname, abspath
sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))),
'lib'))
import bb.cooker
from bb.ui import toasterui
import sys
import logging
class EventPlayer:
"""Emulate a connection to a bitbake server."""
import json, pickle
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
class FileReadEventsServerConnection():
""" Emulates a connection to a bitbake server that feeds
events coming actually read from a saved log file.
"""
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
class MockConnection():
""" fill-in for the proxy to the server. we just return generic data
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def __init__(self, sc):
self._sc = sc
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = json.loads(eventfile.readline().strip())['allvariables']
def runCommand(self, commandArray):
""" emulates running a command on the server; only read-only commands are accepted """
command_name = commandArray[0]
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = EventPlayer(eventfile, variables)
if command_name == "getVariable":
if commandArray[1] in self._sc._variables:
return (self._sc._variables[commandArray[1]]['v'], None)
return (None, "Missing variable")
elif command_name == "getAllKeysWithFlags":
dump = {}
flaglist = commandArray[1]
for k in self._sc._variables.keys():
try:
if not k.startswith("__"):
v = self._sc._variables[k]['v']
dump[k] = {
'v' : v ,
'history' : self._sc._variables[k]['history'],
}
for d in flaglist:
dump[k][d] = self._sc._variables[k][d]
except Exception as e:
print(e)
return (dump, None)
else:
raise Exception("Command %s not implemented" % commandArray[0])
def terminateServer(self):
""" do not do anything """
pass
class EventReader():
def __init__(self, sc):
self._sc = sc
self.firstraise = 0
def _create_event(self, line):
def _import_class(name):
assert len(name) > 0
assert "." in name, name
components = name.strip().split(".")
modulename = ".".join(components[:-1])
moduleklass = components[-1]
module = __import__(modulename, fromlist=[str(moduleklass)])
return getattr(module, moduleklass)
# we build a toaster event out of current event log line
try:
event_data = json.loads(line.strip())
event_class = _import_class(event_data['class'])
event_object = pickle.loads(json.loads(event_data['vars']))
except ValueError as e:
print("Failed loading ", line)
raise e
if not isinstance(event_object, event_class):
raise Exception("Error loading objects %s class %s ", event_object, event_class)
return event_object
def waitEvent(self, timeout):
nextline = self._sc._eventfile.readline()
if len(nextline) == 0:
# the build data ended, while toasterui still waits for events.
# this happens when the server was abruptly stopped, so we simulate this
self.firstraise += 1
if self.firstraise == 1:
raise KeyboardInterrupt()
else:
return None
else:
self._sc.lineno += 1
return self._create_event(nextline)
def _readVariables(self, variableline):
self._variables = json.loads(variableline.strip())['allvariables']
def __init__(self, file_name):
self.connection = FileReadEventsServerConnection.MockConnection(self)
self._eventfile = open(file_name, "r")
# we expect to have the variable dump at the start of the file
self.lineno = 1
self._readVariables(self._eventfile.readline())
self.events = FileReadEventsServerConnection.EventReader(self)
class MockConfigParameters():
""" stand-in for cookerdata.ConfigParameters; as we don't really config a cooker, this
serves just to supply needed interfaces for the toaster ui to work """
def __init__(self):
self.observe_only = True # we can only read files
return toasterui.main(player, player, params)
# run toaster ui on our mock bitbake class
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <event file>" % os.path.basename(sys.argv[0]))
if len(sys.argv) < 2:
print("Usage: %s event.log " % sys.argv[0])
sys.exit(1)
sys.exit(main(sys.argv))
file_name = sys.argv[-1]
mock_connection = FileReadEventsServerConnection(file_name)
configParams = MockConfigParameters()
# run the main program and set exit code to the returned value
sys.exit(toasterui.main(mock_connection.connection, mock_connection.events, configParams))


@@ -1,8 +1,8 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2012, 2018 Wind River Systems, Inc.
# Copyright (C) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -18,68 +18,51 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Used for dumping the bb_cache.dat
# This is used for dumping the bb_cache.dat, the output format is:
# recipe_path PN PV PACKAGES
#
import os
import sys
import argparse
import warnings
# For importing bb.cache
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import pickle
import cPickle as pickle
class DumpCache(object):
def __init__(self):
parser = argparse.ArgumentParser(
description="bb_cache.dat's dumper",
epilog="Use %(prog)s --help to get help")
parser.add_argument("-r", "--recipe",
help="specify the recipe, default: all recipes", action="store")
parser.add_argument("-m", "--members",
help = "specify the member, use comma as separator for multiple ones, default: all members", action="store", default="")
parser.add_argument("-s", "--skip",
help = "skip skipped recipes", action="store_true")
parser.add_argument("cachefile",
help = "specify bb_cache.dat", nargs = 1, action="store", default="")
def main(argv=None):
"""
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print >>sys.stderr, "Error, need one argument!"
return 2
self.args = parser.parse_args()
cachefile = argv[0]
def main(self):
with open(self.args.cachefile[0], "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while True:
try:
key = pickled.load()
val = pickled.load()
except Exception:
break
if isinstance(val, CoreRecipeInfo):
pn = val.pn
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
val = pickled.load()
except Exception:
break
if isinstance(val, CoreRecipeInfo) and (not val.skipped):
pn = val.pn
# Filter out the native recipes.
if key.startswith('virtual:native:') or pn.endswith("-native"):
continue
if self.args.recipe and self.args.recipe != pn:
continue
# 1.0 is the default version for a no PV recipe.
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"
if self.args.skip and val.skipped:
continue
if self.args.members:
out = key
for member in self.args.members.split(','):
out += ": %s" % val.__dict__.get(member)
print("%s" % out)
else:
print("%s: %s" % (key, val.__dict__))
elif not self.args.recipe:
print("%s %s" % (key, val))
print("%s %s %s %s" % (key, pn, pv, ' '.join(val.packages)))
if __name__ == "__main__":
try:
dump = DumpCache()
ret = dump.main()
except Exception as esc:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
sys.exit(main(sys.argv[1:]))
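# Example usage (paths and names illustrative; the cache file name
# carries a data hash suffix):
#
#   $ dump_cache.py tmp/cache/bb_cache.dat.<hash>
#   $ dump_cache.py -r busybox -m pn,pv tmp/cache/bb_cache.dat.<hash>
#
# The second form prints only the "pn" and "pv" members for the
# busybox recipe instead of dumping every cached member.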

View File

@@ -596,7 +596,7 @@
"<link linkend='checksums'>Checksums (Signatures)</link>"
section for information).
It is also possible to append extra metadata to the stamp using
the <filename>[stamp-extra-info]</filename> task flag.
the "stamp-extra-info" task flag.
For example, OpenEmbedded uses this flag to make some tasks machine-specific.
</para>
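<para>
For example, a recipe or class can append machine information
to a task's stamp as follows (the task name is illustrative):
<literallayout class='monospaced'>
do_deploy[stamp-extra-info] = "${MACHINE}"
</literallayout>
</para>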
@@ -653,8 +653,7 @@
</itemizedlist>
It is possible to have functions run before and after a task's main
function.
This is done using the <filename>[prefuncs]</filename>
and <filename>[postfuncs]</filename> flags of the task
This is done using the "prefuncs" and "postfuncs" flags of the task
that lists the functions to run.
</para>
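<para>
For example (the function names are illustrative):
<literallayout class='monospaced'>
do_install[prefuncs] += "my_pre_install"
do_install[postfuncs] += "my_post_install"
</literallayout>
Both functions must be defined in the metadata and are executed
by the same task invocation as <filename>do_install</filename> itself.
</para>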
</section>
@@ -781,7 +780,7 @@
The code in <filename>meta/lib/oe/sstatesig.py</filename> shows two examples
of this and also illustrates how you can insert your own policy into the system
if so desired.
This file defines the two basic signature generators OpenEmbedded-Core
This file defines the two basic signature generators OpenEmbedded Core
uses: "OEBasic" and "OEBasicHash".
By default, there is a dummy "noop" signature handler enabled in BitBake.
This means that behavior is unchanged from previous versions.
@@ -828,7 +827,7 @@
itself.
The simplest parameter to pass is "none", which causes a
set of signature information to be written out into
<filename>STAMPS_DIR</filename>
<filename>STAMP_DIR</filename>
corresponding to the targets specified.
The other currently available parameter is "printdiff",
which causes BitBake to try to establish the closest
@@ -916,7 +915,7 @@
<para>
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
<link linkend='var-BB_SETSCENE_VERIFY_FUNCTION2'><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename></link>
<link linkend='var-BB_SETSCENE_VERIFY_FUNCTION'><filename>BB_SETSCENE_VERIFY_FUNCTION</filename></link>
with the list of tasks BitBake thinks have been "covered".
The metadata can then ensure that this list is correct and can
inform BitBake that it wants specific tasks to be run regardless
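<para>
As a rough sketch only (the signature shown is an assumption for
illustration, not BitBake's exact calling convention), such a
verify function has the following shape:
<literallayout class='monospaced'>
def my_setscene_verify(covered, tasknames, fns, d, invalidtasks):
    # "covered" is the list of tasks BitBake believes the setscene
    # results satisfy; the metadata can correct that belief here
    # before returning.
    return True
</literallayout>
</para>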

View File

@@ -38,7 +38,7 @@
The code to execute the first part of this process, a fetch,
looks something like the following:
<literallayout class='monospaced'>
src_uri = (d.getVar('SRC_URI') or "").split()
src_uri = (d.getVar('SRC_URI', True) or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
</literallayout>
@@ -52,7 +52,7 @@
<para>
The instantiation of the fetch class is usually followed by:
<literallayout class='monospaced'>
rootdir = l.getVar('WORKDIR')
rootdir = l.getVar('WORKDIR', True)
fetcher.unpack(rootdir)
</literallayout>
This code unpacks the downloaded files to the
@@ -268,6 +268,15 @@
<link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>
variable is used in the same way
<filename>PATH</filename> is used to find executables.
Failing that,
<link linkend='var-FILESDIR'><filename>FILESDIR</filename></link>
is used to find the appropriate relative file.
<note>
<filename>FILESDIR</filename> is deprecated and can
be replaced with <filename>FILESPATH</filename>.
Because <filename>FILESDIR</filename> is likely to be
removed, you should not use this variable in any new code.
</note>
If the file cannot be found, it is assumed that it is available in
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
by the time the <filename>download()</filename> method is called.
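<para>
Conceptually, the lookup is a PATH-style search, roughly
equivalent to the following sketch (the file name is
illustrative):
<literallayout class='monospaced'>
filespath = d.getVar('FILESPATH', True)
# Check each FILESPATH directory in order, the way a shell
# checks PATH, returning the first match found.
local_file = bb.utils.which(filespath, "defconfig")
</literallayout>
</para>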
@@ -376,8 +385,7 @@
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"method":</emphasis>
The protocol over which to communicate with the CVS
server.
The protocol over which to communicate with the CVS server.
By default, this protocol is "pserver".
If "method" is set to "ext", BitBake examines the
"rsh" parameter and sets <filename>CVS_RSH</filename>.
@@ -461,29 +469,25 @@
You can think of this parameter as the top-level
directory of the repository data you want.
</para></listitem>
<listitem><para><emphasis>"path_spec":</emphasis>
A specific directory in which to checkout the
specified svn module.
</para></listitem>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol to use, which defaults to "svn".
If "protocol" is set to "svn+ssh", the "ssh"
parameter is also used.
Other options are "svn+ssh" and "rsh".
For "rsh", the "rsh" parameter is also used.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision of the source code to checkout.
</para></listitem>
<listitem><para><emphasis>"date":</emphasis>
The date of the source code to checkout.
Checking out a specific revision is generally much safer
than checking out by date, as revisions do not involve
timezones (i.e. they are much more deterministic).
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the ".svn" directories to be available during
compile-time when set to "keep".
By default, these directories are removed.
</para></listitem>
<listitem><para><emphasis>"ssh":</emphasis>
An optional parameter used when "protocol" is set
to "svn+ssh".
You can use this parameter to specify the ssh
program used by svn.
</para></listitem>
<listitem><para><emphasis>"transportuser":</emphasis>
When required, sets the username for the transport.
By default, this parameter is empty.
@@ -492,11 +496,10 @@
command.
</para></listitem>
</itemizedlist>
Following are three examples using svn:
Following are two examples using svn:
<literallayout class='monospaced'>
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
SRC_URI = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
SRC_URI = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
</literallayout>
</para>
</section>
@@ -620,9 +623,7 @@
The Git Submodules fetcher is not a complete fetcher
implementation.
The fetcher has known issues where it does not use the
normal source mirroring infrastructure properly. Further,
the submodule sources it fetches are not visible to the
licensing and source archiving infrastructures.
normal source mirroring infrastructure properly.
</para>
</note>
</para>
@@ -669,8 +670,8 @@
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the <filename>load</filename> rule in
the view config spec.
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
<filename>SRC_URI</filename> statement at the start of this section.
Combining those values results in the following:
<literallayout class='monospaced'>
@@ -715,105 +716,6 @@
</para>
</section>
<section id='perforce-fetcher'>
<title>Perforce Fetcher (<filename>p4://</filename>)</title>
<para>
This fetcher submodule fetches code from the
<ulink url='https://www.perforce.com/'>Perforce</ulink>
source control system.
The executable used is specified by
<filename>FETCHCMD_p4</filename>, which defaults
to "p4".
The fetcher's temporary working directory is set by
<link linkend='var-P4DIR'><filename>P4DIR</filename></link>,
which defaults to "DL_DIR/p4".
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-PV'><filename>PV</filename></link> values.
The p4 executable is able to use the config file defined by your
system's <filename>P4CONFIG</filename> environment variable in
order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe
itself.
If you choose not to use <filename>P4CONFIG</filename>,
or to explicitly set variables that <filename>P4CONFIG</filename>
can contain, you can specify the <filename>P4PORT</filename> value,
which is the server's URL and port number, and you can
specify a username and password directly in your recipe within
<filename>SRC_URI</filename>.
</para>
<para>
Here is an example that relies on <filename>P4CONFIG</filename>
to specify the server URL and port, username, and password, and
fetches the Head Revision:
<literallayout class='monospaced'>
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
</para>
<para>
Here is an example that specifies the server URL and port,
username, and password, and fetches a Revision based on a Label:
<literallayout class='monospaced'>
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
<note>
You should always set <filename>S</filename>
to <filename>"${WORKDIR}/p4"</filename> in your recipe.
</note>
</para>
</section>
<section id='repo-fetcher'>
<title>Repo Fetcher (<filename>repo://</filename>)</title>
<para>
This fetcher submodule fetches code from
the <filename>google-repo</filename> source control system.
The fetcher works by initiating and syncing sources of the
repository into
<link linkend='var-REPODIR'><filename>REPODIR</filename></link>,
which is usually
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link><filename>/repo</filename>.
</para>
<para>
This fetcher supports the following parameters:
<itemizedlist>
<listitem><para>
<emphasis>"protocol":</emphasis>
Protocol to fetch the repository manifest (default: git).
</para></listitem>
<listitem><para>
<emphasis>"branch":</emphasis>
Branch or tag of repository to get (default: master).
</para></listitem>
<listitem><para>
<emphasis>"manifest":</emphasis>
Name of the manifest file (default: <filename>default.xml</filename>).
</para></listitem>
</itemizedlist>
Here are some example URLs:
<literallayout class='monospaced'>
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
</literallayout>
</para>
</section>
<section id='other-fetchers'>
<title>Other Fetchers</title>
@@ -823,6 +725,9 @@
<listitem><para>
Bazaar (<filename>bzr://</filename>)
</para></listitem>
<listitem><para>
Perforce (<filename>p4://</filename>)
</para></listitem>
<listitem><para>
Trees using Git Annex (<filename>gitannex://</filename>)
</para></listitem>
@@ -832,6 +737,9 @@
<listitem><para>
Secure Shell (<filename>ssh://</filename>)
</para></listitem>
<listitem><para>
Repo (<filename>repo://</filename>)
</para></listitem>
<listitem><para>
OSC (<filename>osc://</filename>)
</para></listitem>

View File

@@ -128,8 +128,15 @@
</para>
<note>
This example was inspired by and drew heavily from
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>.
This example was inspired by and drew heavily from these sources:
<itemizedlist>
<listitem><para>
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
</para></listitem>
<listitem><para>
<ulink url="https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
</para></listitem>
</itemizedlist>
</note>
<para>
@@ -260,9 +267,9 @@
files.
For this example, you need to create the file in your project directory
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename> file,
For more information on the <filename>bitbake.conf</filename>,
see
<ulink url='http://git.openembedded.org/bitbake/tree/conf/bitbake.conf'></ulink>.
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
@@ -273,32 +280,14 @@
some editor to create the <filename>bitbake.conf</filename>
so that it contains the following:
<literallayout class='monospaced'>
<link linkend='var-PN'>PN</link> = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
</literallayout>
<literallayout class='monospaced'>
TMPDIR = "${<link linkend='var-TOPDIR'>TOPDIR</link>}/tmp"
<link linkend='var-CACHE'>CACHE</link> = "${TMPDIR}/cache"
<link linkend='var-STAMP'>STAMP</link> = "${TMPDIR}/${PN}/stamps"
<link linkend='var-T'>T</link> = "${TMPDIR}/${PN}/work"
<link linkend='var-B'>B</link> = "${TMPDIR}/${PN}"
<link linkend='var-STAMP'>STAMP</link> = "${TMPDIR}/stamps"
<link linkend='var-T'>T</link> = "${TMPDIR}/work"
<link linkend='var-B'>B</link> = "${TMPDIR}"
</literallayout>
<note>
Without a value for <filename>PN</filename>, the
<filename>STAMP</filename>, <filename>T</filename>, and
<filename>B</filename> variable definitions prevent more
than one recipe from working. You can fix this either by
setting <filename>PN</filename> to a value similar to
what OpenEmbedded and BitBake use in the default
<filename>bitbake.conf</filename> file (see the previous
example), or by manually updating each recipe to set
<filename>PN</filename>. In the latter case, you also
need to include <filename>PN</filename> in the
<filename>STAMP</filename>, <filename>T</filename>, and
<filename>B</filename> variable definitions in the
<filename>local.conf</filename> file.
</note>
The <filename>TMPDIR</filename> variable establishes a directory
that BitBake uses for build output and intermediate files other
that BitBake uses for build output and intermediate files (other
than the cached information used by the
<link linkend='setscene'>Setscene</link> process.
Here, the <filename>TMPDIR</filename> directory is set to
@@ -318,19 +307,19 @@
file exists, you can run the <filename>bitbake</filename>
command again:
<literallayout class='monospaced'>
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
</literallayout>
In the sample output, BitBake could not find the
<filename>classes/base.bbclass</filename> file.
@@ -363,6 +352,9 @@
Of course, the <filename>base.bbclass</filename> can have much
more depending on which build environments BitBake is
supporting.
For more information on the <filename>base.bbclass</filename> file,
you can look at
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
</para></listitem>
<listitem><para><emphasis>Run Bitbake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
@@ -383,10 +375,10 @@
code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
<note>
You can find additional information on layers in the
"<link linkend='layers'>Layers</link>" section.
</note></para>
You can find additional information on adding a layer at
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
</note>
</para>
<para>Minimally, you need a recipe file and a layer configuration
file in your layer.
The configuration file needs to be in the <filename>conf</filename>
@@ -407,7 +399,7 @@
<link linkend='var-BBFILES'>BBFILES</link> += "${LAYERDIR}/*.bb"
<link linkend='var-BBFILE_COLLECTIONS'>BBFILE_COLLECTIONS</link> += "mylayer"
<link linkend='var-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR_RE}/"
<link linkend='var-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR}/"
</literallayout>
For information on these variables, click the links
to go to the definitions in the glossary.</para>

View File

@@ -440,7 +440,7 @@
Build Checkout:</emphasis>
A final possibility for getting a copy of BitBake is that it
already comes with your checkout of a larger BitBake-based build
system, such as Poky.
system, such as Poky or Yocto Project.
Rather than manually checking out individual layers and
gluing them together yourself, you can check
out an entire build system.
@@ -488,6 +488,8 @@
target that failed and anything depending on it cannot
be built, as much as possible will be built before
stopping.
-a, --tryaltconfigs Continue with builds by trying to use alternative
providers where possible.
-f, --force Force the specified targets/task to run (invalidating
any existing stamp file).
-c CMD, --cmd=CMD Specify the task to execute. The exact options
@@ -502,20 +504,9 @@
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x'). Also
print bb.note(...) messages to stdout (in addition to
writing them to ${T}/log.do_&lt;task&gt;).
-v, --verbose Output more log message data to the terminal.
-D, --debug Increase the debug level. You can specify this more
than once. -D sets the debug level to 1, where only
bb.debug(1, ...) messages are printed to stdout; -DD
sets the debug level to 2, where both bb.debug(1, ...)
and bb.debug(2, ...) messages are printed; etc.
Without -D, no debug messages are printed. Note that
-D only affects output to stdout. All debug messages
are written to ${T}/log.do_taskname, regardless of the
debug level.
-q, --quiet Output less log message data to the terminal. You can
specify this more than once.
than once.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
@@ -538,38 +529,29 @@
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
- default knotty).
-u UI, --ui=UI The user interface to use (depexp, goggle, hob, knotty
or ncurses - default knotty).
-t SERVERTYPE, --servertype=SERVERTYPE
Choose which server type to use (process or xmlrpc -
default process).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
floating revisions have changed or not.
--server-only Run bitbake without a UI, only starting a server
(cooker) process.
-B BIND, --bind=BIND The name/address for the bitbake xmlrpc server to bind
to.
-T SERVER_TIMEOUT, --idle-timeout=SERVER_TIMEOUT
Set timeout to unload bitbake server due to
inactivity, set to -1 means no unload, default:
Environment variable BB_SERVER_TIMEOUT.
-B BIND, --bind=BIND The name/address for the bitbake server to bind to.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate any running bitbake server.
-m, --kill-server Terminate the remote server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
--runall=RUNALL Run the specified task for any recipe in the taskgraph
of the specified target (even if it wouldn't otherwise
have run).
--runonly=RUNONLY Run only the specified task within the taskgraph of
the specified targets (and any task dependencies those
tasks may have).
</literallayout>
</para>
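<para>
For example, the signature handler parameters described earlier
can be exercised directly from the command line (the target name
is illustrative):
<literallayout class='monospaced'>
$ bitbake -S none myrecipe
$ bitbake -S printdiff myrecipe
</literallayout>
</para>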
</section>
@@ -652,25 +634,6 @@
</para>
</section>
<section id='executing-a-list-of-task-and-recipe-combinations'>
<title>Executing a List of Task and Recipe Combinations</title>
<para>
The BitBake command line supports specifying different
tasks for individual targets when you specify multiple
targets.
For example, suppose you had two targets (or recipes)
<filename>myfirstrecipe</filename> and
<filename>mysecondrecipe</filename> and you needed
BitBake to run <filename>taskA</filename> for the first
recipe and <filename>taskB</filename> for the second
recipe:
<literallayout class='monospaced'>
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
</literallayout>
</para>
</section>
<section id='generating-dependency-graphs'>
<title>Generating Dependency Graphs</title>
@@ -683,21 +646,21 @@
</para>
<para>
When you generate a dependency graph, BitBake writes three files
When you generate a dependency graph, BitBake writes four files
to the current working directory:
<itemizedlist>
<listitem><para>
<emphasis><filename>recipe-depends.dot</filename>:</emphasis>
Shows dependencies between recipes (i.e. a collapsed version of
<filename>task-depends.dot</filename>).
<listitem><para><emphasis><filename>package-depends.dot</filename>:</emphasis>
Shows BitBake's knowledge of dependencies between
runtime targets.
</para></listitem>
<listitem><para>
<emphasis><filename>task-depends.dot</filename>:</emphasis>
<listitem><para><emphasis><filename>pn-depends.dot</filename>:</emphasis>
Shows dependencies between build-time targets
(i.e. recipes).
</para></listitem>
<listitem><para><emphasis><filename>task-depends.dot</filename>:</emphasis>
Shows dependencies between tasks.
These dependencies match BitBake's internal task execution list.
</para></listitem>
<listitem><para>
<emphasis><filename>pn-buildlist</filename>:</emphasis>
<listitem><para><emphasis><filename>pn-buildlist</filename>:</emphasis>
Shows a simple list of targets that are to be built.
</para></listitem>
</itemizedlist>
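<para>
For example, the following command writes those files into the
current directory for a single target (the target name is
illustrative):
<literallayout class='monospaced'>
$ bitbake -g myrecipe
</literallayout>
</para>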

View File

@@ -52,7 +52,7 @@
<link linkend='var-MIRRORS'>M</link>
<!-- <link linkend='var-glossary-n'>N</link> -->
<link linkend='var-OVERRIDES'>O</link>
<link linkend='var-P4DIR'>P</link>
<link linkend='var-PACKAGES'>P</link>
<!-- <link linkend='var-QMAKE_PROFILES'>Q</link> -->
<link linkend='var-RDEPENDS'>R</link>
<link linkend='var-SECTION'>S</link>
@@ -78,7 +78,7 @@
</para>
<para>
In OpenEmbedded-Core, <filename>ASSUME_PROVIDED</filename>
In OpenEmbedded Core, <filename>ASSUME_PROVIDED</filename>
mostly specifies native tools that should not be built.
An example is <filename>git-native</filename>, which
when specified allows for the Git binary from the host to
@@ -716,7 +716,7 @@
</glossdef>
</glossentry>
<glossentry id='var-BB_SETSCENE_VERIFY_FUNCTION2'><glossterm>BB_SETSCENE_VERIFY_FUNCTION2</glossterm>
<glossentry id='var-BB_SETSCENE_VERIFY_FUNCTION'><glossterm>BB_SETSCENE_VERIFY_FUNCTION</glossterm>
<glossdef>
<para>
Specifies a function to call that verifies the list of
@@ -964,7 +964,7 @@
Allows you to extend a recipe so that it builds variants
of the software.
Some examples of these variants for recipes from the
OpenEmbedded-Core metadata are "natives" such as
OpenEmbedded Core metadata are "natives" such as
<filename>quilt-native</filename>, which is a copy of
Quilt built to run on the build system; "crosses" such
as <filename>gcc-cross</filename>, which is a compiler
@@ -980,35 +980,12 @@
amount of code, it usually is as simple as adding the
variable to your recipe.
Here are two examples.
The "native" variants are from the OpenEmbedded-Core
The "native" variants are from the OpenEmbedded Core
metadata:
<literallayout class='monospaced'>
BBCLASSEXTEND =+ "native nativesdk"
BBCLASSEXTEND =+ "multilib:<replaceable>multilib_name</replaceable>"
</literallayout>
<note>
<para>
Internally, the <filename>BBCLASSEXTEND</filename>
mechanism generates recipe variants by rewriting
variable values and applying overrides such as
<filename>_class-native</filename>.
For example, to generate a native version of a recipe,
a
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
on "foo" is rewritten to a <filename>DEPENDS</filename>
on "foo-native".
</para>
<para>
Even when using <filename>BBCLASSEXTEND</filename>, the
recipe is only parsed once.
Parsing once adds some limitations.
For example, it is not possible to
include a different file depending on the variant,
since <filename>include</filename> statements are
processed when the recipe is parsed.
</para>
</note>
</para>
</glossdef>
</glossentry>
@@ -1017,7 +994,7 @@
<glossdef>
<para>
Sets the BitBake debug output level to a specific value
as incremented by the <filename>-D</filename> command line
as incremented by the <filename>-d</filename> command line
option.
<note>
You must set this variable in the external environment
@@ -1143,6 +1120,8 @@
<glossdef>
<para>
Sets the base location where layers are stored.
By default, this location is set to
<filename>${COREBASE}</filename>.
This setting is used in conjunction with
<filename>bitbake-layers layerindex-fetch</filename> and
tells <filename>bitbake-layers</filename> where to place
@@ -1537,6 +1516,24 @@
</glossdef>
</glossentry>
<glossentry id='var-FILESDIR'><glossterm>FILESDIR</glossterm>
<glossdef>
<para>
Specifies directories BitBake uses when searching for
patches and files.
The "local" fetcher module uses these directories when
handling <filename>file://</filename> URLs if the file
was not found using
<link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>.
<note>
The <filename>FILESDIR</filename> variable is
deprecated and you should use
<filename>FILESPATH</filename> in all new code.
</note>
</para>
</glossdef>
</glossentry>
<glossentry id='var-FILESPATH'><glossterm>FILESPATH</glossterm>
<glossdef>
<para>
@@ -1594,19 +1591,9 @@
<glossentry id='var-INHERIT'><glossterm>INHERIT</glossterm>
<glossdef>
<para>
Causes the named class or classes to be inherited globally.
Anonymous functions in the class or classes
are not executed for the
base configuration and in each individual recipe.
The OpenEmbedded build system ignores changes to
<filename>INHERIT</filename> in individual recipes.
</para>
<para>
For more information on <filename>INHERIT</filename>, see
the
"<link linkend="inherit-configuration-directive"><filename>INHERIT</filename> Configuration Directive</link>"
section.
Causes the named class to be inherited at
this point during parsing.
The variable is only valid in configuration files.
</para>
</glossdef>
</glossentry>
@@ -1649,17 +1636,6 @@
</glossdef>
</glossentry>
<glossentry id='var-LAYERDIR_RE'><glossterm>LAYERDIR_RE</glossterm>
<glossdef>
<para>When used inside the <filename>layer.conf</filename> configuration
file, this variable provides the path of the current layer,
escaped for use in a regular expression
(<link linkend='var-BBFILE_PATTERN'><filename>BBFILE_PATTERN</filename></link>).
This variable is not available outside of <filename>layer.conf</filename>
and references are expanded immediately when parsing of the file completes.</para>
</glossdef>
</glossentry>
<glossentry id='var-LAYERVERSION'><glossterm>LAYERVERSION</glossterm>
<glossdef>
<para>Optionally specifies the version of a layer as a single number.
@@ -1761,15 +1737,6 @@
<glossdiv id='var-glossary-p'><title>P</title>
<glossentry id='var-P4DIR'><glossterm>P4DIR</glossterm>
<glossdef>
<para>
The directory in which a local copy of a Perforce depot
is stored when it is fetched.
</para>
</glossdef>
</glossentry>
<glossentry id='var-PACKAGES'><glossterm>PACKAGES</glossterm>
<glossdef>
<para>The list of packages the recipe creates.
@@ -1901,7 +1868,7 @@
Here are two examples:
<literallayout class='monospaced'>
PREFERRED_VERSION_python = "2.7.3"
PREFERRED_VERSION_linux-yocto = "4.12%"
PREFERRED_VERSION_linux-yocto = "3.10%"
</literallayout>
</para>
</glossdef>
@@ -1966,27 +1933,6 @@
The <filename>PROVIDES</filename> statement results in
the "libav" recipe also being known as "libpostproc".
</para>
<para>
In addition to providing recipes under alternate names,
the <filename>PROVIDES</filename> mechanism is also used
to implement virtual targets.
A virtual target is a name that corresponds to some
particular functionality (e.g. a Linux kernel).
Recipes that provide the functionality in question list the
virtual target in <filename>PROVIDES</filename>.
Recipes that depend on the functionality in question can
include the virtual target in
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
to leave the choice of provider open.
</para>
<para>
Conventionally, virtual targets have names of the form
"virtual/function" (e.g. "virtual/kernel").
The slash is simply part of the name and has no
syntactical significance.
</para>
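<para>
For example (the recipe context is illustrative):
<literallayout class='monospaced'>
# In a kernel recipe:
PROVIDES += "virtual/kernel"

# In a recipe that needs a kernel but does not care which one:
DEPENDS += "virtual/kernel"
</literallayout>
</para>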
</glossdef>
</glossentry>
@@ -2089,16 +2035,6 @@
</glossdef>
</glossentry>
<glossentry id='var-REPODIR'><glossterm>REPODIR</glossterm>
<glossdef>
<para>
The directory in which a local copy of a
<filename>google-repo</filename> directory is stored
when it is synced.
</para>
</glossdef>
</glossentry>
<glossentry id='var-RPROVIDES'><glossterm>RPROVIDES</glossterm>
<glossdef>
<para>

View File

@@ -56,7 +56,7 @@
-->
<copyright>
<year>2004-2018</year>
<year>2004-2016</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>

View File

@@ -105,7 +105,7 @@ Show debug logging for the specified logging domains
profile the command and print a report
.TP
.B \-uUI, \-\-ui=UI
User interface to use. Currently, knotty, taskexp or ncurses can be specified as UI.
User interface to use. Currently, hob, depexp, goggle or ncurses can be specified as UI.
.TP
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
Choose which server to use, none, process or xmlrpc.

View File

@@ -3,7 +3,7 @@
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Ansell
# Copyright (C) 2006 Tim Amsell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,17 +23,19 @@
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
str
basestring
)
MUTABLE = "__mutable__"
@@ -59,7 +61,7 @@ class COWDictMeta(COWMeta):
__call__ = cow
def __setitem__(cls, key, value):
if value is not None and not isinstance(value, ImmutableTypes):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
@@ -114,7 +116,7 @@ class COWDictMeta(COWMeta):
cls.__setitem__(key, cls.__marker__)
def __revertitem__(cls, key):
if key not in cls.__dict__:
if not cls.__dict__.has_key(key):
key += MUTABLE
delattr(cls, key)
@@ -150,7 +152,7 @@ class COWDictMeta(COWMeta):
yield value
if type == "items":
yield (key, value)
return
raise StopIteration()
def iterkeys(cls):
return cls.iter("keys")
@@ -181,7 +183,7 @@ class COWSetMeta(COWDictMeta):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return repr(hash(value)) in COWDictMeta
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
@@ -190,10 +192,12 @@ class COWSetMeta(COWDictMeta):
raise TypeError("sets don't have 'items'")
# These are the actual classes you use!
class COWDictBase(object, metaclass = COWDictMeta):
class COWDictBase(object):
__metaclass__ = COWDictMeta
__count__ = 0
class COWSetBase(object, metaclass = COWSetMeta):
class COWSetBase(object):
__metaclass__ = COWSetMeta
__count__ = 0
if __name__ == "__main__":
@@ -283,7 +287,7 @@ if __name__ == "__main__":
except KeyError:
print("Yay! deleted key raises error")
if 'b' in b:
if b.has_key('b'):
print("Boo!")
else:
print("Yay - has_key with delete works!")

View File

@@ -21,11 +21,11 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.39.1"
__version__ = "1.30.0"
import sys
if sys.version_info < (3, 4, 0):
raise RuntimeError("Sorry, python 3.4.0 or later is required for this version of bitbake")
if sys.version_info < (2, 7, 3):
raise RuntimeError("Sorry, python 2.7.3 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -63,10 +63,6 @@ class BBLogger(Logger):
def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
@@ -88,8 +84,8 @@ def plain(*args):
mainlogger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, str):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
if isinstance(lvl, basestring):
mainlogger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.debug(lvl, ''.join(args))
@@ -97,20 +93,8 @@ def debug(lvl, *args):
def note(*args):
mainlogger.info(''.join(args))
#
# A higher priority note which will show on the console but isn't a warning
#
# Something is happening the user should be aware of but they probably did
# something to make it happen
#
def verbnote(*args):
mainlogger.verbnote(''.join(args))
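# Example (illustrative): surface a note on the console without
# treating it as a warning:
#
#   bb.verbnote("Copying defconfig from the kernel source tree")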
#
# Warnings - things the user likely needs to pay attention to and fix
#
def warn(*args):
mainlogger.warning(''.join(args))
mainlogger.warn(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)

View File

@@ -35,12 +35,14 @@ import stat
import bb
import bb.msg
import bb.process
import bb.progress
from bb import data, event, utils
from contextlib import nested
from bb import event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
NULL = open(os.devnull, 'r+')
__mtime_cache = {}
def cached_mtime_noerror(f):
@@ -59,13 +61,8 @@ def reset_cache():
# in all namespaces, hence we add them to __builtins__.
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
builtins['bb'] = bb
builtins['os'] = os
__builtins__['bb'] = bb
__builtins__['os'] = os
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
@@ -89,14 +86,13 @@ class TaskBase(event.Event):
def __init__(self, t, logfile, d):
self._task = t
self._package = d.getVar("PF")
self._mc = d.getVar("BB_CURRENT_MC")
self.taskfile = d.getVar("FILE")
self._package = d.getVar("PF", True)
self.taskfile = d.getVar("FILE", True)
self.taskname = self._task
self.logfile = logfile
self.time = time.time()
event.Event.__init__(self)
self._message = "recipe %s: task %s: %s" % (d.getVar("PF"), t, self.getDisplayName())
self._message = "recipe %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
def getTask(self):
return self._task
@@ -137,25 +133,6 @@ class TaskInvalid(TaskBase):
super(TaskInvalid, self).__init__(task, None, metadata)
self._message = "No such task '%s'" % task
class TaskProgress(event.Event):
"""
Task made some progress that could be reported to the user, usually in
the form of a progress bar or similar.
NOTE: this class does not inherit from TaskBase since it doesn't need
to - it's fired within the task context itself, so we don't have any of
the context information that you do in the case of the other events.
The event PID can be used to determine which task it came from.
The progress value is normally 0-100, but can also be negative
indicating that progress has been made but we aren't able to determine
how much.
The rate is optional, this is simply an extra string to display to the
user if specified.
"""
def __init__(self, progress, rate=None):
self.progress = progress
self.rate = rate
event.Event.__init__(self)
class LogTee(object):
def __init__(self, logger, outfile):
@@ -187,19 +164,20 @@ class LogTee(object):
def exec_func(func, d, dirs = None, pythonexception=False):
"""Execute a BB 'function'"""
try:
oldcwd = os.getcwd()
except:
oldcwd = None
body = d.getVar(func, False)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
return
flags = d.getVarFlags(func)
cleandirs = flags.get('cleandirs') if flags else None
cleandirs = flags.get('cleandirs')
if cleandirs:
for cdir in d.expand(cleandirs).split():
bb.utils.remove(cdir, True)
bb.utils.mkdirhier(cdir)
if flags and dirs is None:
if dirs is None:
dirs = flags.get('dirs')
if dirs:
dirs = d.expand(dirs).split()
@@ -209,13 +187,8 @@ def exec_func(func, d, dirs = None, pythonexception=False):
bb.utils.mkdirhier(adir)
adir = dirs[-1]
else:
adir = None
body = d.getVar(func, False)
if not body:
if body is None:
logger.warning("Function %s doesn't exist", func)
return
adir = d.getVar('B', True)
bb.utils.mkdirhier(adir)
ispython = flags.get('python')
@@ -225,17 +198,17 @@ def exec_func(func, d, dirs = None, pythonexception=False):
else:
lockfiles = None
tempdir = d.getVar('T')
tempdir = d.getVar('T', True)
# or func allows items to be executed outside of the normal
# task set, such as buildhistory
task = d.getVar('BB_RUNTASK') or func
task = d.getVar('BB_RUNTASK', True) or func
if task == func:
taskfunc = task
else:
taskfunc = "%s.%s" % (task, func)
runfmt = d.getVar('BB_RUNFMT') or "run.{func}.{pid}"
runfmt = d.getVar('BB_RUNFMT', True) or "run.{func}.{pid}"
runfn = runfmt.format(taskfunc=taskfunc, task=task, func=func, pid=os.getpid())
runfile = os.path.join(tempdir, runfn)
bb.utils.mkdirhier(os.path.dirname(runfile))
@@ -260,18 +233,6 @@ def exec_func(func, d, dirs = None, pythonexception=False):
else:
exec_func_shell(func, d, runfile, cwd=adir)
try:
curcwd = os.getcwd()
except:
curcwd = None
if oldcwd and curcwd != oldcwd:
try:
bb.warn("Task %s changed cwd to %s" % (func, curcwd))
os.chdir(oldcwd)
except:
pass
_functionfmt = """
{function}(d)
"""
@@ -287,8 +248,7 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
if cwd:
try:
olddir = os.getcwd()
except OSError as e:
bb.warn("%s: Cannot get cwd: %s" % (func, e))
except OSError:
olddir = None
os.chdir(cwd)
@@ -314,8 +274,8 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
if cwd and olddir:
try:
os.chdir(olddir)
except OSError as e:
bb.warn("%s: Cannot restore cwd %s: %s" % (func, olddir, e))
except OSError:
pass
def shell_trap_code():
return '''#!/bin/sh\n
@@ -363,11 +323,11 @@ trap '' 0
exit $ret
''')
os.chmod(runfile, 0o775)
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot', False):
fakerootcmd = d.getVar('FAKEROOT')
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -376,64 +336,41 @@ exit $ret
else:
logfile = sys.stdout
progress = d.getVarFlag(func, 'progress')
if progress:
if progress == 'percent':
# Use default regex
logfile = bb.progress.BasicProgressHandler(d, outfile=logfile)
elif progress.startswith('percent:'):
# Use specified regex
logfile = bb.progress.BasicProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
elif progress.startswith('outof:'):
# Use specified regex
logfile = bb.progress.OutOfProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
else:
bb.warn('%s: invalid task progress varflag value "%s", ignoring' % (func, progress))
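# Example recipe usage of the progress varflag handled above
# (the values are illustrative):
#
#   do_compile[progress] = "percent"
#   do_install[progress] = "outof:(\d+) of (\d+)"
#
# The first form parses percentages from the task output using the
# default regex; the second parses "X of Y" counters using the
# supplied regex.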
fifobuffer = bytearray()
def readfifo(data):
nonlocal fifobuffer
fifobuffer.extend(data)
while fifobuffer:
message, token, nextmsg = fifobuffer.partition(b"\00")
if token:
splitval = message.split(b' ', 1)
cmd = splitval[0].decode("utf-8")
if len(splitval) > 1:
value = splitval[1].decode("utf-8")
else:
value = ''
if cmd == 'bbplain':
bb.plain(value)
elif cmd == 'bbnote':
bb.note(value)
elif cmd == 'bbwarn':
bb.warn(value)
elif cmd == 'bberror':
bb.error(value)
elif cmd == 'bbfatal':
# The caller will call exit themselves, so bb.error() is
# what we want here rather than bb.fatal()
bb.error(value)
elif cmd == 'bbfatal_log':
bb.error(value, forcelog=True)
elif cmd == 'bbdebug':
splitval = value.split(' ', 1)
level = int(splitval[0])
value = splitval[1]
bb.debug(level, value)
else:
bb.warn("Unrecognised command '%s' on FIFO" % cmd)
fifobuffer = nextmsg
lines = data.split('\0')
for line in lines:
splitval = line.split(' ', 1)
cmd = splitval[0]
if len(splitval) > 1:
value = splitval[1]
else:
break
value = ''
if cmd == 'bbplain':
bb.plain(value)
elif cmd == 'bbnote':
bb.note(value)
elif cmd == 'bbwarn':
bb.warn(value)
elif cmd == 'bberror':
bb.error(value)
elif cmd == 'bbfatal':
# The caller will call exit themselves, so bb.error() is
# what we want here rather than bb.fatal()
bb.error(value)
elif cmd == 'bbfatal_log':
bb.error(value, forcelog=True)
elif cmd == 'bbdebug':
splitval = value.split(' ', 1)
level = int(splitval[0])
value = splitval[1]
bb.debug(level, value)
tempdir = d.getVar('T')
tempdir = d.getVar('T', True)
fifopath = os.path.join(tempdir, 'fifo.%s' % os.getpid())
if os.path.exists(fifopath):
os.unlink(fifopath)
os.mkfifo(fifopath)
with open(fifopath, 'r+b', buffering=0) as fifo:
with open(fifopath, 'r+') as fifo:
try:
bb.debug(2, "Executing shell function %s" % func)
@@ -441,7 +378,7 @@ exit $ret
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE')
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
finally:
os.unlink(fifopath)
@@ -472,18 +409,18 @@ def _exec_task(fn, task, d, quieterr):
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")
# Change nice level if we're asked to
nice = localdata.getVar("BB_TASK_NICE_LEVEL")
nice = localdata.getVar("BB_TASK_NICE_LEVEL", True)
if nice:
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL", True)
if ionice:
try:
cls, prio = ionice.split(".", 1)
@@ -494,7 +431,7 @@ def _exec_task(fn, task, d, quieterr):
bb.utils.mkdirhier(tempdir)
# Determine the logfile to generate
logfmt = localdata.getVar('BB_LOGFMT') or 'log.{task}.{pid}'
logfmt = localdata.getVar('BB_LOGFMT', True) or 'log.{task}.{pid}'
logbase = logfmt.format(task=task, pid=os.getpid())
# Document the order of the tasks...
@@ -531,6 +468,7 @@ def _exec_task(fn, task, d, quieterr):
self.triggered = True
# Handle logfiles
si = open('/dev/null', 'r')
try:
bb.utils.mkdirhier(os.path.dirname(logfn))
logfile = open(logfn, 'w')
@@ -544,8 +482,7 @@ def _exec_task(fn, task, d, quieterr):
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Replace those fds with our own
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), osi[1])
os.dup2(si.fileno(), osi[1])
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
@@ -561,36 +498,24 @@ def _exec_task(fn, task, d, quieterr):
localdata.setVar('BB_LOGFILE', logfn)
localdata.setVar('BB_RUNTASK', task)
localdata.setVar('BB_TASK_LOGGER', bblogger)
flags = localdata.getVarFlags(task)
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
try:
try:
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
except FuncFailed as exc:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
return 1
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
except bb.BBHandledException:
event.fire(TaskFailed(task, logfn, localdata, True), localdata)
return 1
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
@@ -606,6 +531,7 @@ def _exec_task(fn, task, d, quieterr):
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
si.close()
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
@@ -626,7 +552,7 @@ def exec_task(fn, task, d, profile = False):
quieterr = True
if profile:
profname = "profile-%s.log" % (d.getVar("PN") + "-" + task)
profname = "profile-%s.log" % (d.getVar("PN", True) + "-" + task)
try:
import cProfile as profile
except:
@@ -649,7 +575,7 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
def stamp_internal(taskname, d, file_name, baseonly=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
@@ -666,14 +592,12 @@ def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
stamp = d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
@@ -702,9 +626,9 @@ def stamp_cleanmask_internal(taskname, d, file_name):
stamp = d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
stamp = d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return []
@@ -740,7 +664,7 @@ def make_stamp(task, d, file_name = None):
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def del_stamp(task, d, file_name = None):
@@ -762,19 +686,19 @@ def write_taint(task, d, file_name = None):
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
taintfn = d.getVar('STAMP', True) + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important,
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def stampfile(taskname, d, file_name = None, noextra=False):
def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
return stamp_internal(taskname, d, file_name)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
@@ -800,7 +724,6 @@ def add_tasks(tasklist, d):
if name in flags:
deptask = d.expand(flags[name])
task_deps[name][task] = deptask
getTask('mcdepends')
getTask('depends')
getTask('rdepends')
getTask('deptask')
@@ -851,7 +774,6 @@ def deltask(task, d):
bbtasks = d.getVar('__BBTASKS', False) or []
if task in bbtasks:
bbtasks.remove(task)
d.delVarFlag(task, 'task')
d.setVar('__BBTASKS', bbtasks)
d.delVarFlag(task, 'deps')
@@ -860,52 +782,3 @@ def deltask(task, d):
if task in deps:
deps.remove(task)
d.setVarFlag(bbtask, 'deps', deps)
def preceedtask(task, with_recrdeptasks, d):
"""
Returns a set of tasks in the current recipe which were specified as
precondition by the task itself ("after") or which listed themselves
as precondition ("before"). Preceeding tasks specified via the
"recrdeptask" are included in the result only if requested. Beware
that this may lead to the task itself being listed.
"""
preceed = set()
# Ignore tasks which don't exist
tasks = d.getVar('__BBTASKS', False)
if task not in tasks:
return preceed
preceed.update(d.getVarFlag(task, 'deps') or [])
if with_recrdeptasks:
recrdeptask = d.getVarFlag(task, 'recrdeptask')
if recrdeptask:
preceed.update(recrdeptask.split())
return preceed
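# Illustrative call from metadata-side code:
#
#   preceding = bb.build.preceedtask('do_install', True, d)
#
# This returns the set of tasks listed as preconditions of
# do_install, including "recrdeptask" entries because the second
# argument is True.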
def tasksbetween(task_start, task_end, d):
"""
Return the list of tasks between two tasks in the current recipe,
where task_start is to start at and task_end is the task to end at
(and task_end has a dependency chain back to task_start).
"""
outtasks = []
tasks = list(filter(lambda k: d.getVarFlag(k, "task"), d.keys()))
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
chain.append(task)
for othertask in tasks:
if othertask == task:
continue
if task == endtask:
for ctask in chain:
if ctask not in outtasks:
outtasks.append(ctask)
else:
deps = d.getVarFlag(othertask, 'deps', False)
if task in deps:
follow_chain(othertask, endtask, chain)
chain.pop()
follow_chain(task_start, task_end)
return outtasks

View File

@@ -28,16 +28,22 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import pickle
from collections import defaultdict
import bb.utils
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "152"
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "149"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -71,10 +77,10 @@ class RecipeInfoCommon(object):
@classmethod
def flaglist(cls, flag, varlist, metadata, squash=False):
out_dict = dict((var, metadata.getVarFlag(var, flag))
out_dict = dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
if squash:
return dict((k,v) for (k,v) in out_dict.items() if v)
return dict((k,v) for (k,v) in out_dict.iteritems() if v)
else:
return out_dict
@@ -86,9 +92,9 @@ class RecipeInfoCommon(object):
class CoreRecipeInfo(RecipeInfoCommon):
__slots__ = ()
cachefile = "bb_cache.dat"
cachefile = "bb_cache.dat"
def __init__(self, filename, metadata):
def __init__(self, filename, metadata):
self.file_depends = metadata.getVar('__depends', False)
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
@@ -107,7 +113,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.pn = self.getvar('PN', metadata)
self.packages = self.listvar('PACKAGES', metadata)
if not self.packages:
if not self.pn in self.packages:
self.packages.append(self.pn)
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
@@ -122,7 +128,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -217,7 +223,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.packages_dynamic[package].append(fn)
# Build hash of runtime depends and recommends
for package in self.packages:
for package in self.packages + [self.pn]:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
@@ -234,7 +240,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.universe_target.append(self.pn)
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.items():
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
@@ -244,144 +250,23 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('multiconfig:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls, mc)
def realfn2virtual(realfn, cls, mc):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls:
realfn = "virtual:" + cls + ":" + realfn
if mc:
realfn = "multiconfig:" + mc + ":" + realfn
return realfn
def variant2virtual(realfn, variant):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
if variant.startswith("multiconfig:"):
elems = variant.split(":")
if elems[2]:
return "multiconfig:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
return "multiconfig:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["multiconfig:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
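For orientation, a hypothetical sketch of the dictionary shape load_bbfile() returns for a recipe foo.bb with BBCLASSEXTEND = "native" and one extra multiconfig "mymc" (keys follow the variant/multiconfig scheme above):
# {
#     "":                         <datastore for foo.bb>,
#     "native":                   <datastore for virtual:native:foo.bb>,
#     "multiconfig:mymc:":        <datastore for multiconfig:mymc:foo.bb>,
#     "multiconfig:mymc:native":  <datastore for multiconfig:mymc:virtual:native:foo.bb>,
# }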
class Cache(NoCache):
class Cache(object):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, data_hash, caches_array):
super().__init__(databuilder)
data = databuilder.data
def __init__(self, data, data_hash, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE")
self.cachedir = data.getVar("CACHE", True)
self.clean = set()
self.checked = set()
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash
@@ -395,87 +280,78 @@ class Cache(NoCache):
self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
logger.debug(1, "Cache dir: %s", self.cachedir)
logger.debug(1, "Using cache in '%s'", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
else:
logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
def load_cachefile(self):
# Firstly, using core cache file information for
# valid checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
cachesize = 0
previous_progress = 0
previous_percent = 0
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
logger.debug(1, 'Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
# Load the rest of the cache file
current_progress = 0
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if not isinstance(key, str):
bb.warn("%s from extras cache is not a string?" % key)
break
if not isinstance(value, RecipeInfoCommon):
bb.warn("%s from extras cache is not a RecipeInfoCommon class?" % value)
break
if key in self.depends_cache:
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
if current_progress > cachesize:
# we might have calculated incorrect total size because a file
# might've been written out just after we checked its size
cachesize = current_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
previous_progress += current_progress
# Note: the depends cache count corresponds to the number of files parsed;
# one file may contribute several caches but still counts as a single item.
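A self-contained sketch (not the BitBake API) of the on-disk layout this loader expects: two version headers, then alternating key/value records until EOF:
import pickle

def write_cache(path, cache_version, bb_version, entries):
    with open(path, "wb") as f:
        p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
        p.dump(cache_version)   # header 1: cache format version
        p.dump(bb_version)      # header 2: bitbake version
        for key, value in entries.items():
            p.dump(key)
            p.dump(value)

def read_cache(path):
    entries = {}
    with open(path, "rb") as f:
        u = pickle.Unpickler(f)
        cache_version, bb_version = u.load(), u.load()
        while True:
            try:
                key, value = u.load(), u.load()
            except EOFError:    # end of the record stream
                break
            entries.setdefault(key, []).append(value)
    return cache_version, bb_version, entries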
@@ -483,33 +359,69 @@ class Cache(NoCache):
len(self.depends_cache)),
self.data)
def parse(self, filename, appends):
@staticmethod
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls)
@staticmethod
def realfn2virtual(realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
return realfn
return "virtual:" + cls + ":" + realfn
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
logger.debug(1, "Parsing %s (full)", fn)
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
"""Parse the specified filename, returning the recipe information"""
logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends)
datastores = cls.load_bbfile(filename, appends, configdata)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
for variant, data in sorted(datastores.items(),
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = variant2virtual(filename, variant)
variants.append(variant)
virtualfn = cls.realfn2virtual(filename, variant)
depends = depends + (data.getVar("__depends", False) or [])
if depends and not variant:
data.setVar("__depends", depends)
if virtualfn == filename:
data.setVar("__VARIANTS", " ".join(variants))
info_array = []
for cache_class in self.caches_array:
info = cache_class(filename, data)
info_array.append(info)
for cache_class in caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
return infos
def load(self, filename, appends):
def load(self, filename, appends, configdata):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
@@ -523,20 +435,21 @@ class Cache(NoCache):
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata, self.caches_array)
return cached, infos
def loadData(self, fn, appends, cacheData):
def loadData(self, fn, appends, cfgData, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
@@ -619,13 +532,13 @@ class Cache(NoCache):
a = fl.find(":True")
b = fl.find(":False")
if ((a < 0) and b) or ((b > 0) and (b < a)):
f = fl[:b+6]
fl = fl[b+7:]
elif ((b < 0) and a) or ((a > 0) and (a < b)):
f = fl[:a+5]
fl = fl[a+6:]
else:
break
fl = fl.strip()
if "*" in f:
continue
@@ -644,19 +557,16 @@ class Cache(NoCache):
invalid = False
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
logger.debug(2, "Cache: Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
if invalid:
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
@@ -693,19 +603,30 @@ class Cache(NoCache):
logger.debug(2, "Cache is clean, not saving.")
return
file_dict = {}
pickler_dict = {}
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
p.dump(bb.__version__)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
file_dict[cache_class_name] = open(cachefile, "wb")
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
for key, info_array in self.depends_cache.items():
for info in info_array:
if isinstance(info, RecipeInfoCommon) and info.__class__.__name__ == cache_class_name:
p.dump(key)
p.dump(info)
try:
for key, info_array in self.depends_cache.iteritems():
for info in info_array:
if isinstance(info, RecipeInfoCommon):
cache_class_name = info.__class__.__name__
pickler_dict[cache_class_name].dump(key)
pickler_dict[cache_class_name].dump(info)
finally:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
file_dict[cache_class_name].close()
del self.depends_cache
@@ -733,13 +654,50 @@ class Cache(NoCache):
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
realfn = self.virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
@staticmethod
def load_bbfile(bbfile, appends, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import parse
# expand tmpdir to include this topdir
config.setVar('TMPDIR', config.getVar('TMPDIR', True) or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
bb_data = config.createCopy()
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
def init(cooker):
"""
@@ -769,9 +727,8 @@ class CacheData(object):
def __init__(self, caches_array):
self.caches_array = caches_array
for cache_class in self.caches_array:
if not issubclass(cache_class, RecipeInfoCommon):
bb.error("Extra cache data class %s should subclass RecipeInfoCommon class" % cache_class)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class.init_cacheData(self)
# Direct cache variables
self.task_queues = {}
@@ -799,8 +756,8 @@ class MultiProcessCache(object):
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
@@ -889,3 +846,4 @@ class MultiProcessCache(object):
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)


@@ -19,13 +19,20 @@ import glob
import operator
import os
import stat
import pickle
import bb.utils
import logging
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -97,8 +104,6 @@ class FileChecksumCache(MultiProcessCache):
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:


@@ -1,39 +1,21 @@
"""
BitBake code parser
Parses actual code (i.e. python and shell) for functions and in-line
expressions. Used mainly to determine dependencies on other functions
and variables within the BitBake metadata. Also provides a cache for
this information in order to speed up processing.
(Not to be confused with the code that parses the metadata itself,
see lib/bb/parse/ for that).
NOTE: if you change how the parsers gather information you will almost
certainly need to increment CodeParserCache.CACHE_VERSION below so that
any existing codeparser cache gets invalidated. Additionally you'll need
to increment __cache_version__ in cache.py in order to ensure that old
recipe caches don't trigger "Taskhash mismatch" errors.
"""
import ast
import sys
import codegen
import logging
import pickle
import bb.pysh as pysh
import os.path
import bb.utils, bb.data
import hashlib
from itertools import chain
from bb.pysh import pyshyacc, pyshlex, sherrors
from pysh import pyshyacc, pyshlex, sherrors
from bb.cache import MultiProcessCache
logger = logging.getLogger('BitBake.CodeParser')
def bbhash(s):
return hashlib.md5(s.encode("utf-8")).hexdigest()
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
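The move from the built-in hash() (old code below) to the bbhash() helper above is significant: Python 3 randomizes hash() per interpreter run via PYTHONHASHSEED, so it cannot key a persistent cache, whereas an md5 hexdigest is stable across runs:
print(bbhash("do_compile"))  # identical output in every interpreter run
print(hash("do_compile"))    # varies between runs (hash randomization)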
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
@@ -86,12 +68,11 @@ class SetCache(object):
new = []
for i in items:
new.append(sys.intern(i))
new.append(intern(i))
s = frozenset(new)
h = hash(s)
if h in self.setcache:
return self.setcache[h]
self.setcache[h] = s
if hash(s) in self.setcache:
return self.setcache[hash(s)]
self.setcache[hash(s)] = s
return s
codecache = SetCache()
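Illustrative use of the interning cache above (in BitBake the enclosing method is named internSet()): equal inputs come back as the very same frozenset object, so repeated dependency sets share storage:
a = codecache.internSet(["PN", "PV"])
b = codecache.internSet(["PV", "PN"])
assert a is b   # identity, not just equality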
@@ -136,11 +117,7 @@ class shellCacheLine(object):
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
# NOTE: you must increment this if you change how the parsers gather information,
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 10
CACHE_VERSION = 7
def __init__(self):
MultiProcessCache.__init__(self)
@@ -209,15 +186,12 @@ class BufferedLogger(Logger):
def flush(self):
for record in self.buffer:
if self.target.isEnabledFor(record.levelno):
self.target.handle(record)
self.buffer = []
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
containsfuncs = ("bb.utils.contains", "base_contains")
containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
getvars = (".getVar", ".appendVar", ".prependVar")
containsfuncs = ("bb.utils.contains", "base_contains", "bb.utils.contains_any")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
@@ -236,24 +210,15 @@ class PythonParser():
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if name and name.endswith(self.getvars) or name in self.containsfuncs:
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
self.references.add(varname)
else:
self.references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
@@ -303,7 +268,7 @@ class PythonParser():
if not node or not node.strip():
return
h = bbhash(str(node))
h = hash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -348,7 +313,7 @@ class ShellParser():
commands it executes.
"""
h = bbhash(str(value))
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = set(codeparsercache.shellcache[h].execs)
@@ -371,7 +336,8 @@ class ShellParser():
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
self.process_tokens(tokens)
for token in tokens:
self.process_tokens(token)
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
@@ -417,24 +383,18 @@ class ShellParser():
"case_clause": case_clause,
}
def process_token_list(tokens):
for token in tokens:
if isinstance(token, list):
process_token_list(token)
continue
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
process_token_list(tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes


@@ -28,15 +28,8 @@ and must not trigger events, directly or indirectly.
Commands are queued in a CommandQueue
"""
from collections import OrderedDict, defaultdict
import bb.event
import bb.cooker
import bb.remotedata
class DataStoreConnectionHandle(object):
def __init__(self, dsindex=0):
self.dsindex = dsindex
class CommandCompleted(bb.event.Event):
pass
@@ -50,8 +43,6 @@ class CommandFailed(CommandExit):
def __init__(self, message):
self.error = message
CommandExit.__init__(self, 1)
def __str__(self):
return "Command execution failed: %s" % self.error
class CommandError(Exception):
pass
@@ -64,7 +55,6 @@ class Command:
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = bb.remotedata.RemoteDatastores(cooker)
# FIXME Add lock for this
self.currentAsyncCommand = None
@@ -78,8 +68,7 @@ class Command:
if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
if getattr(command_method, 'needconfig', False):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
@@ -99,7 +88,6 @@ class Command:
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
@@ -122,7 +110,7 @@ class Command:
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
@@ -137,23 +125,14 @@ class Command:
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
bb.event.fire(CommandFailed(msg), self.cooker.data)
bb.event.fire(CommandFailed(msg), self.cooker.expanded_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.data)
bb.event.fire(CommandExit(code), self.cooker.expanded_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
bb.event.fire(CommandCompleted(), self.cooker.expanded_data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
def reset(self):
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
def split_mc_pn(pn):
if pn.startswith("multiconfig:"):
_, mc, pn = pn.split(":", 2)
return (mc, pn)
return ('', pn)
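Illustrative behaviour of the helper above:
assert split_mc_pn("multiconfig:mymc:busybox") == ("mymc", "busybox")
assert split_mc_pn("busybox") == ("", "busybox")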
class CommandsSync:
"""
A class of synchronous commands
@@ -200,7 +179,6 @@ class CommandsSync:
"""
varname = params[0]
value = str(params[1])
command.cooker.extraconfigdata[varname] = value
command.cooker.data.setVar(varname, value)
def getSetVariable(self, command, params):
@@ -240,15 +218,59 @@ class CommandsSync:
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False
def getCpuCount(self, command, params):
"""
Get the CPU count on the bitbake server
"""
return bb.utils.cpu_count()
getCpuCount.readonly = True
getCpuCount.needconfig = False
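A hypothetical synchronous command following the same convention: per-method attributes tell the dispatcher it is safe in readonly mode and does not need a parsed configuration (assumes an import of os at module scope):
def getServerPid(self, command, params):
    return os.getpid()
getServerPid.readonly = True
getServerPid.needconfig = False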
def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False
def getUIHandlerNum(self, command, params):
return bb.event.get_uihandler()
getUIHandlerNum.needconfig = False
getUIHandlerNum.readonly = True
def generateNewImage(self, command, params):
image = params[0]
base_image = params[1]
package_queue = params[2]
timestamp = params[3]
description = params[4]
return command.cooker.generateNewImage(image, base_image,
package_queue, timestamp, description)
def ensureDir(self, command, params):
directory = params[0]
bb.utils.mkdirhier(directory)
ensureDir.needconfig = False
def setVarFile(self, command, params):
"""
Save a variable in a file; used for saving in a configuration file
"""
var = params[0]
val = params[1]
default_file = params[2]
op = params[3]
command.cooker.modifyConfigurationVar(var, val, default_file, op)
setVarFile.needconfig = False
def removeVarFile(self, command, params):
"""
Remove a variable declaration from a file
"""
var = params[0]
command.cooker.removeConfigurationVar(var)
removeVarFile.needconfig = False
def createConfigFile(self, command, params):
"""
Create an extra configuration file
"""
name = params[0]
command.cooker.createConfigFile(name)
createConfigFile.needconfig = False
def setEventMask(self, command, params):
handlerNum = params[0]
@@ -273,307 +295,9 @@ class CommandsSync:
def updateConfig(self, command, params):
options = params[0]
environment = params[1]
cmdline = params[2]
command.cooker.updateConfigOpts(options, environment, cmdline)
command.cooker.updateConfigOpts(options, environment)
updateConfig.needconfig = False
def parseConfiguration(self, command, params):
"""Instruct bitbake to parse its configuration
NOTE: it is only necessary to call this if you aren't calling any normal action
(otherwise parsing is taken care of automatically)
"""
command.cooker.parseConfiguration()
parseConfiguration.needconfig = False
def getLayerPriorities(self, command, params):
command.cooker.parseConfiguration()
ret = []
# regex objects cannot be marshalled by xmlrpc
for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
ret.append((collection, pattern, regex.pattern, pri))
return ret
getLayerPriorities.readonly = True
def getRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].pkg_pn.items())
getRecipes.readonly = True
def getRecipeDepends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].deps.items())
getRecipeDepends.readonly = True
def getRecipeVersions(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_pepvpr
getRecipeVersions.readonly = True
def getRecipeProvides(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].fn_provides
getRecipeProvides.readonly = True
def getRecipePackages(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].packages
getRecipePackages.readonly = True
def getRecipePackagesDynamic(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].packages_dynamic
getRecipePackagesDynamic.readonly = True
def getRProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].rproviders
getRProviders.readonly = True
def getRuntimeDepends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
rundeps = command.cooker.recipecaches[mc].rundeps
for key, value in rundeps.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeDepends.readonly = True
def getRuntimeRecommends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
runrecs = command.cooker.recipecaches[mc].runrecs
for key, value in runrecs.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeRecommends.readonly = True
def getRecipeInherits(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].inherits
getRecipeInherits.readonly = True
def getBbFilePriority(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].bbfile_priority
getBbFilePriority.readonly = True
def getDefaultPreference(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True
def getSkippedRecipes(self, command, params):
# Return list sorted by reverse priority order
import bb.cache
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(),
key=lambda x: (-command.cooker.collection.calc_bbfile_priority(bb.cache.virtualfn2realfn(x[0])[0]), x[0])))
return list(skipdict.items())
getSkippedRecipes.readonly = True
def getOverlayedRecipes(self, command, params):
return list(command.cooker.collection.overlayed.items())
getOverlayedRecipes.readonly = True
def getFileAppends(self, command, params):
fn = params[0]
return command.cooker.collection.get_file_appends(fn)
getFileAppends.readonly = True
def getAllAppends(self, command, params):
return command.cooker.collection.bbappends
getAllAppends.readonly = True
def findProviders(self, command, params):
return command.cooker.findProviders()
findProviders.readonly = True
def findBestProvider(self, command, params):
(mc, pn) = split_mc_pn(params[0])
return command.cooker.findBestProvider(pn, mc)
findBestProvider.readonly = True
def allProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(bb.providers.allProviders(command.cooker.recipecaches[mc]).items())
allProviders.readonly = True
def getRuntimeProviders(self, command, params):
rprovide = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
all_p = bb.providers.getRuntimeProviders(command.cooker.recipecaches[mc], rprovide)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rprovide,
command.cooker.data,
command.cooker.recipecaches[mc])[0][0]
else:
best = None
return all_p, best
getRuntimeProviders.readonly = True
def dataStoreConnectorFindVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
value, overridedata = datastore._findVar(name)
if value:
content = value.get('_content', None)
if isinstance(content, bb.data_smart.DataSmart):
# Value is a datastore (e.g. BB_ORIGENV) - need to handle this carefully
idx = command.remotedatastores.check_store(content, True)
return {'_content': DataStoreConnectionHandle(idx),
'_connector_origtype': 'DataStoreConnectionHandle',
'_connector_overrides': overridedata}
elif isinstance(content, set):
return {'_content': list(content),
'_connector_origtype': 'set',
'_connector_overrides': overridedata}
else:
value['_connector_overrides'] = overridedata
else:
value = {}
value['_connector_overrides'] = overridedata
return value
dataStoreConnectorFindVar.readonly = True
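Hypothetical shapes of the value dataStoreConnectorFindVar() returns, as implied by the branches above:
# Plain value:
#   {'_content': 'some value', '_connector_overrides': {...}}
# Nested datastore (e.g. BB_ORIGENV):
#   {'_content': DataStoreConnectionHandle(idx),
#    '_connector_origtype': 'DataStoreConnectionHandle',
#    '_connector_overrides': {...}}
# Set value:
#   {'_content': [...], '_connector_origtype': 'set',
#    '_connector_overrides': {...}}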
def dataStoreConnectorGetKeys(self, command, params):
dsindex = params[0]
datastore = command.remotedatastores[dsindex]
return list(datastore.keys())
dataStoreConnectorGetKeys.readonly = True
def dataStoreConnectorGetVarHistory(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
return datastore.varhistory.variable(name)
dataStoreConnectorGetVarHistory.readonly = True
def dataStoreConnectorExpandPythonRef(self, command, params):
config_data_dict = params[0]
varname = params[1]
expr = params[2]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
varparse = bb.data_smart.VariableParse(varname, config_data)
return varparse.python_sub(expr)
def dataStoreConnectorRelease(self, command, params):
dsindex = params[0]
if dsindex <= 0:
raise CommandError('dataStoreConnectorRelease: invalid index %d' % dsindex)
command.remotedatastores.release(dsindex)
def dataStoreConnectorSetVarFlag(self, command, params):
dsindex = params[0]
name = params[1]
flag = params[2]
value = params[3]
datastore = command.remotedatastores[dsindex]
datastore.setVarFlag(name, flag, value)
def dataStoreConnectorDelVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
if len(params) > 2:
flag = params[2]
datastore.delVarFlag(name, flag)
else:
datastore.delVar(name)
def dataStoreConnectorRenameVar(self, command, params):
dsindex = params[0]
name = params[1]
newname = params[2]
datastore = command.remotedatastores[dsindex]
datastore.renameVar(name, newname)
def parseRecipeFile(self, command, params):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
"""
fn = params[0]
appends = params[1]
appendlist = params[2]
if len(params) > 3:
config_data_dict = params[3]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
else:
config_data = None
if appends:
if appendlist is not None:
appendfiles = appendlist
else:
appendfiles = command.cooker.collection.get_file_appends(fn)
else:
appendfiles = []
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
# version of)
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles)['']
else:
# Use the standard path
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
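A sketch of how a UI client might invoke this over the command API (the recipe path is hypothetical; the runCommand usage mirrors updateToServer() later in this compare):
# ret, error = server.runCommand(["parseRecipeFile", "/meta/recipes/foo.bb", True, None])
# On success, ret is a DataStoreConnectionHandle whose dsindex addresses
# the parsed environment held in the server's RemoteDatastores.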
class CommandsAsync:
"""
A class of asynchronous commands
@@ -587,15 +311,8 @@ class CommandsAsync:
"""
bfile = params[0]
task = params[1]
if len(params) > 2:
internal = params[2]
else:
internal = False
if internal:
command.cooker.buildFileInternal(bfile, task, fireevents=False, quietlog=True)
else:
command.cooker.buildFile(bfile, task)
command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
@@ -645,6 +362,17 @@ class CommandsAsync:
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findCoreBaseFiles(self, command, params):
"""
Find certain files in the COREBASE directory, i.e. layers
"""
subdir = params[0]
filename = params[1]
command.cooker.findCoreBaseFiles(subdir, filename)
command.finishAsyncCommand()
findCoreBaseFiles.needcache = False
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
@@ -744,22 +472,3 @@ class CommandsAsync:
command.finishAsyncCommand()
resetCooker.needcache = False
def clientComplete(self, command, params):
"""
Do the right thing when the controlling client exits
"""
command.cooker.clientComplete()
command.finishAsyncCommand()
clientComplete.needcache = False
def findSigInfo(self, command, params):
"""
Find signature info files via the signature generator
"""
pn = params[0]
taskname = params[1]
sigs = params[2]
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.data)
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.data)
command.finishAsyncCommand()
findSigInfo.needcache = False

File diff suppressed because it is too large


@@ -22,11 +22,9 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import os
import re
import sys
import os, sys
from functools import wraps
import logging
import bb
from bb import data
import bb.parse
@@ -41,6 +39,10 @@ class ConfigParameters(object):
self.options.pkgs_to_build = targets or []
self.options.tracking = False
if hasattr(self.options, "show_environment") and self.options.show_environment:
self.options.tracking = True
for key, val in self.options.__dict__.items():
setattr(self, key, val)
@@ -69,15 +71,15 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
for o in ["abort", "force", "invalidate_stamp",
"verbose", "debug", "dry_run", "dump_signatures",
for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp",
"verbose", "debug", "dry_run", "dump_signatures",
"debug_domains", "extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout"]:
"prefile", "postfile"]:
options[o] = getattr(self.options, o)
ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
ret, error = server.runCommand(["updateConfig", options, environment])
if error:
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
def parseActions(self):
# Parse any commandline into actions
@@ -127,6 +129,8 @@ class CookerConfiguration(object):
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.prefile_server = []
self.postfile_server = []
self.debug = 0
self.cmd = None
self.abort = True
@@ -138,13 +142,8 @@ class CookerConfiguration(object):
self.dump_signatures = []
self.dry_run = False
self.tracking = False
self.xmlrpcinterface = []
self.server_timeout = None
self.interface = []
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
self.runall = []
self.runonly = []
self.env = {}
@@ -153,6 +152,7 @@ class CookerConfiguration(object):
if key in parameters.options.__dict__:
setattr(self, key, parameters.options.__dict__[key])
self.env = parameters.environment.copy()
self.tracking = parameters.tracking
def setServerRegIdleCallback(self, srcb):
self.server_register_idlecallback = srcb
@@ -168,7 +168,7 @@ class CookerConfiguration(object):
def __setstate__(self,state):
for k in state:
setattr(self, k, state[k])
def catch_parse_error(func):
@@ -192,8 +192,7 @@ def catch_parse_error(func):
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
sys.exit(1)
parselog.critical("Unable to parse %s", fn, exc_info=(exc_class, exc, tb))
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
sys.exit(1)
@@ -210,7 +209,7 @@ def _inherit(bbclass, data):
def findConfigFile(configfile, data):
search = []
bbpath = data.getVar("BBPATH")
bbpath = data.getVar("BBPATH", True)
if bbpath:
for i in bbpath.split(":"):
search.append(os.path.join(i, "conf", configfile))
@@ -225,27 +224,6 @@ def findConfigFile(configfile, data):
return None
#
# We search for a conf/bblayers.conf under an entry in BBPATH, or in cwd working
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#
def findTopdir():
d = bb.data.init()
bbpath = None
if 'BBPATH' in os.environ:
bbpath = os.environ['BBPATH']
d.setVar('BBPATH', bbpath)
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
if bbpath:
bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
if bitbakeconf:
return os.path.dirname(os.path.dirname(bitbakeconf))
return None
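Illustrative behaviour of findTopdir(), with hypothetical paths:
# BBPATH=/home/user/build and /home/user/build/conf/bblayers.conf exists
#   -> returns "/home/user/build" (two levels above bblayers.conf)
# only <entry>/conf/bitbake.conf found on BBPATH
#   -> returns that BBPATH entry (two levels above bitbake.conf)
# neither found
#   -> returns None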
class CookerDataBuilder(object):
def __init__(self, cookercfg, worker = False):
@@ -256,9 +234,9 @@ class CookerDataBuilder(object):
bb.utils.set_context(bb.utils.clean_context())
bb.event.set_class_handlers(bb.event.clean_class_handlers())
self.basedata = bb.data.init()
self.data = bb.data.init()
if self.tracking:
self.basedata.enableTracking()
self.data.enableTracking()
# Keep a datastore of the initial environment variables and their
# values from when BitBake was launched to enable child processes
@@ -269,51 +247,16 @@ class CookerDataBuilder(object):
self.savedenv.setVar(k, cookercfg.env[k])
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
bb.data.inheritFromOS(self.data, self.savedenv, filtered_keys)
self.data.setVar("BB_ORIGENV", self.savedenv)
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
self.data = self.basedata
self.mcdata = {}
self.data.setVar("BB_WORKERCONTEXT", "1")
def parseBaseConfiguration(self):
try:
bb.parse.init_parser(self.basedata)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
reparse_cnt = 0
while self.data.getVar("BB_INVALIDCONF", False) is True:
if reparse_cnt > 20:
logger.error("Configuration has been re-parsed over 20 times, "
"breaking out of the loop...")
raise Exception("Too deep config re-parse loop. Check locations where "
"BB_INVALIDCONF is being set (ConfigParsed event handlers)")
self.data.setVar("BB_INVALIDCONF", False)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
reparse_cnt += 1
bb.event.fire(bb.event.ConfigParsed(), self.data)
bb.parse.init_parser(self.data)
self.data_hash = self.data.get_hash()
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
except (SyntaxError, bb.BBHandledException):
self.parseConfigurationFiles(self.prefiles, self.postfiles)
except SyntaxError:
raise bb.BBHandledException
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
@@ -322,24 +265,12 @@ class CookerDataBuilder(object):
logger.exception("Error parsing configuration files")
raise bb.BBHandledException
# Create a copy so we can reset at a later date when UIs disconnect
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'origdata'):
return
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
def parseConfigurationFiles(self, prefiles, postfiles, mc = "default"):
data = bb.data.createCopy(self.basedata)
data.setVar("BB_CURRENT_MC", mc)
def parseConfigurationFiles(self, prefiles, postfiles):
data = self.data
bb.parse.init_parser(data)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
@@ -353,53 +284,23 @@ class CookerDataBuilder(object):
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
layers = (data.getVar('BBLAYERS') or "").split()
layers = (data.getVar('BBLAYERS', True) or "").split()
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
for layer in layers:
if not os.path.isdir(layer):
parselog.critical("Layer directory '%s' does not exist! "
"Please check BBLAYERS in %s" % (layer, layerconf))
sys.exit(1)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
layer = layer.rstrip('/')
data.setVar('LAYERDIR', layer)
data.setVar('LAYERDIR_RE', re.escape(layer))
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
invalid = []
for entry in bbfiles_dynamic:
parts = entry.split(":", 1)
if len(parts) != 2:
invalid.append(entry)
continue
l, f = parts
if l in collections:
data.appendVar("BBFILES", " " + f)
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form <collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
for c in collections:
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
if not data.getVar("BBPATH"):
if not data.getVar("BBPATH", True):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
@@ -414,7 +315,7 @@ class CookerDataBuilder(object):
data = parse_config_file(p, data)
# Handle any INHERITs and inherit the base class
bbclasses = ["base"] + (data.getVar('INHERIT') or "").split()
bbclasses = ["base"] + (data.getVar('INHERIT', True) or "").split()
for bbclass in bbclasses:
data = _inherit(bbclass, data)
@@ -422,13 +323,23 @@ class CookerDataBuilder(object):
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
sys.exit(1)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
bb.event.fire(bb.event.ConfigParsed(), data)
if data.getVar("BB_INVALIDCONF", False) is True:
data.setVar("BB_INVALIDCONF", False)
self.parseConfigurationFiles(self.prefiles, self.postfiles)
return
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.data = data
self.data_hash = data.get_hash()
return data


@@ -1,14 +1,48 @@
"""
Python Daemonizing helper
Originally based on code Copyright (C) 2005 Chad J. Schroeder but now heavily modified
to allow a function to be daemonized and return for bitbake use by Richard Purdie
Configurable daemon behaviors:
1.) The current working directory set to the "/" directory.
2.) The current file creation mode mask set to 0.
3.) Close all open files (1024).
4.) Redirect standard I/O streams to "/dev/null".
A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
http://www.apuebook.com/apue3e.html
2) The Linux Programming Interface: Michael Kerrisk
http://man7.org/tlpi/index.html
3) Unix Programming Frequently Asked Questions:
http://www.faqs.org/faqs/unix-faq/programmer/faq/
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""
import os
import sys
import io
import traceback
__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None
# Default maximum for the number of available file descriptors.
MAXFD = 1024
# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
REDIRECT_TO = os.devnull
else:
REDIRECT_TO = "/dev/null"
def createDaemon(function, logfile):
"""
@@ -16,10 +50,6 @@ def createDaemon(function, logfile):
background as a daemon, returning control to the caller.
"""
# Ensure stdout/stderror are flushed before forking to avoid duplicate output
sys.stdout.flush()
sys.stderr.flush()
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
@@ -35,6 +65,36 @@ def createDaemon(function, logfile):
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
# Is ignoring SIGHUP necessary?
#
# It's often suggested that the SIGHUP signal should be ignored before
# the second fork to avoid premature termination of the process. The
# reason is that when the first child terminates, all processes, e.g.
# the second child, in the orphaned group will be sent a SIGHUP.
#
# "However, as part of the session management system, there are exactly
# two cases where SIGHUP is sent on the death of a process:
#
# 1) When the process that dies is the session leader of a session that
# is attached to a terminal device, SIGHUP is sent to all processes
# in the foreground process group of that terminal device.
# 2) When the death of a process causes a process group to become
# orphaned, and one or more processes in the orphaned group are
# stopped, then SIGHUP and SIGCONT are sent to all members of the
# orphaned group." [2]
#
# The first case can be ignored since the child is guaranteed not to have
# a controlling terminal. The second case isn't so easy to dismiss.
# The process group is orphaned when the first child terminates and
# POSIX.1 requires that every STOPPED process in an orphaned process
# group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
# second child is not STOPPED though, we can safely forego ignoring the
# SIGHUP signal. In any case, there are no ill-effects if it is ignored.
#
# import signal # Set handlers for asynchronous events.
# signal.signal(signal.SIGHUP, signal.SIG_IGN)
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
@@ -48,46 +108,86 @@ def createDaemon(function, logfile):
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid != 0):
if (pid == 0): # The second child.
# We probably don't want the file mode creation mask inherited from
# the parent, so we give the child complete control over permissions.
if UMASK is not None:
os.umask(UMASK)
else:
# Parent (the first child) of the second child.
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors, but doesn't flush any buffered output.
# Using exit() may cause any temporary files to be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
os._exit(0)
else:
os.waitpid(pid, 0)
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors. Using exit() may cause all stdio
# streams to be flushed twice and any temporary files may be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
return
# The second child.
# Close all open file descriptors. This prevents the child from keeping
# open any file descriptors inherited from the parent. There is a variety
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exist, use
# the default value (configurable).
#
# try:
# maxfd = os.sysconf("SC_OPEN_MAX")
# except (AttributeError, ValueError):
# maxfd = MAXFD
#
# OR
#
# if (os.sysconf_names.has_key("SC_OPEN_MAX")):
# maxfd = os.sysconf("SC_OPEN_MAX")
# else:
# maxfd = MAXFD
#
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is no limit on the
# resource, use the default value.
#
import resource # Resource usage information.
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = MAXFD
# Iterate through and close all file descriptors.
# for fd in range(0, maxfd):
# try:
# os.close(fd)
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
# pass
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
# Redirect the standard I/O file descriptors to the specified file. Since
# the daemon has no controlling terminal, most daemons redirect stdin,
# stdout, and stderr to /dev/null. This is done to prevent side-effects
# from reads and writes to the standard I/O file descriptors.
try:
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = open(logfile, 'a+')
# This call to open is guaranteed to return the lowest file descriptor,
# which will be 0 (stdin), since it was closed above.
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
# Duplicate standard input to standard output and standard error.
# os.dup2(0, 1) # standard output (1)
# os.dup2(0, 2) # standard error (2)
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
# Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
function()
os._exit(0)
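Illustrative use of createDaemon() (hypothetical function and log path): the caller returns immediately while the detached grandchild runs the function with stdout/stderr appended to the log:
def serve():
    print("daemon running")   # ends up in the log file

createDaemon(serve, "/tmp/bitbake-daemon.log")
# ... the caller continues here in the original process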


@@ -78,6 +78,59 @@ def initVar(var, d):
"""Non-destructive var init for data structure"""
d.initVar(var)
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
def getVar(var, d, exp = False):
"""Gets the value of a variable"""
return d.getVar(var, exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
d.delVar(var)
def appendVar(var, value, d):
"""Append additional value to a variable"""
d.appendVar(var, value)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag, False)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
Note:
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
"""
d.setVarFlags(var, flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
return d.keys()
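These wrappers simply forward to the datastore object; a minimal illustration using this module's init():
d = init()
setVar("FOO", "bar", d)
assert getVar("FOO", d) == "bar"   # equivalent to d.getVar("FOO", False)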
@@ -121,7 +174,7 @@ def inheritFromOS(d, savedenv, permitted):
for s in savedenv.keys():
if s in permitted:
try:
d.setVar(s, savedenv.getVar(s), op = 'from env')
d.setVar(s, savedenv.getVar(s, True), op = 'from env')
if s in exportlist:
d.setVarFlag(s, "export", True, op = 'auto env export')
except TypeError:
@@ -129,19 +182,19 @@ def inheritFromOS(d, savedenv, permitted):
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
func = d.getVarFlag(var, "func", False)
if d.getVarFlag(var, 'python', False) and func:
if d.getVarFlag(var, "python", False):
return False
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
func = d.getVarFlag(var, "func", False)
if not all and not export and not unexport and not func:
return False
try:
if all:
oval = d.getVar(var, False)
val = d.getVar(var)
val = d.getVar(var, True)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
@@ -196,7 +249,7 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in sorted(keys):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
def exported_keys(d):
@@ -205,13 +258,11 @@ def exported_keys(d):
not d.getVarFlag(key, 'unexport', False))
def exported_vars(d):
k = list(exported_keys(d))
for key in k:
for key in exported_keys(d):
try:
value = d.getVar(key)
except Exception as err:
bb.warn("%s: Unable to export ${%s}: %s" % (d.getVar("FILE"), key, err))
continue
value = d.getVar(key, True)
except Exception:
pass
if value is not None:
yield key, str(value)
@@ -220,13 +271,13 @@ def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func", False))
for key in sorted(keys):
for key in keys:
emit_var(key, o, d, False)
o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func))
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
@@ -235,8 +286,8 @@ def emit_func(func, o=sys.__stdout__, d = init()):
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
_functionfmt = """
@@ -259,7 +310,7 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
pp = bb.codeparser.PythonParser(func, logger)
pp.parse_python(d.getVar(func, False))
newdeps = pp.execs
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
@@ -271,7 +322,7 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
pp = bb.codeparser.PythonParser(dep, logger)
pp.parse_python(d.getVar(dep, False))
newdeps |= pp.execs
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
def update_data(d):
@@ -288,21 +339,19 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "vardepvalueexclude", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
value = d.getVarFlag(key, "_content", False)
value = d.getVar(key, False)
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
l = (d.getVar(k, True) or "").split()
for word in sorted(contains[k]):
if word in l:
newvalue += "\n%s{%s} = Set" % (k, word)
else:
newvalue += "\n%s{%s} = Set" % (k, item)
newvalue += "\n%s{%s} = Unset" % (k, word)
if not newvalue:
return value
if not value:
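A hedged sketch of the Set/Unset bookkeeping handle_contains performs above (mirroring the word-level variant), assuming `contains` maps a variable name to the words whose membership was tested:

def handle_contains_sketch(value, contains, getvar):
    newvalue = ""
    for k in sorted(contains):
        have = (getvar(k) or "").split()
        for word in sorted(contains[k]):
            state = "Set" if word in have else "Unset"
            newvalue += "\n%s{%s} = %s" % (k, word, state)
    return (value or "") + newvalue

features = {"DISTRO_FEATURES": "x11 systemd"}
print(handle_contains_sketch("base", {"DISTRO_FEATURES": {"x11", "wayland"}},
                             features.get))
# base
# DISTRO_FEATURES{wayland} = Unset
# DISTRO_FEATURES{x11} = Set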
@@ -315,7 +364,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
if varflags.get("python"):
parser = bb.codeparser.PythonParser(key, logger)
if value and "\t" in value:
logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE")))
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
@@ -334,8 +383,6 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = deps | set(varflags["prefuncs"].split())
if "postfuncs" in varflags:
deps = deps | set(varflags["postfuncs"].split())
if "exports" in varflags:
deps = deps | set(varflags["exports"].split())
else:
parser = d.expandWithRefs(value, key)
deps |= parser.references
@@ -359,8 +406,6 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= set((vardeps or "").split())
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
@@ -372,7 +417,7 @@ def generate_dependencies(d):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
deps = {}
values = {}

View File

@@ -39,7 +39,7 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
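A hedged illustration of how the setvar regex (the [^A-Z]* variant shown above) splits an override-style assignment name into its parts:

import re

setvar_re = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
m = setvar_re.match("SRC_URI_append_qemuarm")
print(m.group("base"), m.group("keyword"), m.group("add"))
# SRC_URI _append qemuarm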
@@ -108,7 +108,7 @@ class VariableParse:
varparse = self.d.expand_cache[key]
var = varparse.value
else:
var = self.d.getVarFlag(key, "_content")
var = self.d.getVarFlag(key, "_content", True)
self.references.add(key)
if var is not None:
return var
@@ -116,21 +116,13 @@ class VariableParse:
return match.group()
def python_sub(self, match):
if isinstance(match, str):
code = match
else:
code = match.group()[3:-1]
if "_remote_data" in self.d:
connector = self.d["_remote_data"]
return connector.expandPythonRef(self.varname, code, self.d)
code = match.group()[3:-1]
codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")
parser = bb.codeparser.PythonParser(self.varname, logger)
parser.parse_python(code)
if self.varname:
vardeps = self.d.getVarFlag(self.varname, "vardeps")
vardeps = self.d.getVarFlag(self.varname, "vardeps", True)
if vardeps is None:
parser.log.flush()
else:
@@ -143,7 +135,7 @@ class VariableParse:
self.contains[k] = parser.contains[k].copy()
else:
self.contains[k].update(parser.contains[k])
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
value = utils.better_eval(codeobj, DataContext(self.d))
return str(value)
@@ -154,7 +146,7 @@ class DataContext(dict):
self['d'] = metadata
def __missing__(self, key):
value = self.metadata.getVar(key)
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func', False):
raise KeyError(key)
else:
@@ -230,19 +222,6 @@ class VariableHistory(object):
new.variables = self.variables.copy()
return new
def __getstate__(self):
vardict = {}
for k, v in self.variables.iteritems():
vardict[k] = v
return {'dataroot': self.dataroot,
'variables': vardict}
def __setstate__(self, state):
self.dataroot = state['dataroot']
self.variables = COWDictBase.copy()
for k, v in state['variables'].items():
self.variables[k] = v
def record(self, *kwonly, **loginfo):
if not self.dataroot._tracking:
return
@@ -268,15 +247,10 @@ class VariableHistory(object):
self.variables[var].append(loginfo.copy())
def variable(self, var):
remote_connector = self.dataroot.getVar('_remote_data', False)
if remote_connector:
varhistory = remote_connector.getVarHistory(var)
else:
varhistory = []
if var in self.variables:
varhistory.extend(self.variables[var])
return varhistory
return self.variables[var]
else:
return []
def emit(self, var, oval, val, o, d):
history = self.variable(var)
@@ -344,7 +318,7 @@ class VariableHistory(object):
the files in which they were added.
"""
history = self.variable(var)
finalitems = (d.getVar(var) or '').split()
finalitems = (d.getVar(var, True) or '').split()
filemap = {}
isset = False
for event in history:
@@ -398,7 +372,7 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
if not isinstance(s, basestring): # sanity check
return VariableParse(varname, self, s)
if varname and varname in self.expand_cache:
@@ -423,7 +397,8 @@ class DataSmart(MutableMapping):
except bb.parse.SkipRecipe:
raise
except Exception as exc:
raise ExpansionError(varname, s, exc) from exc
exc_class, exc, tb = sys.exc_info()
raise ExpansionError, ExpansionError(varname, s, exc), tb
varparse.value = s
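A hedged sketch contrasting the Python 3 exception chaining in the newer line above with what the removed three-argument raise achieved in Python 2 (propagating the original traceback):

class ExpansionErrorSketch(Exception):
    def __init__(self, varname, expression, exc):
        self.varname, self.expression, self.exc = varname, expression, exc
        super().__init__("Failure expanding %s (%s): %s" % (varname, expression, exc))

def expand_sketch(varname, expression):
    try:
        return eval(expression, {}, {})  # toy stand-in for ${@...} expansion
    except Exception as exc:
        # "from exc" chains the original traceback onto the new error
        raise ExpansionErrorSketch(varname, expression, exc) from exc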
@@ -452,11 +427,11 @@ class DataSmart(MutableMapping):
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
self.overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
newoverrides = (self.getVar("OVERRIDES") or "").split(":") or []
newoverrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
if newoverrides == self.overrides:
break
self.overrides = newoverrides
@@ -473,22 +448,17 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var], self.overridedata.get(var, None)
if "_remote_data" in dest:
connector = dest["_remote_data"]["_content"]
return connector.getVar(var)
return dest[var]
if "_data" not in dest:
break
dest = dest["_data"]
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
@@ -502,12 +472,6 @@ class DataSmart(MutableMapping):
if 'parsing' in loginfo:
parsing=True
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVar(var, value)
if not res:
return
if 'op' not in loginfo:
loginfo['op'] = "set"
self.expand_cache = {}
@@ -546,8 +510,6 @@ class DataSmart(MutableMapping):
del self.dict[var]["_append"]
if "_prepend" in self.dict[var]:
del self.dict[var]["_prepend"]
if "_remove" in self.dict[var]:
del self.dict[var]["_remove"]
if var in self.overridedata:
active = []
self.need_overrides()
@@ -580,7 +542,7 @@ class DataSmart(MutableMapping):
nextnew = set()
self.overridevars.update(new)
for i in new:
vardata = self.expandWithRefs(self.getVar(i), i)
vardata = self.expandWithRefs(self.getVar(i, True), i)
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
@@ -604,19 +566,13 @@ class DataSmart(MutableMapping):
if len(shortvar) == 0:
override = None
def getVar(self, var, expand=True, noweakdefault=False, parsing=False):
def getVar(self, var, expand, noweakdefault=False, parsing=False):
return self.getVarFlag(var, "_content", expand, noweakdefault, parsing)
def renameVar(self, key, newkey, **loginfo):
"""
Rename the variable key to newkey
"""
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.renameVar(key, newkey)
if not res:
return
val = self.getVar(key, 0, parsing=True)
if val is not None:
loginfo['variable'] = newkey
@@ -660,12 +616,6 @@ class DataSmart(MutableMapping):
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVar(var)
if not res:
return
loginfo['detail'] = ""
loginfo['op'] = 'del'
self.varhistory.record(**loginfo)
@@ -692,12 +642,6 @@ class DataSmart(MutableMapping):
override = None
def setVarFlag(self, var, flag, value, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVarFlag(var, flag, value)
if not res:
return
self.expand_cache = {}
if 'op' not in loginfo:
loginfo['op'] = "set"
@@ -719,14 +663,14 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"] = set()
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False):
local_var, overridedata = self._findVar(var)
def getVarFlag(self, var, flag, expand, noweakdefault=False, parsing=False):
local_var = self._findVar(var)
value = None
if flag == "_content" and overridedata is not None and not parsing:
if flag == "_content" and var in self.overridedata and not parsing:
match = False
active = {}
self.need_overrides()
for (r, o) in overridedata:
for (r, o) in self.overridedata[var]:
# What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
@@ -805,25 +749,18 @@ class DataSmart(MutableMapping):
if match:
removes.extend(self.expand(r).split())
if removes:
filtered = filter(lambda v: v not in removes,
value.split())
value = " ".join(filtered)
if expand and var in self.expand_cache:
# We need to ensure the expand cache has the correct value
# flag == "_content" here
self.expand_cache[var].value = value
filtered = filter(lambda v: v not in removes,
value.split())
value = " ".join(filtered)
if expand and var in self.expand_cache:
# We need to ensure the expand cache has the correct value
# flag == "_content" here
self.expand_cache[var].value = value
return value
def delVarFlag(self, var, flag, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVarFlag(var, flag)
if not res:
return
self.expand_cache = {}
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -866,7 +803,7 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
flags = {}
if local_var:
@@ -908,7 +845,7 @@ class DataSmart(MutableMapping):
data = DataSmart()
data.dict["_data"] = self.dict
data.varhistory = self.varhistory.copy()
data.varhistory.dataroot = data
data.varhistory.datasmart = data
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
@@ -939,7 +876,7 @@ class DataSmart(MutableMapping):
def localkeys(self):
for key in self.dict:
if key not in ['_data', '_remote_data']:
if key != '_data':
yield key
def __iter__(self):
@@ -948,7 +885,7 @@ class DataSmart(MutableMapping):
def keylist(d):
klist = set()
for key in d:
if key in ["_data", "_remote_data"]:
if key == "_data":
continue
if key in deleted:
continue
@@ -962,13 +899,6 @@ class DataSmart(MutableMapping):
if "_data" in d:
klist |= keylist(d["_data"])
if "_remote_data" in d:
connector = d["_remote_data"]["_content"]
for key in connector.getKeys():
if key in deleted:
continue
klist.add(key)
return klist
self.need_overrides()
@@ -987,7 +917,7 @@ class DataSmart(MutableMapping):
yield k
def __len__(self):
return len(frozenset(iter(self)))
return len(frozenset(self))
def __getitem__(self, item):
value = self.getVar(item, False)
@@ -1006,8 +936,9 @@ class DataSmart(MutableMapping):
data = {}
d = self.createCopy()
bb.data.expandKeys(d)
bb.data.update_data(d)
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_whitelist:
@@ -1026,6 +957,7 @@ class DataSmart(MutableMapping):
for key in ["__BBTASKS", "__BBANONFUNCS", "__BBHANDLERS"]:
bb_list = d.getVar(key, False) or []
bb_list.sort()
data.update({key:str(bb_list)})
if key == "__BBANONFUNCS":
@@ -1034,4 +966,4 @@ class DataSmart(MutableMapping):
data.update({i:value})
data_str = str([(k, data[k]) for k in sorted(data.keys())])
return hashlib.md5(data_str.encode("utf-8")).hexdigest()
return hashlib.md5(data_str).hexdigest()
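A hedged reminder of why the newer line encodes before hashing: Python 3's hashlib only accepts bytes, so the stringified data must be encoded first.

import hashlib

data_str = str([("BB_VERSION", "1.x")])
print(hashlib.md5(data_str.encode("utf-8")).hexdigest())  # bytes in, hex digest out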

View File

@@ -24,13 +24,14 @@ BitBake build tools.
import os, sys
import warnings
import pickle
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import atexit
import traceback
import ast
import threading
import bb.utils
import bb.compat
import bb.exceptions
@@ -48,16 +49,6 @@ class Event(object):
def __init__(self):
self.pid = worker_pid
class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
Registered = 10
AlreadyRegistered = 14
@@ -80,27 +71,12 @@ _event_handler_map = {}
_catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
if 'd' not in __builtins__:
__builtins__['d'] = d
addedd = True
try:
ret = handler(event)
@@ -118,7 +94,7 @@ def execute_handler(name, handler, event, d):
finally:
del event.data
if addedd:
del builtins['d']
del __builtins__['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -126,7 +102,7 @@ def fire_class_handlers(event, d):
eid = str(event.__class__)[8:-2]
evt_hmap = _event_handler_map.get(eid, {})
for name, handler in list(_handlers.items()):
for name, handler in _handlers.iteritems():
if name in _catchall_handlers or name in evt_hmap:
if _eventfilter:
if not _eventfilter(name, handler, event, d):
@@ -141,9 +117,6 @@ def print_ui_queue():
logger = logging.getLogger("BitBake")
if not _uiready:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
sys.stdout.flush()
sys.stderr.flush()
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -152,47 +125,30 @@ def print_ui_queue():
# First check to see if we have any proper messages
msgprint = False
msgerrs = False
# Should we print to stderr?
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord) and event.levelno >= logging.WARNING:
msgerrs = True
break
if msgerrs:
logger.addHandler(stderr)
else:
logger.addHandler(stdout)
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
if event.levelno >= logging.WARNING:
logger.addHandler(stderr)
else:
logger.addHandler(stdout)
logger.handle(event)
msgprint = True
if msgprint:
return
# Nope, so just print all of the messages we have (including debug messages)
if not msgprint:
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord):
logger.handle(event)
if msgerrs:
logger.removeHandler(stderr)
else:
logger.removeHandler(stdout)
logger.addHandler(stdout)
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord):
logger.handle(event)
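A hedged sketch of the replay that print_ui_queue performs above, mirroring the variant that picks stderr or stdout once for the whole queue (warnings and above force stderr):

import logging
import sys

ui_queue = []  # LogRecords buffered while no UI handler is attached

def replay_queue_sketch():
    logger = logging.getLogger("BitBake.sketch")
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    msgerrs = any(isinstance(e, logging.LogRecord) and e.levelno >= logging.WARNING
                  for e in ui_queue)
    handler = logging.StreamHandler(sys.stderr if msgerrs else sys.stdout)
    logger.addHandler(handler)
    try:
        for event in ui_queue:
            if isinstance(event, logging.LogRecord):
                logger.handle(event)
    finally:
        logger.removeHandler(handler)

ui_queue.append(logging.LogRecord("BitBake.sketch", logging.WARNING,
                                  __file__, 1, "queued warning", None, None))
replay_queue_sketch()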
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -211,9 +167,6 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -226,12 +179,6 @@ def fire(event, d):
if worker_fire:
worker_fire(event, d)
else:
# If messages have been queued up, clear the queue
global _uiready, ui_queue
if _uiready and ui_queue:
for queue_event in ui_queue:
fire_ui_handlers(queue_event, d)
ui_queue = []
fire_ui_handlers(event, d)
def fire_from_worker(event, d):
@@ -247,7 +194,7 @@ def register(name, handler, mask=None, filename=None, lineno=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
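A hedged sketch of how a string handler like the one compiled above becomes a callable; bb.methodpool adds caching, but plain compile/exec shows the shape:

name = "sketch_handler"
body = "    print('saw %s' % e)\n"       # handler body as it might appear in metadata
tmp = "def %s(e):\n%s" % (name, body)
code = compile(tmp, "<event-handler>", "exec")
env = {}
exec(code, env)
env[name]("BuildStarted")                # prints: saw BuildStarted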
@@ -284,46 +231,26 @@ def register(name, handler, mask=None, filename=None, lineno=None):
def remove(name, handler):
"""Remove an Event handler"""
_handlers.pop(name)
if name in _catchall_handlers:
_catchall_handlers.pop(name)
for event in _event_handler_map.keys():
if name in _event_handler_map[event]:
_event_handler_map[event].pop(name)
def get_handlers():
return _handlers
def set_handlers(handlers):
global _handlers
_handlers = handlers
def set_eventfilter(func):
global _eventfilter
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
if mainui:
global _uiready
_uiready = True
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
def unregister_UIHhandler(handlerNum):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
if _uiready is False:
return None
return _uiready
# Class to allow filtering of events and specific filtering of LogRecords *before* we put them over the IPC
class UIEventFilter(object):
def __init__(self, level, debug_domains):
@@ -386,30 +313,13 @@ class OperationProgress(Event):
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
class MultiConfigParsed(Event):
"""Multi-Config Parsing Complete"""
def __init__(self, mcdata):
self.mcdata = mcdata
Event.__init__(self)
class RecipeEvent(Event):
def __init__(self, fn):
self.fn = fn
Event.__init__(self)
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
Recipe Tasks about to be finalised
The list of tasks should be final at this point and handlers
are only able to change interdependencies
"""
def __init__(self, fn, tasklist):
self.fn = fn
self.tasklist = tasklist
Event.__init__(self)
""" Recipe Parsing Complete but not yet finalised"""
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
@@ -432,7 +342,7 @@ class StampUpdate(Event):
targets = property(getTargets)
class BuildBase(Event):
"""Base class for bitbake build events"""
"""Base class for bbmake run events"""
def __init__(self, n, p, failures = 0):
self._name = n
@@ -452,6 +362,12 @@ class BuildBase(Event):
def setName(self, name):
self._name = name
def getCfg(self):
return self.data
def setCfg(self, cfg):
self.data = cfg
def getFailures(self):
"""
Return the number of failed packages
@@ -460,21 +376,20 @@ class BuildBase(Event):
pkgs = property(getPkgs, setPkgs, None, "pkgs property")
name = property(getName, setName, None, "name property")
cfg = property(getCfg, setCfg, None, "cfg property")
class BuildInit(BuildBase):
"""buildFile or buildTargets was invoked"""
def __init__(self, p=[]):
name = None
BuildBase.__init__(self, name, p)
class BuildStarted(BuildBase, OperationStarted):
"""Event when builds start"""
"""bbmake build run started"""
def __init__(self, n, p, failures = 0):
OperationStarted.__init__(self, "Building Started")
BuildBase.__init__(self, n, p, failures)
class BuildCompleted(BuildBase, OperationCompleted):
"""Event when builds have completed"""
"""bbmake build run completed"""
def __init__(self, total, n, p, failures=0, interrupted=0):
if not failures:
OperationCompleted.__init__(self, total, "Building Succeeded")
@@ -492,23 +407,6 @@ class DiskFull(Event):
self._free = freespace
self._mountpoint = mountpoint
class DiskUsageSample:
def __init__(self, available_bytes, free_bytes, total_bytes):
# Number of bytes available to non-root processes.
self.available_bytes = available_bytes
# Number of bytes available to root processes.
self.free_bytes = free_bytes
# Total capacity of the volume.
self.total_bytes = total_bytes
class MonitorDiskEvent(Event):
"""If BB_DISKMON_DIRS is set, then this event gets triggered each time disk space is checked.
Provides information about devices that are getting monitored."""
def __init__(self, disk_usage):
Event.__init__(self)
# hash of device root path -> DiskUsageSample
self.disk_usage = disk_usage
class NoProvider(Event):
"""No Provider for an Event"""
@@ -526,28 +424,6 @@ class NoProvider(Event):
def isRuntime(self):
return self._runtime
def __str__(self):
msg = ''
if self._runtime:
r = "R"
else:
r = ""
extra = ''
if not self._reasons:
if self._close_matches:
extra = ". Close matches:\n %s" % '\n '.join(self._close_matches)
if self._dependees:
msg = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, self._item, ", ".join(self._dependees), r, extra)
else:
msg = "Nothing %sPROVIDES '%s'%s" % (r, self._item, extra)
if self._reasons:
for reason in self._reasons:
msg += '\n' + reason
return msg
class MultipleProviders(Event):
"""Multiple Providers"""
@@ -575,16 +451,6 @@ class MultipleProviders(Event):
"""
return self._candidates
def __str__(self):
msg = "Multiple providers are available for %s%s (%s)" % (self._is_runtime and "runtime " or "",
self._item,
", ".join(self._candidates))
rtime = ""
if self._is_runtime:
rtime = "R"
msg += "\nConsider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, self._item)
return msg
class ParseStarted(OperationStarted):
"""Recipe parsing for the runqueue has begun"""
def __init__(self, total):
@@ -678,6 +544,14 @@ class FilesMatchingFound(Event):
self._pattern = pattern
self._matches = matches
class CoreBaseFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, paths):
Event.__init__(self)
self._paths = paths
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
@@ -738,9 +612,8 @@ class LogHandler(logging.Handler):
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.bb_exc_info = (etype, value, tb)
record.exc_info = None
fire(record, None)
@@ -748,6 +621,19 @@ class LogHandler(logging.Handler):
record.taskpid = worker_pid
return True
class RequestPackageInfo(Event):
"""
Event to request package information
"""
class PackageInfo(Event):
"""
Package information for GUI
"""
def __init__(self, pkginfolist):
Event.__init__(self)
self._pkginfolist = pkginfolist
class MetadataEvent(Event):
"""
Generic event targeted at OE-Core classes
@@ -758,33 +644,6 @@ class MetadataEvent(Event):
self.type = eventtype
self._localdata = eventdata
class ProcessStarted(Event):
"""
Generic process started event (usually part of the initial startup)
where further progress events will be delivered
"""
def __init__(self, processname, total):
Event.__init__(self)
self.processname = processname
self.total = total
class ProcessProgress(Event):
"""
Generic process progress event (usually part of the initial startup)
"""
def __init__(self, processname, progress):
Event.__init__(self)
self.processname = processname
self.progress = progress
class ProcessFinished(Event):
"""
Generic process finished event (usually part of the initial startup)
"""
def __init__(self, processname):
Event.__init__(self)
self.processname = processname
class SanityCheck(Event):
"""
Event to run sanity checks, either raise errors or generate events as return status.
@@ -825,10 +684,3 @@ class NetworkTestFailed(Event):
Event to indicate network test has failed
"""
class FindSigInfoResult(Event):
"""
Event to return results from findSigInfo command
"""
def __init__(self, result):
Event.__init__(self)
self.result = result

View File

@@ -1,4 +1,4 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
@@ -86,6 +86,6 @@ def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, str):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)

File diff suppressed because it is too large

View File

@@ -27,6 +27,7 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
@@ -41,16 +42,15 @@ class Bzr(FetchMethod):
init bzr-specific variables within url data
"""
# Create paths to bzr checkouts
bzrdir = d.getVar("BZRDIR") or (d.getVar("DL_DIR") + "/bzr")
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(bzrdir, ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
ud.setup_revisions(d)
ud.setup_revisons(d)
if not ud.revision:
ud.revision = self.latest_revision(ud, d)
ud.localfile = d.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildbzrcommand(self, ud, d, command):
"""
@@ -58,7 +58,7 @@ class Bzr(FetchMethod):
command is "fetch", "update", "revno"
"""
basecmd = d.getVar("FETCHCMD_bzr") or "/usr/bin/env bzr"
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('protocol', 'http')
@@ -88,25 +88,28 @@ class Bzr(FetchMethod):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.bzr' --exclude='.bzrtags'"
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)),
d, cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
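Throughout these fetcher diffs the newer code replaces process-wide os.chdir() calls with a workdir= argument to runfetchcmd. A hedged sketch of the idea, with subprocess's cwd= as a stand-in:

import subprocess

def runfetchcmd_sketch(cmd, workdir=None):
    # cwd= scopes the directory to this one command instead of mutating the
    # whole process, which is safer once multiple tasks share the process
    return subprocess.check_output(cmd, shell=True, cwd=workdir, text=True)

print(runfetchcmd_sketch("pwd", workdir="/tmp").strip())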

View File

@@ -65,10 +65,12 @@ import os
import sys
import shutil
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from distutils import spawn
class ClearCase(FetchMethod):
"""Class to fetch urls via 'clearcase'"""
@@ -106,13 +108,13 @@ class ClearCase(FetchMethod):
else:
ud.module = ""
ud.basecmd = d.getVar("FETCHCMD_ccrc") or "/usr/bin/env cleartool || rcleartool"
ud.basecmd = d.getVar("FETCHCMD_ccrc", True) or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
if d.getVar("SRCREV") == "INVALID":
if data.getVar("SRCREV", d, True) == "INVALID":
raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")
ud.label = d.getVar("SRCREV", False)
ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC")
ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC", True)
ud.server = "%s://%s%s" % (ud.proto, ud.host, ud.path)
@@ -122,7 +124,7 @@ class ClearCase(FetchMethod):
ud.viewname = "%s-view%s" % (ud.identifier, d.getVar("DATETIME", d, True))
ud.csname = "%s-config-spec" % (ud.identifier)
ud.ccasedir = os.path.join(d.getVar("DL_DIR"), ud.type)
ud.ccasedir = os.path.join(data.getVar("DL_DIR", d, True), ud.type)
ud.viewdir = os.path.join(ud.ccasedir, ud.viewname)
ud.configspecfile = os.path.join(ud.ccasedir, ud.csname)
ud.localfile = "%s.tar.gz" % (ud.identifier)
@@ -142,7 +144,7 @@ class ClearCase(FetchMethod):
self.debug("configspecfile = %s" % ud.configspecfile)
self.debug("localfile = %s" % ud.localfile)
ud.localfile = os.path.join(d.getVar("DL_DIR"), ud.localfile)
ud.localfile = os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _build_ccase_command(self, ud, command):
"""
@@ -200,10 +202,11 @@ class ClearCase(FetchMethod):
def _remove_view(self, ud, d):
if os.path.exists(ud.viewdir):
os.chdir(ud.ccasedir)
cmd = self._build_ccase_command(ud, 'rmview');
logger.info("cleaning up [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.ccasedir)
output = runfetchcmd(cmd, d)
logger.info("rmview output: %s", output)
def need_update(self, ud, d):
@@ -238,10 +241,11 @@ class ClearCase(FetchMethod):
raise e
# Set configspec: Setting the configspec effectively fetches the files as defined in the configspec
os.chdir(ud.viewdir)
cmd = self._build_ccase_command(ud, 'setcs');
logger.info("fetching data [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.viewdir)
output = runfetchcmd(cmd, d)
logger.info("%s", output)
# Copy the configspec to the viewdir so we have it in our source tarball later

View File

@@ -63,7 +63,7 @@ class Cvs(FetchMethod):
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = d.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath))
ud.localfile = bb.data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, ud, d):
if (ud.date == "now"):
@@ -87,10 +87,10 @@ class Cvs(FetchMethod):
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = d.getVar('CVS_PROXY_HOST')
cvsproxyhost = d.getVar('CVS_PROXY_HOST', True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = d.getVar('CVS_PROXY_PORT')
cvsproxyport = d.getVar('CVS_PROXY_PORT', True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
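A hedged sketch of the CVSROOT string assembled above, with toy values; the hunk is cut off after the user component, so the host/path tail here is an assumption:

method, user, host, path = "pserver", "anonymous", "cvs.example.org", "/cvsroot"
cvsproxyhost, cvsproxyport = "proxy.example.org", "3128"

cvsroot = ":" + method
if cvsproxyhost:
    cvsroot += ";proxy=" + cvsproxyhost
if cvsproxyport:
    cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + user
cvsroot += "@" + host + ":" + path  # assumed tail, not shown in the hunk
print(cvsroot)
# :pserver;proxy=proxy.example.org;proxyport=3128:anonymous@cvs.example.org:/cvsroot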
@@ -110,7 +110,7 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
cvsbasecmd = d.getVar("FETCHCMD_cvs") or "/usr/bin/env cvs"
cvsbasecmd = d.getVar("FETCHCMD_cvs", True)
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
@@ -120,27 +120,25 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN')
cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs")
pkgdir = os.path.join(cvsdir, pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + ud.url)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
workdir = moddir
os.chdir(moddir)
cmd = cvsupdatecmd
else:
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(pkgdir)
workdir = pkgdir
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup=[moddir], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [moddir])
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite successful fetch?!" % moddir, ud.url)
@@ -149,24 +147,24 @@ class Cvs(FetchMethod):
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='CVS'"
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
workdir = None
if 'fullpath' in ud.parm:
workdir = pkgdir
os.chdir(pkgdir)
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
workdir = os.path.dirname(os.path.realpath(moddir))
os.chdir(moddir)
os.chdir('..')
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup=[ud.localpath], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar("CVSDIR"), pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar("CVSDIR", True), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)

View File

@@ -49,10 +49,6 @@ Supported SRC_URI options are:
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
"""
#Copyright (C) 2005 Richard Purdie
@@ -70,64 +66,17 @@ Supported SRC_URI options are:
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import collections
import errno
import fnmatch
import os
import re
import subprocess
import tempfile
import bb
import bb.progress
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
self._buffer = ''
self._count = 0
super(GitProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(-1)
def write(self, string):
self._buffer += string
stages = ['Counting objects', 'Compressing objects', 'Receiving objects', 'Resolving deltas']
stage_weights = [0.2, 0.05, 0.5, 0.25]
stagenum = 0
for i, stage in reversed(list(enumerate(stages))):
if stage in self._buffer:
stagenum = i
self._buffer = ''
break
self._status = stages[stagenum]
percs = re.findall(r'(\d+)%', string)
if percs:
progress = int(round((int(percs[-1]) * stage_weights[stagenum]) + (sum(stage_weights[:stagenum]) * 100)))
rates = re.findall(r'([\d.]+ [a-zA-Z]*/s+)', string)
if rates:
rate = rates[-1]
else:
rate = None
self.update(progress, rate)
else:
if stagenum == 0:
percs = re.findall(r': (\d+)', string)
if percs:
count = int(percs[-1])
if count > self._count:
self._count = count
self._fire_progress(-count)
super(GitProgressHandler, self).write(string)
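A hedged sketch of the scraping GitProgressHandler does above, run against a typical git progress line:

import re

line = "Receiving objects:  42% (123/291), 1.10 MiB | 2.00 MiB/s"
percs = re.findall(r'(\d+)%', line)
rates = re.findall(r'([\d.]+ [a-zA-Z]*/s+)', line)
print(percs[-1], rates[-1] if rates else None)  # 42 2.00 MiB/s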
class Git(FetchMethod):
bitbake_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..'))
make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
pass
@@ -162,13 +111,6 @@ class Git(FetchMethod):
ud.nobranch = ud.parm.get("nobranch","0") == "1"
# usehead implies nobranch
ud.usehead = ud.parm.get("usehead","0") == "1"
if ud.usehead:
if ud.proto != "file":
raise bb.fetch2.ParameterError("The usehead option is only for use with local ('protocol=file') git repositories", ud.url)
ud.nobranch = 1
# bareclone implies nocheckout
ud.bareclone = ud.parm.get("bareclone","0") == "1"
if ud.bareclone:
@@ -178,68 +120,17 @@ class Git(FetchMethod):
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.cloneflags = "-s -n"
if ud.bareclone:
ud.cloneflags += " --mirror"
ud.shallow = d.getVar("BB_GIT_SHALLOW") == "1"
ud.shallow_extra_refs = (d.getVar("BB_GIT_SHALLOW_EXTRA_REFS") or "").split()
depth_default = d.getVar("BB_GIT_SHALLOW_DEPTH")
if depth_default is not None:
try:
depth_default = int(depth_default or 0)
except ValueError:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
else:
if depth_default < 0:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
else:
depth_default = 1
ud.shallow_depths = collections.defaultdict(lambda: depth_default)
revs_default = d.getVar("BB_GIT_SHALLOW_REVS", True)
ud.shallow_revs = []
ud.branches = {}
for pos, name in enumerate(ud.names):
branch = branches[pos]
for name in ud.names:
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
ud.unresolvedrev[name] = branch
shallow_depth = d.getVar("BB_GIT_SHALLOW_DEPTH_%s" % name)
if shallow_depth is not None:
try:
shallow_depth = int(shallow_depth or 0)
except ValueError:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
else:
if shallow_depth < 0:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
ud.shallow_depths[name] = shallow_depth
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git -c core.fsyncobjectfiles=0"
revs = d.getVar("BB_GIT_SHALLOW_REVS_%s" % name)
if revs is not None:
ud.shallow_revs.extend(revs.split())
elif revs_default is not None:
ud.shallow_revs.extend(revs_default.split())
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
if (ud.shallow and
not ud.shallow_revs and
all(ud.shallow_depths[n] == 0 for n in ud.names)):
# Shallow disabled for this URL
ud.shallow = False
if ud.usehead:
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
ud.write_shallow_tarballs = (d.getVar("BB_GENERATE_SHALLOW_TARBALLS") or write_tarballs) != "0"
ud.setup_revisions(d)
ud.setup_revisons(d)
for name in ud.names:
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
@@ -259,53 +150,23 @@ class Git(FetchMethod):
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
dl_dir = d.getVar("DL_DIR")
gitdir = d.getVar("GITDIR") or (dl_dir + "/git2")
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
gitdir = d.getVar("GITDIR", True) or (d.getVar("DL_DIR", True) + "/git2/")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.localfile = ud.clonedir
mirrortarball = 'git2_%s.tar.gz' % gitsrcname
ud.fullmirror = os.path.join(dl_dir, mirrortarball)
ud.mirrortarballs = [mirrortarball]
if ud.shallow:
tarballname = gitsrcname
if ud.bareclone:
tarballname = "%s_bare" % tarballname
if ud.shallow_revs:
tarballname = "%s_%s" % (tarballname, "_".join(sorted(ud.shallow_revs)))
for name, revision in sorted(ud.revisions.items()):
tarballname = "%s_%s" % (tarballname, ud.revisions[name][:7])
depth = ud.shallow_depths[name]
if depth:
tarballname = "%s-%s" % (tarballname, depth)
shallow_refs = []
if not ud.nobranch:
shallow_refs.extend(ud.branches.values())
if ud.shallow_extra_refs:
shallow_refs.extend(r.replace('refs/heads/', '').replace('*', 'ALL') for r in ud.shallow_extra_refs)
if shallow_refs:
tarballname = "%s_%s" % (tarballname, "_".join(sorted(shallow_refs)).replace('/', '.'))
fetcher = self.__class__.__name__.lower()
ud.shallowtarball = '%sshallow_%s.tar.gz' % (fetcher, tarballname)
ud.fullshallow = os.path.join(dl_dir, ud.shallowtarball)
ud.mirrortarballs.insert(0, ud.shallowtarball)
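A hedged walk-through of the shallow tarball naming above, with toy inputs (one revision at depth 2 on master):

revisions = {"default": "0123456789abcdef0123456789abcdef01234567"}
depths = {"default": 2}
branches = {"default": "master"}

tarballname = "git.example.com.repo"  # toy gitsrcname
for name, revision in sorted(revisions.items()):
    tarballname = "%s_%s" % (tarballname, revision[:7])
    if depths[name]:
        tarballname = "%s-%s" % (tarballname, depths[name])
shallow_refs = sorted(branches.values())
tarballname = "%s_%s" % (tarballname, "_".join(shallow_refs).replace('/', '.'))
print("gitshallow_%s.tar.gz" % tarballname)
# gitshallow_git.example.com.repo_0123456-2_master.tar.gz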
def localpath(self, ud, d):
return ud.clonedir
def need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
return True
if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
return False
@@ -313,7 +174,7 @@ class Git(FetchMethod):
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.clonedir):
return False
@@ -322,15 +183,11 @@ class Git(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
# A current clone is preferred to either tarball, a shallow tarball is
# preferred to an out of date clone, and a missing clone will use
# either tarball.
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
repourl = self._get_repo_url(ud)
@@ -339,128 +196,58 @@ class Git(FetchMethod):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, repourl, ud.clonedir)
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(clone_cmd, d, log=progresshandler)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
needupdate = True
break
if needupdate:
output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
if "origin" in output:
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
try:
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(fetch_cmd, d, log=progresshandler, workdir=ud.clonedir)
runfetchcmd("%s prune-packed" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s pack-refs --all" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
def build_mirror_data(self, ud, d):
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
os.unlink(ud.fullshallow)
tempdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
shallowclone = os.path.join(tempdir, 'git')
try:
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
elif ud.write_tarballs and not os.path.exists(ud.fullmirror):
# Generate a mirror tarball if needed
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
"""Clone the repo and make it shallow.
The upstream url of the new clone isn't set at this time, as it'll be
set correctly when unpacked."""
runfetchcmd("%s clone %s %s %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, dest), d)
to_parse, shallow_branches = [], []
for name in ud.names:
revision = ud.revisions[name]
depth = ud.shallow_depths[name]
if depth:
to_parse.append('%s~%d^{}' % (revision, depth - 1))
# For nobranch, we need a ref, otherwise the commits will be
# removed, and for non-nobranch, we truncate the branch to our
# srcrev, to avoid keeping unnecessary history beyond that.
branch = ud.branches[name]
if ud.nobranch:
ref = "refs/shallow/%s" % name
elif ud.bareclone:
ref = "refs/heads/%s" % branch
else:
ref = "refs/remotes/origin/%s" % branch
shallow_branches.append(ref)
runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
# Map srcrev+depths to revisions
parsed_depths = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join(to_parse)), d, workdir=dest)
# Resolve specified revisions
parsed_revs = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join('"%s^{}"' % r for r in ud.shallow_revs)), d, workdir=dest)
shallow_revisions = parsed_depths.splitlines() + parsed_revs.splitlines()
# Apply extra ref wildcards
all_refs = runfetchcmd('%s for-each-ref "--format=%%(refname)"' % ud.basecmd,
d, workdir=dest).splitlines()
for r in ud.shallow_extra_refs:
if not ud.bareclone:
r = r.replace('refs/heads/', 'refs/remotes/origin/')
if '*' in r:
matches = filter(lambda a: fnmatch.fnmatchcase(a, r), all_refs)
shallow_branches.extend(matches)
else:
shallow_branches.append(r)
# Make the repository shallow
shallow_cmd = [self.make_shallow_path, '-s']
for b in shallow_branches:
shallow_cmd.append('-r')
shallow_cmd.append(b)
shallow_cmd.extend(shallow_revisions)
runfetchcmd(subprocess.list2cmdline(shallow_cmd), d, workdir=dest)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
readpathspec = ":%s" % (subdir)
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
@@ -471,27 +258,26 @@ class Git(FetchMethod):
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
if ud.shallow and self.need_update(ud, d):
bb.utils.mkdirhier(destdir)
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
else:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
cloneflags = "-s -n"
if ud.bareclone:
cloneflags += " --mirror"
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
os.chdir(destdir)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d)
if not ud.nocheckout:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
elif not ud.nobranch:
branchname = ud.branches[ud.names[0]]
runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
ud.revisions[ud.names[0]]), d, workdir=destdir)
ud.revisions[ud.names[0]]), d)
runfetchcmd("%s branch %s --set-upstream-to origin/%s" % (ud.basecmd, branchname, \
branchname), d, workdir=destdir)
branchname), d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=destdir)
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
return True
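A hedged sketch of the checkout-and-track step above with toy names; note the "branch --set-upstream-to" spelling, since newer git removed the old "--set-upstream" form. Illustrative only: it assumes an existing clone with an origin/<branch> tracking ref.

import subprocess

def checkout_and_track_sketch(repo, branchname, revision):
    subprocess.check_call(["git", "-C", repo, "checkout", "-B", branchname, revision])
    subprocess.check_call(["git", "-C", repo, "branch", branchname,
                           "--set-upstream-to", "origin/%s" % branchname])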
@@ -505,7 +291,7 @@ class Git(FetchMethod):
def supports_srcrev(self):
return True
def _contains_ref(self, ud, d, name, wd):
def _contains_ref(self, ud, d, name):
cmd = ""
if ud.nobranch:
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (
@@ -514,7 +300,7 @@ class Git(FetchMethod):
cmd = "%s branch --contains %s --list %s 2> /dev/null | wc -l" % (
ud.basecmd, ud.revisions[name], ud.branches[name])
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
output = runfetchcmd(cmd, d, quiet=True)
except bb.fetch2.FetchError:
return False
if len(output.split()) > 1:
@@ -541,26 +327,14 @@ class Git(FetchMethod):
"""
Run git ls-remote with the specified search string
"""
# Prevent recursion e.g. in OE if SRCPV is in PV, PV is in WORKDIR,
# and WORKDIR is in PATH (as a result of RSS), our call to
# runfetchcmd() exports PATH so this function will get called again (!)
# In this scenario the return call of the function isn't actually
# important - WORKDIR isn't needed in PATH to call git ls-remote
# anyway.
if d.getVar('_BB_GIT_IN_LSREMOTE', False):
return ''
d.setVar('_BB_GIT_IN_LSREMOTE', '1')
try:
repourl = self._get_repo_url(ud)
cmd = "%s ls-remote %s %s" % \
(ud.basecmd, repourl, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd, repourl)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, ud.url)
finally:
d.delVar('_BB_GIT_IN_LSREMOTE')
repourl = self._get_repo_url(ud)
cmd = "%s ls-remote %s %s" % \
(ud.basecmd, repourl, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, ud.url)
return output
def _latest_revision(self, ud, d, name):
@@ -569,17 +343,16 @@ class Git(FetchMethod):
"""
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fall back to the other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
if ud.unresolvedrev[name][:5] == "refs/":
head = ud.unresolvedrev[name]
tag = ud.unresolvedrev[name]
else:
head = "refs/heads/%s" % ud.unresolvedrev[name]
tag = "refs/tags/%s" % ud.unresolvedrev[name]
for s in [head, tag + "^{}", tag]:
for l in output.strip().split('\n'):
sha1, ref = l.split()
if s == ref:
return sha1
for l in output.split('\n'):
if s in l:
return l.split()[0]
raise bb.fetch2.FetchError("Unable to resolve '%s' in upstream git repository in git ls-remote output for %s" % \
(ud.unresolvedrev[name], ud.host+ud.path))
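A hedged sketch of the exact-ref resolution in the newer lines above, fed canned ls-remote output; the peeled form (tag^{}) is tried before the plain tag so annotated tags resolve to the underlying commit:

def resolve_sketch(output, unresolved):
    head = "refs/heads/%s" % unresolved
    tag = "refs/tags/%s" % unresolved
    for s in [head, tag + "^{}", tag]:
        for l in output.strip().split('\n'):
            sha1, ref = l.split()
            if s == ref:
                return sha1
    raise ValueError("unable to resolve %s" % unresolved)

output = ("1111111111111111111111111111111111111111\trefs/heads/master\n"
          "2222222222222222222222222222222222222222\trefs/tags/v1.0\n"
          "3333333333333333333333333333333333333333\trefs/tags/v1.0^{}\n")
print(resolve_sketch(output, "v1.0"))  # 3333... (the peeled tag object)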
@@ -591,11 +364,10 @@ class Git(FetchMethod):
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P<pver>([0-9][\.|_]?)+)")
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX', True) or "(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
bb.note("Could not list remote: %s" % str(e))
except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
return pupver
verstring = ""
@@ -644,7 +416,7 @@ class Git(FetchMethod):
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
from pipes import quote
commits = bb.fetch2.runfetchcmd(
"git rev-list %s -- | wc -l" % quote(rev),
"git rev-list %s -- | wc -l" % (quote(rev)),
d, quiet=True).strip().lstrip('0')
if commits:
open(rev_file, "w").write("%d\n" % int(commits))
@@ -659,5 +431,5 @@ class Git(FetchMethod):
try:
self._lsremote(ud, d, "")
return True
except bb.fetch2.FetchError:
except FetchError:
return False

View File

@@ -22,6 +22,7 @@ BitBake 'Fetch' git annex implementation
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -33,59 +34,43 @@ class GitANNEX(Git):
"""
return ud.type in ['gitannex']
def urldata_init(self, ud, d):
super(GitANNEX, self).urldata_init(ud, d)
if ud.shallow:
ud.shallow_extra_refs += ['refs/heads/git-annex', 'refs/heads/synced/*']
def uses_annex(self, ud, d, wd):
def uses_annex(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True, workdir=wd)
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
return False
def update_annex(self, ud, d, wd):
def update_annex(self, ud, d):
try:
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True, workdir=wd)
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True)
except bb.fetch.FetchError:
return False
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True, workdir=wd)
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True)
return True
def download(self, ud, d):
Git.download(self, ud, d)
if not ud.shallow or ud.localpath != ud.fullshallow:
if self.uses_annex(ud, d, ud.clonedir):
self.update_annex(ud, d, ud.clonedir)
def clone_shallow_local(self, ud, dest, d):
super(GitANNEX, self).clone_shallow_local(ud, dest, d)
try:
runfetchcmd("%s annex init" % ud.basecmd, d, workdir=dest)
except bb.fetch.FetchError:
pass
if self.uses_annex(ud, d, dest):
runfetchcmd("%s annex get" % ud.basecmd, d, workdir=dest)
runfetchcmd("chmod u+w -R %s/.git/annex" % (dest), d, quiet=True, workdir=dest)
os.chdir(ud.clonedir)
annex = self.uses_annex(ud, d)
if annex:
self.update_annex(ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
try:
runfetchcmd("%s annex init" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("%s annex sync" % (ud.basecmd), d)
except bb.fetch.FetchError:
pass
annex = self.uses_annex(ud, d, ud.destdir)
annex = self.uses_annex(ud, d)
if annex:
runfetchcmd("%s annex get" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True, workdir=ud.destdir)
runfetchcmd("%s annex get" % (ud.basecmd), d)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True)
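The uses_annex() probe above works because git-annex keeps its metadata on a branch literally named git-annex; if git rev-list can resolve that branch, the repository is annex-enabled. A hedged standalone equivalent (the clone directory argument is an assumption):

import subprocess

def uses_annex(clonedir):
    # If 'git rev-list git-annex' succeeds, a git-annex branch exists
    # and the repository uses the annex.
    try:
        subprocess.check_output(
            ['git', 'rev-list', '--max-count=1', 'git-annex'],
            cwd=clonedir, stderr=subprocess.DEVNULL)
        return True
    except subprocess.CalledProcessError:
        return False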

View File

@@ -31,6 +31,7 @@ NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your r
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -42,10 +43,10 @@ class GitSM(Git):
"""
return ud.type in ['gitsm']
def uses_submodules(self, ud, d, wd):
def uses_submodules(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd)
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
@@ -98,7 +99,7 @@ class GitSM(Git):
for line in lines:
f.write(line)
def update_submodules(self, ud, d, allow_network):
def update_submodules(self, ud, d):
# We have to convert bare -> full repo, do the submodule bit, then convert back
tmpclonedir = ud.clonedir + ".tmp"
gitdir = tmpclonedir + os.sep + ".git"
@@ -106,97 +107,28 @@ class GitSM(Git):
os.mkdir(tmpclonedir)
os.rename(ud.clonedir, gitdir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " checkout -f " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
try:
if allow_network:
fetch_flags = ""
else:
fetch_flags = "--no-fetch"
# The 'git submodule sync' sandwiched between two successive 'git submodule update' commands is
# intentional. See the notes on the similar construction in download() for an explanation.
runfetchcmd("%(basecmd)s submodule update --init --recursive %(fetch_flags)s || (%(basecmd)s submodule sync --recursive && %(basecmd)s submodule update --init --recursive %(fetch_flags)s)" % {'basecmd': ud.basecmd, 'fetch_flags' : fetch_flags}, d, workdir=tmpclonedir)
except bb.fetch.FetchError:
if allow_network:
raise
else:
# This method was called as a probe to see whether the submodule history
# is complete enough to allow the current working copy to have its
# modules filled in. It's not, so swallow up the exception and report
# the negative result.
return False
finally:
self._set_relative_paths(tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
return True
def need_update(self, ud, d):
main_repo_needs_update = Git.need_update(self, ud, d)
# First check that the main repository has enough history fetched. If it doesn't, then we don't
# even have the .gitmodules and gitlinks for the submodules to attempt asking whether the
# submodules' histories are recent enough.
if main_repo_needs_update:
return True
# Now check that the submodule histories are new enough. The git-submodule command doesn't have
# any clean interface for doing this aside from just attempting the checkout (with network
# fetching disabled).
return not self.update_submodules(ud, d, allow_network=False)
os.chdir(tmpclonedir)
runfetchcmd(ud.basecmd + " reset --hard", d)
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d)
self._set_relative_paths(tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
def download(self, ud, d):
Git.download(self, ud, d)
if not ud.shallow or ud.localpath != ud.fullshallow:
submodules = self.uses_submodules(ud, d, ud.clonedir)
if submodules:
self.update_submodules(ud, d, allow_network=True)
def clone_shallow_local(self, ud, dest, d):
super(GitSM, self).clone_shallow_local(ud, dest, d)
runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d)
os.chdir(ud.clonedir)
submodules = self.uses_submodules(ud, d)
if submodules:
self.update_submodules(ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
if self.uses_submodules(ud, d, ud.destdir):
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
# Copy over the submodules' fetched histories too.
if ud.bareclone:
repo_conf = ud.destdir
else:
repo_conf = os.path.join(ud.destdir, '.git')
if os.path.exists(ud.clonedir):
# This is not a copy unpacked from a shallow mirror clone. So
# the manual intervention to populate the .git/modules done
# in clone_shallow_local() won't have been done yet.
runfetchcmd("cp -fpPRH %s %s" % (os.path.join(ud.clonedir, 'modules'), repo_conf), d)
fetch_flags = "--no-fetch"
elif os.path.exists(os.path.join(repo_conf, 'modules')):
# Unpacked from a shallow mirror clone. Manual population of
# .git/modules is already done.
fetch_flags = "--no-fetch"
else:
# This isn't fatal; git-submodule will just fetch it
# during do_unpack().
fetch_flags = ""
bb.error("submodule history not retrieved during do_fetch()")
# Careful not to hit the network during unpacking; all history should already
# be fetched.
#
# The repeated attempts to do the submodule initialization sandwiched around a sync to
# install the correct remote URLs into the submodules' .git/config metadata are deliberate.
# Bad remote URLs are left over in the modules' .git/config files from the unpack of bare
# clone tarballs and an initial 'git submodule update' is necessary to prod them back to
# enough life so that the 'git submodule sync' realizes the existing module .git/config
# files exist to be updated.
runfetchcmd("%(basecmd)s submodule update --init --recursive %(fetch_flags)s || (%(basecmd)s submodule sync --recursive && %(basecmd)s submodule update --init --recursive %(fetch_flags)s)" % {'basecmd': ud.basecmd, 'fetch_flags': fetch_flags}, d, workdir=ud.destdir)
os.chdir(ud.destdir)
submodules = self.uses_submodules(ud, d)
if submodules:
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d)
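The 'update || (sync && update)' construction that appears twice in this file is deliberate, as the comments explain: stale submodule remote URLs in .git/config are only repaired by 'git submodule sync' after an initial update has re-created those config entries, so the update is attempted, synced, then attempted once more. A sketch of the same retry shape (paths and flags are illustrative):

import subprocess

def submodule_update(workdir, extra_flags=()):
    cmd = ['git', 'submodule', 'update', '--init', '--recursive'] + list(extra_flags)
    try:
        subprocess.check_call(cmd, cwd=workdir)
    except subprocess.CalledProcessError:
        # First attempt failed, likely due to stale submodule remote URLs;
        # sync the URLs from .gitmodules and try exactly once more.
        subprocess.check_call(['git', 'submodule', 'sync', '--recursive'],
                              cwd=workdir)
        subprocess.check_call(cmd, cwd=workdir)

# e.g. submodule_update('/path/to/clone', extra_flags=('--no-fetch',))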

View File

@@ -29,6 +29,7 @@ import sys
import logging
import bb
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -66,7 +67,7 @@ class Hg(FetchMethod):
else:
ud.proto = "hg"
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
@@ -76,17 +77,16 @@ class Hg(FetchMethod):
# Create paths to mercurial checkouts
hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
ud.host, ud.path.replace('/', '.'))
mirrortarball = 'hg_%s.tar.gz' % hgsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
ud.mirrortarballs = [mirrortarball]
ud.mirrortarball = 'hg_%s.tar.gz' % hgsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg")
hgdir = d.getVar("HGDIR", True) or (d.getVar("DL_DIR", True) + "/hg/")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.localfile = ud.moddir
ud.basecmd = d.getVar("FETCHCMD_hg") or "/usr/bin/env hg"
ud.basecmd = data.getVar("FETCHCMD_hg", d, True) or "/usr/bin/env hg"
ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS")
ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS", True)
def need_update(self, ud, d):
revTag = ud.parm.get('rev', 'tip')
@@ -99,7 +99,7 @@ class Hg(FetchMethod):
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.moddir):
return False
@@ -169,22 +169,25 @@ class Hg(FetchMethod):
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d, workdir=ud.pkgdir)
os.chdir(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether we need to pull
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
runfetchcmd(updatecmd, d)
except bb.fetch2.FetchError:
# Running pull in the repo
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
runfetchcmd(pullcmd, d)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
@@ -197,15 +200,17 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't check out the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
runfetchcmd(updatecmd, d)
def clean(self, ud, d):
""" Clean the hg dir """
@@ -221,7 +226,7 @@ class Hg(FetchMethod):
"""
Compute tip revision for the url
"""
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"), ud.url)
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
@@ -241,9 +246,10 @@ class Hg(FetchMethod):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.pkgdir)
logger.info("Creating tarball of hg repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d, workdir=ud.pkgdir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=ud.pkgdir)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
def localpath(self, ud, d):
return ud.pkgdir
@@ -263,8 +269,10 @@ class Hg(FetchMethod):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d, workdir=codir)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
os.chdir(codir)
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)
os.chdir(ud.moddir)
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d)
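As the comment in download() notes, 'hg clone' will not leave the working copy at a revision that lives on a named branch, which is why the fetcher always runs an explicit update after cloning. A minimal standalone illustration (URL, directory and revision are placeholders):

import subprocess

def hg_fetch(repo_url, destdir, revision=None):
    # Clone without touching the working copy ...
    subprocess.check_call(['hg', 'clone', '--noupdate', repo_url, destdir])
    # ... then update explicitly so a revision on a branch is honoured.
    cmd = ['hg', 'update', '-C']
    if revision:
        cmd += ['-r', revision]
    subprocess.check_call(cmd, cwd=destdir)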

View File

@@ -26,9 +26,10 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import urllib.request, urllib.parse, urllib.error
import urllib
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
@@ -41,7 +42,7 @@ class Local(FetchMethod):
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.decodedurl = urllib.parse.unquote(ud.url.split("://")[1].split(";")[0])
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.needdonestamp = False
@@ -62,11 +63,17 @@ class Local(FetchMethod):
newpath = path
if path[0] == "/":
return [path]
filespath = d.getVar('FILESPATH')
filespath = data.getVar('FILESPATH', d, True)
if filespath:
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
logger.debug(2, "Searching for %s in path: %s" % (path, filesdir))
newpath = os.path.join(filesdir, path)
searched.append(newpath)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath, hist = bb.utils.which(filespath, ".", history=True)
@@ -74,7 +81,7 @@ class Local(FetchMethod):
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return searched
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
dldirfile = os.path.join(d.getVar("DL_DIR", True), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
@@ -93,10 +100,13 @@ class Local(FetchMethod):
# no need to fetch local files, we'll deal with them in place.
if self.supports_checksum(urldata) and not os.path.exists(urldata.localpath):
locations = []
filespath = d.getVar('FILESPATH')
filespath = data.getVar('FILESPATH', d, True)
if filespath:
locations = filespath.split(":")
locations.append(d.getVar("DL_DIR"))
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
locations.append(filesdir)
locations.append(d.getVar("DL_DIR", True))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
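The search above walks the colon-separated FILESPATH (and, in the older code, FILESDIR), recording every candidate it tried so the final FetchError can list all searched locations. The core search-with-history idea, reduced to a sketch:

import os

def which_with_history(filespath, filename):
    # Return (found path or None, list of every path tried).
    searched = []
    for directory in filespath.split(':'):
        candidate = os.path.join(directory, filename)
        searched.append(candidate)
        if os.path.exists(candidate):
            return candidate, searched
    return None, searched

# found, tried = which_with_history('/layer/files:/layer/recipe', 'defconfig')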

View File

@@ -13,18 +13,19 @@ Usage in the recipe:
- name
- version
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};version=${PV}
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};ver=${PV}
The fetcher triggers off the existence of ud.localpath. If that exists and has the ".done" stamp, it's assumed the fetch is good/done
"""
import os
import sys
import urllib.request, urllib.parse, urllib.error
import urllib
import json
import subprocess
import signal
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import ChecksumError
@@ -32,6 +33,7 @@ from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import UnpackError
from bb.fetch2 import ParameterError
from distutils import spawn
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
@@ -78,7 +80,6 @@ class Npm(FetchMethod):
if not ud.version:
raise ParameterError("NPM fetcher requires a version parameter", ud.url)
ud.bbnpmmanifest = "%s-%s.deps.json" % (ud.pkgname, ud.version)
ud.bbnpmmanifest = ud.bbnpmmanifest.replace('/', '-')
ud.registry = "http://%s" % (ud.url.replace('npm://', '', 1).split(';'))[0]
prefixdir = "npm/%s" % ud.pkgname
ud.pkgdatadir = d.expand("${DL_DIR}/%s" % prefixdir)
@@ -86,14 +87,12 @@ class Npm(FetchMethod):
bb.utils.mkdirhier(ud.pkgdatadir)
ud.localpath = d.expand("${DL_DIR}/npm/%s" % ud.bbnpmmanifest)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
ud.prefixdir = prefixdir
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
self.basecmd += " --directory-prefix=%s " % prefixdir
ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0")
mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
mirrortarball = mirrortarball.replace('/', '-')
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
ud.mirrortarballs = [mirrortarball]
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0")
ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
def need_update(self, ud, d):
if os.path.exists(ud.localpath):
@@ -102,9 +101,8 @@ class Npm(FetchMethod):
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
dldir = d.getVar("DL_DIR")
runfetchcmd(command, d, quiet, workdir=dldir)
bb.fetch2.check_network_access(d, command)
runfetchcmd(command, d, quiet)
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
@@ -115,13 +113,16 @@ class Npm(FetchMethod):
bb.fatal("NPM package %s downloaded is not a tarball!" % file)
# Change to subdir before executing command
save_cwd = os.getcwd()
if not os.path.exists(destdir):
os.makedirs(destdir)
path = d.getVar('PATH')
os.chdir(destdir)
path = d.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, destdir))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True, cwd=destdir)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
@@ -133,17 +134,13 @@ class Npm(FetchMethod):
def unpack(self, ud, destdir, d):
dldir = d.getVar("DL_DIR")
with open("%s/npm/%s" % (dldir, ud.bbnpmmanifest)) as datafile:
dldir = d.getVar("DL_DIR", True)
depdumpfile = "%s-%s.deps.json" % (ud.pkgname, ud.version)
with open("%s/npm/%s" % (dldir, depdumpfile)) as datafile:
workobj = json.load(datafile)
dldir = "%s/%s" % (os.path.dirname(ud.localpath), ud.pkgname)
if 'subdir' in ud.parm:
unpackdir = '%s/%s' % (destdir, ud.parm.get('subdir'))
else:
unpackdir = '%s/npmpkg' % destdir
self._unpackdep(ud, ud.pkgname, workobj, unpackdir, dldir, d)
self._unpackdep(ud, ud.pkgname, workobj, "%s/npmpkg" % destdir, dldir, d)
def _parse_view(self, output):
'''
@@ -165,9 +162,7 @@ class Npm(FetchMethod):
pdata = json.loads('\n'.join(datalines))
return pdata
def _getdependencies(self, pkg, data, version, d, ud, optional=False, fetchedlist=None):
if fetchedlist is None:
fetchedlist = []
def _getdependencies(self, pkg, data, version, d, ud, optional=False):
pkgfullname = pkg
if version != '*' and not '/' in version:
pkgfullname += "@'%s'" % version
@@ -182,27 +177,17 @@ class Npm(FetchMethod):
if pkg_os:
if not isinstance(pkg_os, list):
pkg_os = [pkg_os]
blacklist = False
for item in pkg_os:
if item.startswith('!'):
blacklist = True
break
if (not blacklist and 'linux' not in pkg_os) or '!linux' in pkg_os:
if 'linux' not in pkg_os or '!linux' in pkg_os:
logger.debug(2, "Skipping %s since it's incompatible with Linux" % pkg)
return
#logger.debug(2, "Output URL is %s - %s - %s" % (ud.basepath, ud.basename, ud.localfile))
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
if outputurl in fetchedlist:
return
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
fetchedlist.append(outputurl)
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
dependencies.update(optionalDependencies)
depsfound = {}
optdepsfound = {}
data[pkg]['deps'] = {}
@@ -211,20 +196,13 @@ class Npm(FetchMethod):
optdepsfound[dep] = dependencies[dep]
else:
depsfound[dep] = dependencies[dep]
for dep, version in optdepsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, optional=True, fetchedlist=fetchedlist)
for dep, version in depsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, fetchedlist=fetchedlist)
for dep, version in optdepsfound.iteritems():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, optional=True)
for dep, version in depsfound.iteritems():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud)
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest, toplevel=True):
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest):
logger.debug(2, "NPM shrinkwrap file is %s" % data)
if toplevel:
name = data.get('name', None)
if name and name != pkg:
for obj in data.get('dependencies', []):
if obj == pkg:
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest, False)
return
outputurl = "invalid"
if ('resolved' not in data) or (not data['resolved'].startswith('http')):
# will be the case for ${PN}
@@ -233,7 +211,7 @@ class Npm(FetchMethod):
outputurl = runfetchcmd(fetchcmd, d, True)
else:
outputurl = data['resolved']
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
manifest[pkg] = {}
manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
manifest[pkg]['deps'] = {}
@@ -250,7 +228,7 @@ class Npm(FetchMethod):
if 'dependencies' in data:
for obj in data['dependencies']:
logger.debug(2, "Found dep is %s" % str(obj))
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'], False)
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'])
def download(self, ud, d):
"""Fetch url"""
@@ -259,32 +237,28 @@ class Npm(FetchMethod):
lockdown = {}
if not os.listdir(ud.pkgdatadir) and os.path.exists(ud.fullmirror):
dest = d.getVar("DL_DIR")
dest = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
save_cwd = os.getcwd()
os.chdir(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d)
os.chdir(save_cwd)
return
if ud.parm.get("noverify", None) != '1':
shwrf = d.getVar('NPM_SHRINKWRAP')
logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
if shwrf:
try:
with open(shwrf) as datafile:
shrinkobj = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
lckdf = d.getVar('NPM_LOCKDOWN')
logger.debug(2, "NPM lockdown file is %s" % lckdf)
if lckdf:
try:
with open(lckdf) as datafile:
lockdown = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
shwrf = d.getVar('NPM_SHRINKWRAP', True)
logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
try:
with open(shwrf) as datafile:
shrinkobj = json.load(datafile)
except:
logger.warn('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
lckdf = d.getVar('NPM_LOCKDOWN', True)
logger.debug(2, "NPM lockdown file is %s" % lckdf)
try:
with open(lckdf) as datafile:
lockdown = json.load(datafile)
except:
logger.warn('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
if ('name' not in shrinkobj):
self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
@@ -301,8 +275,10 @@ class Npm(FetchMethod):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
dldir = d.getVar("DL_DIR")
save_cwd = os.getcwd()
os.chdir(d.getVar("DL_DIR", True))
logger.info("Creating tarball of npm data")
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d,
workdir=dldir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=dldir)
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
os.chdir(save_cwd)
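One behavioural difference visible above is the fetchedlist parameter threaded through _getdependencies(): a collection of tarball URLs shared across the whole recursion, so a package that occurs several times in the dependency graph is downloaded only once. The idea in isolation (resolve and download are hypothetical helpers):

def walk_dependencies(pkg, resolve, download, fetched=None):
    # 'fetched' is shared by every recursive call, so each tarball URL
    # is downloaded at most once no matter how often it recurs.
    if fetched is None:
        fetched = set()
    meta = resolve(pkg)    # hypothetical: {'tarball': url, 'dependencies': {...}}
    url = meta['tarball']
    if url not in fetched:
        download(url)
        fetched.add(url)
    for dep in meta.get('dependencies', {}):
        walk_dependencies(dep, resolve, download, fetched)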

View File

@@ -10,6 +10,7 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -32,9 +33,8 @@ class Osc(FetchMethod):
ud.module = ud.parm["module"]
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.pkgdir = os.path.join(d.getVar('OSCDIR', True), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
@@ -47,7 +47,7 @@ class Osc(FetchMethod):
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
def _buildosccommand(self, ud, d, command):
"""
@@ -55,7 +55,7 @@ class Osc(FetchMethod):
command is "fetch", "update", "info"
"""
basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('protocol', 'osc')
@@ -84,25 +84,27 @@ class Osc(FetchMethod):
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
if os.access(os.path.join(d.getVar('OSCDIR', True), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d,
cleanup=[ud.localpath], workdir=os.path.join(ud.pkgdir + ud.path))
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return False
@@ -112,7 +114,7 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
config_path = os.path.join(d.getVar('OSCDIR', True), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
@@ -121,8 +123,8 @@ class Osc(FetchMethod):
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("build-root = %s\n" % d.getVar('WORKDIR', True))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST", True))
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)

View File

@@ -1,12 +1,14 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for perforce
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2016 Kodak Alaris, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,183 +25,163 @@ BitBake 'Fetch' implementation for perforce
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import subprocess
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Perforce(FetchMethod):
""" Class to fetch from perforce repositories """
def supports(self, ud, d):
""" Check to see if a given url can be fetched with perforce. """
return ud.type in ['p4']
def urldata_init(self, ud, d):
"""
Initialize perforce specific variables within url data. If P4CONFIG is
provided by the env, use it. If P4PORT is specified by the recipe, use
its values, which may override the settings in P4CONFIG.
"""
ud.basecmd = d.getVar("FETCHCMD_p4") or "/usr/bin/env p4"
ud.dldir = d.getVar("P4DIR") or (d.getVar("DL_DIR") + "/p4")
path = ud.url.split('://')[1]
path = path.split(';')[0]
delim = path.find('@');
def doparse(url, d):
parm = {}
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
(ud.user, ud.pswd) = path.split('@')[0].split(':')
ud.path = path.split('@')[1]
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
ud.path = path
(host, port) = d.getVar('P4PORT', False).split(':')
user = ""
pswd = ""
ud.usingp4config = False
p4port = d.getVar('P4PORT')
if path.find(";") != -1:
keys=[]
values=[]
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
keys.append(key)
values.append(value)
if p4port:
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
if ud.path.find('/...') >= 0:
ud.pathisdir = True
else:
ud.pathisdir = False
parm = dict(zip(keys, values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
ud.pkgdir = os.path.join(ud.dldir, cleanedhost, cleanedpath)
return host, path, user, pswd, parm
doparse = staticmethod(doparse)
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
Build a p4 commandline. Valid commands are "changes", "print", and
"files". depot_filename is the full path to the file in the depot
including the trailing '#rev' value.
"""
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
if "cset" in parm:
return parm["cset"];
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
if ud.user:
p4opt += ' -u "%s"' % (ud.user)
p4date = d.getVar("P4DATE", True)
if "revision" in parm:
depot += "#%s" % (parm["revision"])
elif "label" in parm:
depot += "@%s" % (parm["label"])
elif p4date:
depot += "@%s" % (p4date)
if ud.pswd:
p4opt += ' -P "%s"' % (ud.pswd)
p4cmd = d.getVar('FETCHCMD_p4', True) or "p4"
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
if ud.host and not ud.usingp4config:
p4opt += ' -p %s' % (ud.host)
return cset.split(' ')[1]
getcset = staticmethod(getcset)
if hasattr(ud, 'revision') and ud.revision:
pathnrev = '%s@%s' % (ud.path, ud.revision)
def urldata_init(self, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)
base_path = path.replace('/...', '')
base_path = self._strip_leading_slashes(base_path)
if "label" in parm:
version = parm["label"]
else:
pathnrev = '%s' % (ud.path)
version = Perforce.getcset(d, path, host, user, pswd, parm)
if depot_filename:
if ud.pathisdir: # Remove leading path to obtain filename
filename = depot_filename[len(ud.path)-1:]
else:
filename = depot_filename[depot_filename.rfind('/'):]
filename = filename[:filename.find('#')] # Remove trailing '#rev'
if command == 'changes':
p4cmd = '%s%s changes -m 1 //%s' % (ud.basecmd, p4opt, pathnrev)
elif command == 'print':
if depot_filename != None:
p4cmd = '%s%s print -o "p4/%s" "%s"' % (ud.basecmd, p4opt, filename, depot_filename)
else:
raise FetchError('No depot file name provided to p4 %s' % command, ud.url)
elif command == 'files':
p4cmd = '%s%s files //%s' % (ud.basecmd, p4opt, pathnrev)
else:
raise FetchError('Invalid p4 command %s' % command, ud.url)
return p4cmd
def _p4listfiles(self, ud, d):
"""
Return a list of the file names which are present in the depot using the
'p4 files' command, including trailing '#rev' file revision indicator
"""
p4cmd = self._buildp4command(ud, d, 'files')
bb.fetch2.check_network_access(d, p4cmd, ud.url)
p4fileslist = runfetchcmd(p4cmd, d, True)
p4fileslist = [f.rstrip() for f in p4fileslist.splitlines()]
if not p4fileslist:
raise FetchError('Unable to fetch listing of p4 files from %s@%s' % (ud.host, ud.path))
count = 0
filelist = []
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
return filelist
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base_path.replace('/', '.'), version), d)
def download(self, ud, d):
""" Get the list of files, fetch each one """
filelist = self._p4listfiles(ud, d)
if not filelist:
raise FetchError('No files found in depot %s@%s' % (ud.host, ud.path))
"""
Fetch urls
"""
bb.utils.remove(ud.pkgdir, True)
bb.utils.mkdirhier(ud.pkgdir)
(host, depot, user, pswd, parm) = Perforce.doparse(ud.url, d)
for afile in filelist:
p4fetchcmd = self._buildp4command(ud, d, 'print', afile)
bb.fetch2.check_network_access(d, p4fetchcmd, ud.url)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot[:depot.rfind('/')]
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup=[ud.localpath], workdir=ud.pkgdir)
module = parm.get('module', os.path.basename(path))
def clean(self, ud, d):
""" Cleanup p4 specific files and dirs"""
bb.utils.remove(ud.localpath)
bb.utils.remove(ud.pkgdir, True)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
def supports_srcrev(self):
return True
if pswd:
p4opt += " -P %s" % (pswd)
def _revision_key(self, ud, d, name):
""" Return a unique key for the url """
return 'p4:%s' % ud.pkgdir
if host:
p4opt += " -p %s" % (host)
def _latest_revision(self, ud, d, name):
""" Return the latest upstream scm revision number """
p4cmd = self._buildp4command(ud, d, "changes")
bb.fetch2.check_network_access(d, p4cmd, ud.url)
tip = runfetchcmd(p4cmd, d, True)
p4cmd = d.getVar('FETCHCMD_p4', True) or "p4"
if not tip:
raise FetchError('Could not determine the latest perforce changelist')
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(d.expand('${WORKDIR}'))
mktemp = d.getVar("FETCHCMD_p4mktemp", True) or d.expand("mktemp -d -q '${WORKDIR}/oep4.XXXXXX'")
tmpfile, errors = bb.process.run(mktemp)
tmpfile = tmpfile.strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", ud.url)
tipcset = tip.split(' ')[1]
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
def sortable_revision(self, ud, d, name):
""" Return a sortable revision number """
return False, self._build_revision(ud, d)
os.chdir(tmpfile)
logger.info("Fetch " + ud.url)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
p4file = [f.rstrip() for f in p4file.splitlines()]
def _build_revision(self, ud, d):
return ud.revision
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, ud.url)
count = 0
for file in p4file:
list = file.split()
if list[2] == "delete":
continue
dest = list[0][len(path)+1:]
where = dest.find("#")
subprocess.call("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]), shell=True)
count = count + 1
if count == 0:
logger.error()
raise FetchError("Fetch: No files gathered from the P4 fetch", ud.url)
runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)
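Both the old and new Perforce fetchers parse 'p4 files' output and skip entries whose last action was a delete. A sketch of that filtering step, assuming output lines of the form '//depot/path/file#3 - edit change 1234 (text)':

def live_depot_files(p4_files_output):
    # Drop depot files whose head revision action is 'delete'.
    filelist = []
    for line in p4_files_output.splitlines():
        if ' - ' not in line:
            continue
        depot_file, action_part = line.rstrip().split(' - ', 1)
        last_action = action_part.split()[0]
        if last_action == 'delete':
            continue  # file no longer exists at its head revision
        filelist.append(depot_file)
    return filelist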

View File

@@ -25,9 +25,9 @@ BitBake "Fetch" repo (git) implementation
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Repo(FetchMethod):
"""Class to fetch a module or modules from repo (git) repositories"""
@@ -45,25 +45,23 @@ class Repo(FetchMethod):
"master".
"""
ud.basecmd = d.getVar("FETCHCMD_repo") or "/usr/bin/env repo"
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = d.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch))
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
def download(self, ud, d):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
@@ -71,23 +69,24 @@ class Repo(FetchMethod):
else:
username = ""
repodir = os.path.join(codir, "repo")
bb.utils.mkdirhier(repodir)
if not os.path.exists(os.path.join(repodir, ".repo")):
bb.fetch2.check_network_access(d, "%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
bb.fetch2.check_network_access(d, "%s sync %s" % (ud.basecmd, ud.url), ud.url)
runfetchcmd("%s sync" % ud.basecmd, d, workdir=repodir)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.repo' --exclude='.git'"
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d, workdir=codir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False

View File

@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for Amazon AWS S3.
Class for fetching files from Amazon S3 using the AWS Command Line Interface.
The aws tool must be correctly installed and configured prior to use.
"""
# Copyright (C) 2017, Andre McCurdy <armccurdy@gmail.com>
#
# Based in part on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.request, urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with s3.
"""
return ud.type in ['s3']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.basecmd = d.getVar("FETCHCMD_s3") or "/usr/bin/env aws s3"
def download(self, ud, d):
"""
Fetch urls
Assumes localpath was called first
"""
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
runfetchcmd(cmd, d)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError("The aws cp command returned success for s3://%s%s but %s doesn't exist?!" % (ud.host, ud.path, ud.localpath))
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The aws cp command for s3://%s%s resulted in a zero size file?! Deleting and failing since this isn't right." % (ud.host, ud.path))
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL
"""
cmd = '%s ls s3://%s%s' % (ud.basecmd, ud.host, ud.path)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d)
# "aws s3 ls s3://mybucket/foo" will exit with success even if the file
# is not found, so check output of the command to confirm success.
if not output:
raise FetchError("The aws ls command for s3://%s%s gave empty output" % (ud.host, ud.path))
return True
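As the comment in checkstatus() says, 'aws s3 ls' exits 0 whether or not the object exists, so success has to be judged from the command's output rather than its exit code. A hedged standalone version of the same check:

import subprocess

def s3_object_exists(bucket, path):
    # Exit code 0 is not meaningful for 'aws s3 ls'; only non-empty
    # output confirms the key actually exists.
    output = subprocess.check_output(
        ['aws', 's3', 'ls', 's3://%s%s' % (bucket, path)],
        universal_newlines=True)
    return bool(output.strip())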

View File

@@ -61,11 +61,14 @@ SRC_URI = "sftp://user@host.example.com/dir/path.file.txt"
import os
import bb
import urllib.request, urllib.parse, urllib.error
import urllib
import commands
from bb import data
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
class SFTP(FetchMethod):
"""Class to fetch urls via 'sftp'"""
@@ -90,7 +93,7 @@ class SFTP(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, ud, d):
"""Fetch urls"""
@@ -102,7 +105,7 @@ class SFTP(FetchMethod):
port = '-P %d' % urlo.port
urlo.port = None
dldir = d.getVar('DL_DIR')
dldir = data.getVar('DL_DIR', d, True)
lpath = os.path.join(dldir, ud.localfile)
user = ''
@@ -118,7 +121,8 @@ class SFTP(FetchMethod):
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)
cmd = '%s %s %s %s' % (basecmd, port, commands.mkarg(remote),
commands.mkarg(lpath))
bb.fetch2.check_network_access(d, cmd, ud.url)
runfetchcmd(cmd, d)
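The quoting difference above (commands.mkarg versus plain interpolation) is a Python 2 versus 3 artifact: the commands module was removed in Python 3, and its shell-escaping role is filled by shlex.quote. A small sketch of the modern equivalent:

import shlex

def build_sftp_cmd(basecmd, port_arg, remote, local_path):
    # shlex.quote is the Python 3 replacement for Python 2's
    # commands.mkarg argument-escaping helper.
    return '%s %s %s %s' % (basecmd, port_arg,
                            shlex.quote(remote), shlex.quote(local_path))

# build_sftp_cmd('sftp', '-P 22', 'user@host.example.com:/dir/file', '/dl/file')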

View File

@@ -43,6 +43,7 @@ IETF secsh internet draft:
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
@@ -86,11 +87,11 @@ class SSH(FetchMethod):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
urldata.localpath = os.path.join(d.getVar('DL_DIR', True),
os.path.basename(os.path.normpath(path)))
def download(self, urldata, d):
dldir = d.getVar('DL_DIR')
dldir = d.getVar('DL_DIR', True)
m = __pattern__.match(urldata.url)
path = m.group('path')
@@ -113,10 +114,12 @@ class SSH(FetchMethod):
fr = host
fr += ':%s' % path
import commands
cmd = 'scp -B -r %s %s %s/' % (
portarg,
fr,
dldir
commands.mkarg(fr),
commands.mkarg(dldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)

View File

@@ -28,6 +28,7 @@ import sys
import logging
import bb
import re
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -49,7 +50,7 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.basecmd = d.getVar("FETCHCMD_svn") or "/usr/bin/env svn --non-interactive --trust-server-cert"
ud.basecmd = d.getVar('FETCHCMD_svn', True)
ud.module = ud.parm["module"]
@@ -59,17 +60,16 @@ class Svn(FetchMethod):
ud.path_spec = ud.parm["path_spec"]
# Create paths to svn checkouts
svndir = d.getVar("SVNDIR") or (d.getVar("DL_DIR") + "/svn")
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(svndir, ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildsvncommand(self, ud, d, command):
"""
@@ -79,9 +79,9 @@ class Svn(FetchMethod):
proto = ud.parm.get('protocol', 'svn')
svn_ssh = None
if proto == "svn+ssh" and "ssh" in ud.parm:
svn_ssh = ud.parm["ssh"]
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
svn_rsh = ud.parm["rsh"]
svnroot = ud.host + ud.path
@@ -113,8 +113,8 @@ class Svn(FetchMethod):
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
if svn_ssh:
svncmd = "SVN_SSH=\"%s\" %s" % (svn_ssh, svncmd)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
return svncmd
@@ -126,32 +126,35 @@ class Svn(FetchMethod):
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + ud.url)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case it's an older working copy format
try:
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
runfetchcmd(ud.basecmd + " upgrade", d)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d, workdir=ud.moddir)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.svn'"
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d,
cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean SVN specific files and dirs """
@@ -173,7 +176,7 @@ class Svn(FetchMethod):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"), ud.url)
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "log1"), d, True)
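For svn+ssh URLs the fetcher prefixes the command with SVN_SSH, the environment variable Subversion consults for its ssh tunnel command. An illustrative sketch (URL, directory and ssh options are placeholders):

import os
import subprocess

def svn_checkout_over_ssh(url, destdir, ssh_cmd='ssh -p 2222'):
    env = dict(os.environ)
    # Subversion uses SVN_SSH as the tunnel command for svn+ssh:// URLs.
    env['SVN_SSH'] = ssh_cmd
    subprocess.check_call(['svn', 'checkout', url, destdir], env=env)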

View File

@@ -30,10 +30,9 @@ import tempfile
import subprocess
import os
import logging
import errno
import bb
import bb.progress
import urllib.request, urllib.parse, urllib.error
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
@@ -42,27 +41,6 @@ from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from wget output.
Note: relies on --progress=dot (with -v or without -q/-nv) being
specified on the wget command line.
"""
def __init__(self, d):
super(WgetProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def writeline(self, line):
percs = re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line)
if percs:
progress = int(percs[-1][0])
rate = percs[-1][1] + '/s'
self.update(progress, rate)
return False
return True
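The regular expression in writeline() above plucks the last 'NN% RATE' pair out of a wget --progress=dot line. Its behaviour in isolation, on a hypothetical sample line:

import re

line = "  750K .......... .......... ..........  45% 1.2M 3s"
percs = re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line)
if percs:
    progress = int(percs[-1][0])    # -> 45
    rate = percs[-1][1] + '/s'      # -> '1.2M/s'
    print(progress, rate)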
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
def supports(self, ud, d):
@@ -84,19 +62,17 @@ class Wget(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
ud.localfile = data.expand(urllib.unquote(ud.host + ud.path).replace("/", "."), d)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
def _runwget(self, ud, d, command, quiet, workdir=None):
progresshandler = WgetProgressHandler(d)
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
bb.fetch2.check_network_access(d, command)
runfetchcmd(command, d, quiet)
def download(self, ud, d):
"""Fetch urls"""
@@ -104,13 +80,10 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
if 'downloadfilename' in ud.parm:
dldir = d.getVar("DL_DIR")
dldir = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
fetchcmd += " -O " + dldir + os.sep + ud.localfile
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again...
@@ -132,11 +105,11 @@ class Wget(FetchMethod):
return True
def checkstatus(self, fetch, ud, d, try_again=True):
import urllib.request, urllib.error, urllib.parse, socket, http.client
from urllib.response import addinfourl
import urllib2, socket, httplib
from urllib import addinfourl
from bb.fetch2 import FetchConnectionCache
class HTTPConnectionCache(http.client.HTTPConnection):
class HTTPConnectionCache(httplib.HTTPConnection):
if fetch.connection_cache:
def connect(self):
"""Connect to the host and port specified in __init__."""
@@ -152,7 +125,7 @@ class Wget(FetchMethod):
if self._tunnel_host:
self._tunnel()
class CacheHTTPHandler(urllib.request.HTTPHandler):
class CacheHTTPHandler(urllib2.HTTPHandler):
def http_open(self, req):
return self.do_open(HTTPConnectionCache, req)
@@ -166,7 +139,7 @@ class Wget(FetchMethod):
- geturl(): return the original request URL
- code: HTTP status code
"""
host = req.host
host = req.get_host()
if not host:
raise urllib2.URLError('no host given')
@@ -174,7 +147,7 @@ class Wget(FetchMethod):
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update(dict((k, v) for k, v in list(req.headers.items())
headers.update(dict((k, v) for k, v in req.headers.items()
if k not in headers))
# We want to make an HTTP/1.1 request, but the addinfourl
@@ -191,7 +164,7 @@ class Wget(FetchMethod):
headers["Connection"] = "Keep-Alive" # Works for HTTP/1.0
headers = dict(
(name.title(), val) for name, val in list(headers.items()))
(name.title(), val) for name, val in headers.items())
if req._tunnel_host:
tunnel_headers = {}
@@ -204,25 +177,12 @@ class Wget(FetchMethod):
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
h.request(req.get_method(), req.selector, req.data, headers)
except socket.error as err: # XXX what error?
h.request(req.get_method(), req.get_selector(), req.data, headers)
except socket.error, err: # XXX what error?
# Don't close connection when cache is enabled.
# Instead, try to detect connections that are no longer
# usable (for example, closed unexpectedly) and remove
# them from the cache.
if fetch.connection_cache is None:
h.close()
elif isinstance(err, OSError) and err.errno == errno.EBADF:
# This happens when the server closes the connection despite the Keep-Alive.
# Apparently urllib then uses the file descriptor, expecting it to be
# connected, when in reality the connection is already gone.
# We let the request fail and expect it to be
# tried once more ("try_again" in check_status()),
# with the dead connection removed from the cache.
# If it still fails, we give up, which can happen for bad
# HTTP proxy settings.
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
raise urllib2.URLError(err)
else:
try:
r = h.getresponse(buffering=True)
@@ -250,7 +210,6 @@ class Wget(FetchMethod):
return ""
def close(self):
pass
closed = False
resp = addinfourl(fp_dummy(), r.msg, req.get_full_url())
resp.code = r.status
@@ -263,7 +222,7 @@ class Wget(FetchMethod):
return resp
class HTTPMethodFallback(urllib.request.BaseHandler):
class HTTPMethodFallback(urllib2.BaseHandler):
"""
Fallback to GET if HEAD is not allowed (405 HTTP error)
"""
@@ -271,11 +230,11 @@ class Wget(FetchMethod):
fp.read()
fp.close()
newheaders = dict((k,v) for k,v in list(req.headers.items())
newheaders = dict((k,v) for k,v in req.headers.items()
if k.lower() not in ("content-length", "content-type"))
return self.parent.open(urllib.request.Request(req.get_full_url(),
return self.parent.open(urllib2.Request(req.get_full_url(),
headers=newheaders,
origin_req_host=req.origin_req_host,
origin_req_host=req.get_origin_req_host(),
unverifiable=True))
"""
@@ -284,58 +243,41 @@ class Wget(FetchMethod):
"""
http_error_403 = http_error_405
"""
Some servers (e.g. FusionForge) return 406 Not Acceptable when they
actually mean 405 Method Not Allowed.
"""
http_error_406 = http_error_405
class FixedHTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
class FixedHTTPRedirectHandler(urllib2.HTTPRedirectHandler):
"""
urllib2.HTTPRedirectHandler resets the method to GET on redirect,
when we want to follow redirects using the original method.
"""
def redirect_request(self, req, fp, code, msg, headers, newurl):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq = urllib2.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = lambda: req.get_method()
return newreq
exported_proxies = export_proxies(d)
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
if export_proxies:
handlers.append(urllib.request.ProxyHandler())
handlers.append(urllib2.ProxyHandler())
handlers.append(CacheHTTPHandler())
# XXX: Since Python 2.7.9 ssl cert validation is enabled by default
# (see PEP-0476); this causes verification errors on some https servers,
# so disable it by default.
import ssl
if hasattr(ssl, '_create_unverified_context'):
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib.request.build_opener(*handlers)
handlers.append(urllib2.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib2.build_opener(*handlers)
try:
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r = urllib2.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
# optional Accept header is set.
r.add_header("Accept", "*/*")
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if ud.user:
add_basic_auth(ud.user, r)
try:
import netrc, urllib.parse
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
with opener.open(r) as response:
pass
except urllib.error.URLError as e:
opener.open(r)
except urllib2.URLError as e:
if try_again:
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
@@ -421,16 +363,17 @@ class Wget(FetchMethod):
Run fetch checkstatus to get directory information
"""
f = tempfile.NamedTemporaryFile()
with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True, workdir=workdir)
fetchresult = f.read()
except bb.fetch2.BBFetchException:
fetchresult = ""
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True)
fetchresult = f.read()
except bb.fetch2.BBFetchException:
fetchresult = ""
f.close()
return fetchresult
def _check_latest_version(self, url, package, package_regex, current_version, ud, d):
@@ -557,7 +500,7 @@ class Wget(FetchMethod):
# The src.rpm extension was added only for the rpm package. It can be removed if the
# rpm package will always be considered as having to be manually upgraded
psuffix_regex = "(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
psuffix_regex = "(tar\.gz|tgz|tar\.bz2|zip|xz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile("(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
@@ -565,7 +508,7 @@ class Wget(FetchMethod):
self.suffix_regex_comp = re.compile(psuffix_regex)
# compile regex, can be specific by package or generic regex
pn_regex = d.getVar('UPSTREAM_CHECK_REGEX')
pn_regex = d.getVar('UPSTREAM_CHECK_REGEX', True)
if pn_regex:
package_custom_regex_comp = re.compile(pn_regex)
else:
@@ -586,7 +529,7 @@ class Wget(FetchMethod):
sanity check to ensure same name and type.
"""
package = ud.path.split("/")[-1]
current_version = ['', d.getVar('PV'), '']
current_version = ['', d.getVar('PV', True), '']
"""possible to have no version in pkg name, such as spectrum-fw"""
if not re.search("\d+", package):
@@ -601,7 +544,7 @@ class Wget(FetchMethod):
bb.debug(3, "latest_versionstring, regex: %s" % (package_regex.pattern))
uri = ""
regex_uri = d.getVar("UPSTREAM_CHECK_URI")
regex_uri = d.getVar("UPSTREAM_CHECK_URI", True)
if not regex_uri:
path = ud.path.split(package)[0]
@@ -610,7 +553,7 @@ class Wget(FetchMethod):
dirver_regex = re.compile("(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
pn = d.getVar('PN', True)
dirver = m.group('dirver')
dirver_pn_regex = re.compile("%s\d?" % (re.escape(pn)))
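The checkstatus() path above issues a HEAD request and falls back to GET when a server answers 405/406, by registering a urllib handler. A minimal standalone sketch of that pattern with urllib.request; the handler name and URL are placeholders, not part of the diff:

import urllib.request

class HeadFallbackHandler(urllib.request.BaseHandler):
    """Retry with GET when a server rejects HEAD (405/406), as above."""
    def http_error_405(self, req, fp, code, msg, headers):
        fp.read()
        fp.close()
        # Re-issue the request without the entity headers, letting the
        # default GET method through this time.
        newheaders = dict((k, v) for k, v in req.headers.items()
                          if k.lower() not in ("content-length", "content-type"))
        return self.parent.open(urllib.request.Request(req.get_full_url(),
                                                       headers=newheaders,
                                                       origin_req_host=req.origin_req_host,
                                                       unverifiable=True))
    http_error_406 = http_error_405

opener = urllib.request.build_opener(HeadFallbackHandler)
r = urllib.request.Request("https://example.com/releases/")  # placeholder URL
r.get_method = lambda: "HEAD"
r.add_header("Accept", "*/*")
with opener.open(r) as response:
    print(response.getcode())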

View File

@@ -27,9 +27,6 @@ import sys
import logging
import optparse
import warnings
import fcntl
import time
import traceback
import bb
from bb import event
@@ -39,17 +36,11 @@ from bb import ui
from bb import server
from bb import cookerdata
import bb.server.process
import bb.server.xmlrpcclient
logger = logging.getLogger("BitBake")
class BBMainException(Exception):
pass
class BBMainFatal(bb.BBHandledException):
pass
def present_options(optionlist):
if len(optionlist) > 1:
return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
@@ -66,6 +57,9 @@ class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
elif option.dest == 'servertype':
valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
return optparse.IndentedHelpFormatter.format_option(self, option)
@@ -106,12 +100,11 @@ def import_extension_module(pkg, modulename, checkattr):
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__(pkg.__name__, fromlist=[modulename])
module = __import__(pkg.__name__, fromlist = [modulename])
return getattr(module, modulename)
except AttributeError:
modules = present_options(list_extension_modules(pkg, checkattr))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. '
'Valid extension modules: %s' % (modulename, pkg.__name__, modules))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. Valid extension modules: %s' % (modulename, pkg.__name__, present_options(list_extension_modules(pkg, checkattr))))
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others
warnlog = logging.getLogger("BitBake.Warnings")
@@ -122,7 +115,7 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warning(s)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
@@ -136,204 +129,194 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
formatter = BitbakeHelpFormatter(),
version = "BitBake Build Tool Core version %s" % bb.__version__,
usage = """%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-b", "--buildfile", help = "Execute tasks from a specific .bb recipe directly. WARNING: Does not handle any dependencies from other recipes.",
action = "store", dest = "buildfile", default = None)
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-k", "--continue", help = "Continue as much as possible after an error. While the target that failed and anything depending on it cannot be built, as much as possible will be built before stopping.",
action = "store_false", dest = "abort", default = True)
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-a", "--tryaltconfigs", help = "Continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-f", "--force", help = "Force the specified targets/task to run (invalidating any existing stamp file).",
action = "store_true", dest = "force", default = False)
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-c", "--cmd", help = "Specify the task to execute. The exact options available depend on the metadata. Some examples might be 'compile' or 'populate_sysroot' or 'listtasks' may give a list of the tasks available.",
action = "store", dest = "cmd")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified task such as 'compile' and then run the default task for the specified target(s).",
action = "store", dest = "invalidate_stamp")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-r", "--read", help = "Read the specified file before bitbake.conf.",
action = "append", dest = "prefile", default = [])
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-R", "--postread", help = "Read the specified file after bitbake.conf.",
action = "append", dest = "postfile", default = [])
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-v", "--verbose", help = "Output more log message data to the terminal.",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-n", "--dry-run", help = "Don't execute, just go through the motions.",
action = "store_true", dest = "dry_run", default = False)
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-S", "--dump-signatures", help = "Dump out the signature construction information, with no task execution. The SIGNATURE_HANDLER parameter is passed to the handler. Two common values are none and printdiff but the handler may define more/less. none means only dump the signature, printdiff means compare the dumped signature with the cached one.",
action = "append", dest = "dump_signatures", default = [], metavar="SIGNATURE_HANDLER")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-p", "--parse-only", help = "Quit after parsing the BB recipes.",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-s", "--show-versions", help = "Show current and preferred versions of all recipes.",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-e", "--environment", help = "Show the global or per-recipe environment complete with information about where variables were set/changed.",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-g", "--graphviz", help = "Save dependency tree information for the specified targets in the dot syntax.",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
parser.add_option("-P", "--profile", help = "Profile the command and save reports.",
action = "store_true", dest = "profile", default = False)
env_ui = os.environ.get('BITBAKE_UI', None)
default_ui = env_ui or 'knotty'
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", help = "The user interface to use (@CHOICES@ - default %default).",
action="store", dest="ui", default=default_ui)
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
parser.add_option("-t", "--servertype", help = "Choose which server type to use (@CHOICES@ - default %default).",
action = "store", dest = "servertype", default = "process")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--token", help = "Specify the connection token to be used when connecting to a remote server.",
action = "store", dest = "xmlrpctoken")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not.",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("", "--server-only", help = "Run bitbake without a UI, only starting a server (cooker) process.",
action = "store_true", dest = "server_only", default = False)
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake xmlrpc server to bind to.")
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to.",
action = "store", dest = "bind", default = False)
parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
default=os.getenv("BB_SERVER_TIMEOUT"),
help="Set timeout to unload bitbake server due to inactivity, "
"set to -1 means no unload, "
"default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks. sstate will be ignored and everything needed, built.",
action = "store_true", dest = "nosetscene", default = False)
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--setscene-only", help = "Only run setscene tasks, don't run any real tasks.",
action = "store_true", dest = "setsceneonly", default = False)
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", help = "Connect to the specified server.",
action = "store", dest = "remote_server", default = False)
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", help = "Terminate the remote server.",
action = "store_true", dest = "kill_server", default = False)
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate any running bitbake server.")
parser.add_option("", "--observe-only", help = "Connect to a server as an observing-only client.",
action = "store_true", dest = "observe_only", default = False)
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="append", dest="runall",
help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
parser.add_option("", "--runonly", action="append", dest="runonly",
help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
parser.add_option("", "--status-only", help = "Check the status of the remote bitbake server.",
action = "store_true", dest = "status_only", default = False)
parser.add_option("-w", "--write-log", help = "Writes the event log of the build to a bitbake event json file. Use '' (empty string) to assign the name automatically.",
action = "store", dest = "writeeventlog")
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
# some environmental variables set also configuration options
if "BBSERVER" in os.environ:
options.servertype = "xmlrpc"
options.remote_server = os.environ["BBSERVER"]
if options.quiet and options.debug:
parser.error("options --quiet and --debug are mutually exclusive")
if "BBTOKEN" in os.environ:
options.xmlrpctoken = os.environ["BBTOKEN"]
# use configuration files from environment variables
if "BBPRECONF" in os.environ:
options.prefile.append(os.environ["BBPRECONF"])
if "BBPOSTCONF" in os.environ:
options.postfile.append(os.environ["BBPOSTCONF"])
if "BBEVENTLOG" in os.environ:
options.writeeventlog = os.environ["BBEVENTLOG"]
# fill in proper log name if not supplied
if options.writeeventlog is not None and len(options.writeeventlog) == 0:
from datetime import datetime
eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
options.writeeventlog = eventlog
import datetime
options.writeeventlog = "bitbake_eventlog_%s.json" % datetime.datetime.now().strftime("%Y%m%d%H%M%S")
if options.bind:
try:
#Checking that the port is a number and is a ':' delimited value
(host, port) = options.bind.split(':')
port = int(port)
except (ValueError,IndexError):
raise BBMainException("FATAL: Malformed host:port bind parameter")
options.xmlrpcinterface = (host, port)
else:
options.xmlrpcinterface = (None, 0)
# if BBSERVER says to autodetect, let's do that
if options.remote_server:
[host, port] = options.remote_server.split(":", 2)
port = int(port)
# use automatic port if port set to -1, meaning read it from
# the bitbake.lock file; this is a bit tricky, but we always expect
# to be in the base of the build directory if we need to have a
# chance to start the server later, anyway
if port == -1:
lock_location = "./bitbake.lock"
# we try to read the address at all times; if the server is not started,
# we'll try to start it after the first connect fails, below
try:
lf = open(lock_location, 'r')
remotedef = lf.readline()
[host, port] = remotedef.split(":")
port = int(port)
lf.close()
options.remote_server = remotedef
except Exception as e:
raise BBMainException("Failed to read bitbake.lock (%s), invalid port" % str(e))
return options, targets[1:]
def start_server(servermodule, configParams, configuration, features):
server = servermodule.BitBakeServer()
single_use = not configParams.server_only
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)), single_use)
configuration.interface = [ server.serverImpl.host, server.serverImpl.port ]
else:
server.initServer(single_use=single_use)
configuration.interface = []
try:
configuration.setServerRegIdleCallback(server.getServerIdleCB())
cooker = bb.cooker.BBCooker(configuration, features)
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
exc_info = sys.exc_info()
while hasattr(server, "event_queue"):
try:
import queue
except ImportError:
import Queue as queue
try:
event = server.event_queue.get(block=False)
except (queue.Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise exc_info[1], None, exc_info[2]
server.detach()
cooker.lock.close()
return server
def bitbake_main(configParams, configuration):
# Python multiprocessing requires /dev/shm on Linux
@@ -345,60 +328,51 @@ def bitbake_main(configParams, configuration):
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
# Reopen with O_SYNC (unbuffered)
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
configuration.setConfigParameters(configParams)
if configParams.server_only and configParams.remote_server:
ui_module = import_extension_module(bb.ui, configParams.ui, 'main')
servermodule = import_extension_module(bb.server, configParams.servertype, 'BitBakeServer')
if configParams.server_only:
if configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--server-only' is defined, we must set the "
"servertype as 'xmlrpc'.\n")
if not configParams.bind:
raise BBMainException("FATAL: The '--server-only' option requires a name/address "
"to bind to with the -B option.\n")
if configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
else "the '--remote-server' option"))
else "the '--remote-server' option" ))
if configParams.observe_only and not (configParams.remote_server or configParams.bind):
if configParams.bind and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.remote_server and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--remote-server' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.observe_only and (not configParams.remote_server or configParams.bind):
raise BBMainException("FATAL: '--observe-only' can only be used by UI clients "
"connecting to a server.\n")
if configParams.kill_server and not configParams.remote_server:
raise BBMainException("FATAL: '--kill-server' can only be used to terminate a remote server")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
configuration.debug_domains)
server_connection, ui_module = setup_bitbake(configParams, configuration)
# No server connection
if server_connection is None:
if configParams.status_only:
return 1
if configParams.kill_server:
return 0
if not configParams.server_only:
if configParams.status_only:
server_connection.terminate()
return 0
try:
for event in bb.event.ui_queue:
server_connection.events.queue_event(event)
bb.event.ui_queue = []
return ui_module.main(server_connection.connection, server_connection.events,
configParams)
finally:
server_connection.terminate()
else:
return 0
return 1
def setup_bitbake(configParams, configuration, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
if not configParams.status_only:
@@ -408,101 +382,59 @@ def setup_bitbake(configParams, configuration, extrafeatures=None):
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
if configParams.server_only:
featureset = []
ui_module = None
else:
ui_module = import_extension_module(bb.ui, configParams.ui, 'main')
featureset = []
if not configParams.server_only:
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
if extrafeatures:
for feature in extrafeatures:
if not feature in featureset:
featureset.append(feature)
if configParams.server_only:
for param in ('prefile', 'postfile'):
value = getattr(configParams, param)
if value:
setattr(configuration, "%s_server" % param, value)
param = "%s_server" % param
server_connection = None
if configParams.remote_server:
# Connect to a remote XMLRPC server
server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset,
configParams.observe_only, configParams.xmlrpctoken)
else:
retries = 8
while retries:
try:
topdir, lock = lockBitbake()
sockname = topdir + "/bitbake.sock"
if lock:
if configParams.status_only or configParams.kill_server:
logger.info("bitbake server is not running.")
lock.close()
return None, None
# we start a server with a given configuration
logger.info("Starting bitbake server...")
# Clear the event queue since we already displayed messages
bb.event.ui_queue = []
server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset)
else:
logger.info("Reconnecting to bitbake server...")
if not os.path.exists(sockname):
print("Previous bitbake instance shutting down?, waiting to retry...")
i = 0
lock = None
# Wait for 5s or until we can get the lock
while not lock and i < 50:
time.sleep(0.1)
_, lock = lockBitbake()
i += 1
if lock:
bb.utils.unlockfile(lock)
raise bb.server.process.ProcessTimeout("Bitbake still shutting down as socket exists but no lock?")
if not configParams.server_only:
try:
server_connection = bb.server.process.connectProcessServer(sockname, featureset)
except EOFError:
# The server may have been shutting down but not closed the socket yet. If that happened,
# ignore it.
pass
if server_connection or configParams.server_only:
break
except BBMainFatal:
raise
except (Exception, bb.server.process.ProcessTimeout) as e:
if not retries:
raise
retries -= 1
if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError)):
logger.info("Retrying server connection...")
else:
logger.info("Retrying server connection... (%s)" % traceback.format_exc())
if not retries:
bb.fatal("Unable to connect to bitbake server, or start one")
if retries < 5:
time.sleep(5)
if configParams.kill_server:
server_connection.connection.terminateServer()
server_connection.terminate()
if not configParams.remote_server:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration, featureset)
bb.event.ui_queue = []
logger.info("Terminated bitbake server.")
return None, None
else:
# we start a stub server that is actually a XMLRPClient that connects to a real server
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only, configParams.xmlrpctoken)
server.saveConnectionDetails(configParams.remote_server)
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
if not configParams.server_only:
try:
server_connection = server.establishConnection(featureset)
except Exception as e:
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
return server_connection, ui_module
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
return 0
def lockBitbake():
topdir = bb.cookerdata.findTopdir()
if not topdir:
bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?")
raise BBMainFatal
lockfile = topdir + "/bitbake.lock"
return topdir, bb.utils.lockfile(lockfile, False, False)
server_connection.setupEventQueue()
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
if configParams.status_only:
server_connection.terminate()
return 0
try:
return ui_module.main(server_connection.connection, server_connection.events, configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host, server.serverImpl.port))
return 0
return 1
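parseCommandLine() above validates the -B/--bind value by splitting on ':' and converting the port. The same check in isolation, as a sketch; the function name is illustrative and BBMainException is stubbed as defined earlier in the file:

class BBMainException(Exception):
    pass

def parse_bind(value):
    # Mirrors the host:port validation in parseCommandLine() above.
    try:
        host, port = value.split(':')
        return (host, int(port))
    except (ValueError, IndexError):
        raise BBMainException("FATAL: Malformed host:port bind parameter")

print(parse_bind("localhost:8200"))  # ('localhost', 8200)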

View File

@@ -129,7 +129,7 @@ def getDiskData(BBDirs, configuration):
bb.utils.mkdirhier(path)
dev = getMountedDev(path)
# Use path/action as the key
devDict[(path, action)] = [dev, minSpace, minInode]
devDict[os.path.join(path, action)] = [dev, minSpace, minInode]
return devDict
@@ -141,7 +141,7 @@ def getInterval(configuration):
spaceDefault = 50 * 1024 * 1024
inodeDefault = 5 * 1024
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL")
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL", True)
if not interval:
return spaceDefault, inodeDefault
else:
@@ -179,7 +179,7 @@ class diskMonitor:
self.enableMonitor = False
self.configuration = configuration
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
if BBDirs:
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
@@ -205,25 +205,22 @@ class diskMonitor:
""" Take action for the monitor """
if self.enableMonitor:
diskUsage = {}
for k, attributes in self.devDict.items():
path, action = k
dev, minSpace, minInode = attributes
for k in self.devDict:
path = os.path.dirname(k)
action = os.path.basename(k)
dev = self.devDict[k][0]
minSpace = self.devDict[k][1]
minInode = self.devDict[k][2]
st = os.statvfs(path)
# The available free space, integer number
# The free space, floating point number
freeSpace = st.f_bavail * st.f_frsize
# Send all relevant information in the event.
freeSpaceRoot = st.f_bfree * st.f_frsize
totalSpace = st.f_blocks * st.f_frsize
diskUsage[dev] = bb.event.DiskUsageSample(freeSpace, freeSpaceRoot, totalSpace)
if minSpace and freeSpace < minSpace:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
logger.warning("The free space of %s (%s) is running low (%.3fGB left)" % \
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[k] = freeSpace
@@ -238,7 +235,7 @@ class diskMonitor:
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
# The free inodes, integer number
# The free inodes, floating point number
freeInode = st.f_favail
if minInode and freeInode < minInode:
@@ -249,7 +246,7 @@ class diskMonitor:
continue
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
logger.warning("The free inode of %s (%s) is running low (%.3fK left)" % \
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
(path, dev, freeInode / 1024.0))
self.preFreeI[k] = freeInode
@@ -263,6 +260,4 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
bb.event.fire(bb.event.MonitorDiskEvent(diskUsage), self.configuration)
return
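The monitor derives all of its numbers from os.statvfs(). The same arithmetic as a standalone sketch on a POSIX host; the path is a placeholder, since BitBake monitors the paths named in BB_DISKMON_DIRS:

import os

st = os.statvfs("/tmp")  # placeholder path
freeSpace = st.f_bavail * st.f_frsize      # space available to non-root users
freeSpaceRoot = st.f_bfree * st.f_frsize   # space available to root
totalSpace = st.f_blocks * st.f_frsize
freeInode = st.f_favail
print("%.3fGB left, %.3fK inodes free" % (freeSpace / 1024 / 1024 / 1024.0,
                                          freeInode / 1024.0))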

View File

@@ -40,7 +40,6 @@ class BBLogFormatter(logging.Formatter):
VERBOSE = logging.INFO - 1
NOTE = logging.INFO
PLAIN = logging.INFO + 1
VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
WARNING = logging.WARNING
CRITICAL = logging.CRITICAL
@@ -52,14 +51,13 @@ class BBLogFormatter(logging.Formatter):
VERBOSE: 'NOTE',
NOTE : 'NOTE',
PLAIN : '',
VERBNOTE: 'NOTE',
WARNING : 'WARNING',
ERROR : 'ERROR',
CRITICAL: 'ERROR',
}
color_enabled = False
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(29,38))
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(29,38)
COLORS = {
DEBUG3 : CYAN,
@@ -68,7 +66,6 @@ class BBLogFormatter(logging.Formatter):
VERBOSE : BASECOLOR,
NOTE : BASECOLOR,
PLAIN : BASECOLOR,
VERBNOTE: BASECOLOR,
WARNING : YELLOW,
ERROR : RED,
CRITICAL: RED,
@@ -93,9 +90,8 @@ class BBLogFormatter(logging.Formatter):
if self.color_enabled:
record = self.colorize(record)
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
@@ -185,12 +181,9 @@ def constructLogOptions():
debug_domains["BitBake.%s" % domainarg] = logging.DEBUG - dlevel + 1
return level, debug_domains
def addDefaultlogFilter(handler, cls = BBLogFilter, forcelevel=None):
def addDefaultlogFilter(handler, cls = BBLogFilter):
level, debug_domains = constructLogOptions()
if forcelevel is not None:
level = forcelevel
cls(handler, level, debug_domains)
#
@@ -204,25 +197,3 @@ def fatal(msgdomain, msg):
logger = logging.getLogger("BitBake")
logger.critical(msg)
sys.exit(1)
def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers=False, color='auto'):
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:
logger.addHandler(console)
else:
logger.handlers = [console]
logger.setLevel(level)
return logger
def has_console_handler(logger):
for handler in logger.handlers:
if isinstance(handler, logging.StreamHandler):
if handler.stream in [sys.stderr, sys.stdout]:
return True
return False
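logger_create() added above is a self-contained helper. A hedged usage sketch, assuming bitbake's bb module is importable and using an arbitrary logger name:

import sys
import logging
import bb.msg

# Create a standalone logger with a BBLogFormatter-based console handler.
log = bb.msg.logger_create("MyTool", output=sys.stderr,
                           level=logging.INFO, color='auto')
log.info("hello from MyTool")
print(bb.msg.has_console_handler(log))  # True: a stderr StreamHandler is attached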

View File

@@ -84,10 +84,6 @@ def update_cache(f):
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
def clear_cache():
global __mtime_cache
__mtime_cache = {}
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
@@ -127,16 +123,15 @@ def init_parser(d):
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = d.getVar("BBPATH")
bbpath = d.getVar("BBPATH", True)
newfn, attempts = bb.utils.which(bbpath, fn, history=True)
for af in attempts:
mark_dependency(d, af)
if not newfn:
raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
else:
mark_dependency(d, fn)
mark_dependency(d, fn)
if not os.path.isfile(fn):
raise IOError(errno.ENOENT, "file %s not found" % fn)
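resolve_file() above walks BBPATH entries via bb.utils.which(). A simplified stand-in for that lookup, not the BitBake implementation; the BBPATH value is illustrative:

import os
import errno

def which(pathstring, name):
    # Simplified stand-in for bb.utils.which(..., history=True).
    attempts = []
    for entry in pathstring.split(':'):
        candidate = os.path.join(entry, name)
        attempts.append(candidate)
        if os.path.isfile(candidate):
            return candidate, attempts
    return "", attempts

bbpath = "conf:meta/conf"  # illustrative BBPATH value
fn, attempts = which(bbpath, "bitbake.conf")
if not fn:
    raise IOError(errno.ENOENT, "file bitbake.conf not found in %s" % bbpath)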

View File

@@ -21,7 +21,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
@@ -30,6 +31,8 @@ import itertools
from bb import methodpool
from bb.parse import logger
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
def eval(self, data):
for statement in self:
@@ -67,33 +70,6 @@ class ExportNode(AstNode):
def eval(self, data):
data.setVarFlag(self.var, "export", 1, op = 'exported')
class UnsetNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
self.var = var
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVar(self.var,**loginfo)
class UnsetFlagNode(AstNode):
def __init__(self, filename, lineno, var, flag):
AstNode.__init__(self, filename, lineno)
self.var = var
self.flag = flag
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVarFlag(self.var, self.flag, **loginfo)
class DataNode(AstNode):
"""
Various data related updates. For the sake of sanity
@@ -130,6 +106,7 @@ class DataNode(AstNode):
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
op = "immediate"
val = e.expand(groupd["value"], key + "[:=]")
elif "append" in groupd and groupd["append"] != None:
@@ -162,7 +139,7 @@ class DataNode(AstNode):
data.setVar(key, val, parsing=True, **loginfo)
class MethodNode(AstNode):
tr_tbl = str.maketrans('/.+-@%&', '_______')
tr_tbl = string.maketrans('/.+-@%&', '_______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
AstNode.__init__(self, filename, lineno)
@@ -294,12 +271,6 @@ def handleInclude(statements, filename, lineno, m, force):
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleUnset(statements, filename, lineno, m):
statements.append(UnsetNode(filename, lineno, m.group(1)))
def handleUnsetFlag(statements, filename, lineno, m):
statements.append(UnsetFlagNode(filename, lineno, m.group(1), m.group(2)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
@@ -335,39 +306,32 @@ def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
def runAnonFuncs(d):
def finalize(fn, d, variant = None):
all_handlers = {}
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
try:
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
tasklist = d.getVar('__BBTASKS', False) or []
bb.build.add_tasks(tasklist, d)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.parse.siggen.finalise(fn, d, variant)
bb.data.expandKeys(d)
runAnonFuncs(d)
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
tasklist = d.getVar('__BBTASKS', False) or []
bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d)
bb.build.add_tasks(tasklist, d)
bb.parse.siggen.finalise(fn, d, variant)
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
bb.event.fire(bb.event.RecipeParsed(fn), d)
finally:
bb.event.set_handlers(saved_handlers)
bb.event.fire(bb.event.RecipeParsed(fn), d)
def _create_variants(datastores, names, function, onlyfinalise):
def create_variant(name, orig_d, arg = None):
@@ -377,16 +341,37 @@ def _create_variants(datastores, names, function, onlyfinalise):
function(arg or name, new_d)
datastores[name] = new_d
for variant in list(datastores.keys()):
for variant, variant_d in datastores.items():
for name in names:
if not variant:
# Based on main recipe
create_variant(name, datastores[""])
create_variant(name, variant_d)
else:
create_variant("%s-%s" % (variant, name), datastores[variant], name)
create_variant("%s-%s" % (variant, name), variant_d, name)
def _expand_versions(versions):
def expand_one(version, start, end):
for i in xrange(start, end + 1):
ver = _bbversions_re.sub(str(i), version, 1)
yield ver
versions = iter(versions)
while True:
try:
version = next(versions)
except StopIteration:
break
range_ver = _bbversions_re.search(version)
if not range_ver:
yield version
else:
newversions = expand_one(version, int(range_ver.group("from")),
int(range_ver.group("to")))
versions = itertools.chain(newversions, versions)
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
@@ -401,7 +386,51 @@ def multi_finalize(fn, d):
d.setVar("__SKIPPED", e.args[0])
datastores = {"": safe_d}
extended = d.getVar("BBCLASSEXTEND") or ""
versions = (d.getVar("BBVERSIONS", True) or "").split()
if versions:
pv = orig_pv = d.getVar("PV", True)
baseversions = {}
def verfunc(ver, d, pv_d = None):
if pv_d is None:
pv_d = d
overrides = d.getVar("OVERRIDES", True).split(":")
pv_d.setVar("PV", ver)
overrides.append(ver)
bpv = baseversions.get(ver) or orig_pv
pv_d.setVar("BPV", bpv)
overrides.append(bpv)
d.setVar("OVERRIDES", ":".join(overrides))
versions = list(_expand_versions(versions))
for pos, version in enumerate(list(versions)):
try:
pv, bpv = version.split(":", 2)
except ValueError:
pass
else:
versions[pos] = pv
baseversions[pv] = bpv
if pv in versions and not baseversions.get(pv):
versions.remove(pv)
else:
pv = versions.pop()
# This is necessary because our existing main datastore
# has already been finalized with the old PV, we need one
# that's been finalized with the new PV.
d = bb.data.createCopy(safe_d)
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipRecipe as e:
d.setVar("__SKIPPED", e.args[0])
_create_variants(datastores, versions, verfunc, onlyfinalise)
extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with arguments, for e.g. multilib
# an example is as follows:
@@ -419,7 +448,7 @@ def multi_finalize(fn, d):
else:
extendedmap[ext] = ext
pn = d.getVar("PN")
pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
@@ -431,13 +460,17 @@ def multi_finalize(fn, d):
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc, onlyfinalise)
for variant in datastores.keys():
for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, datastores[variant], variant)
finalize(fn, variant_d, variant)
except bb.parse.SkipRecipe as e:
datastores[variant].setVar("__SKIPPED", e.args[0])
variant_d.setVar("__SKIPPED", e.args[0])
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d
return datastores
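The BBVERSIONS support above expands range markers such as '1.0.[0-2]' into concrete versions. The same generator in Python 3 form (range instead of xrange), runnable standalone:

import re
import itertools

_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

def expand_versions(versions):
    # Python 3 equivalent of _expand_versions() above.
    versions = iter(versions)
    while True:
        try:
            version = next(versions)
        except StopIteration:
            break
        m = _bbversions_re.search(version)
        if not m:
            yield version
        else:
            expanded = (_bbversions_re.sub(str(i), version, 1)
                        for i in range(int(m.group("from")), int(m.group("to")) + 1))
            versions = itertools.chain(expanded, versions)

print(list(expand_versions(["1.0.[0-2]", "2.0"])))
# ['1.0.0', '1.0.1', '1.0.2', '2.0']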

View File

@@ -25,7 +25,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
@@ -66,7 +66,7 @@ def inherit(files, fn, lineno, d):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
bbpath = d.getVar("BBPATH", True)
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
@@ -87,17 +87,17 @@ def get_statements(filename, absolute_filename, base_name):
try:
return cached_statements[absolute_filename]
except KeyError:
with open(absolute_filename, 'r') as f:
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = f.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file = open(absolute_filename, 'r')
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = file.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file.close()
if __inpython__:
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
@@ -131,6 +131,9 @@ def handle(fn, d, include):
abs_fn = resolve_file(fn, d)
if include:
bb.parse.mark_dependency(d, abs_fn)
# actual loading
statements = get_statements(fn, abs_fn, base_name)
@@ -141,7 +144,7 @@ def handle(fn, d, include):
try:
statements.eval(d)
except bb.parse.SkipRecipe:
d.setVar("__SKIPPED", True)
bb.data.setVar("__SKIPPED", True, d)
if include == 0:
return { "" : d }

View File

@@ -32,8 +32,8 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(?P<exp>export\s*)?
(?P<var>[a-zA-Z0-9\-~_+.${}/]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
@@ -56,9 +56,7 @@ __config_regexp__ = re.compile( r"""
""", re.X)
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
def init(data):
topdir = data.getVar('TOPDIR', False)
@@ -69,38 +67,30 @@ def init(data):
def supports(fn, d):
return fn[-5:] == ".conf"
def include(parentfn, fns, lineno, data, error_out):
def include(parentfn, fn, lineno, data, error_out):
"""
error_out: A string indicating the verb (e.g. "include", "inherit") to be
used in a ParseError that will be raised if the file to be included could
not be included. Specify False to avoid raising an error in this case.
"""
fns = data.expand(fns)
parentfn = data.expand(parentfn)
# "include" or "require" accept zero to n space-separated file names to include.
for fn in fns.split():
include_single_file(parentfn, fn, lineno, data, error_out)
def include_single_file(parentfn, fn, lineno, data, error_out):
"""
Helper function for include() which does not expand or split its parameters.
"""
if parentfn == fn: # prevent infinite recursion
return None
fn = data.expand(fn)
parentfn = data.expand(parentfn)
if not os.path.isabs(fn):
dname = os.path.dirname(parentfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH"))
bbpath = "%s:%s" % (dname, data.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, fn, history=True)
if abs_fn and bb.parse.check_dependency(data, abs_fn):
logger.warning("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE')))
logger.warn("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE', True)))
for af in attempts:
bb.parse.mark_dependency(data, af)
if abs_fn:
fn = abs_fn
elif bb.parse.check_dependency(data, fn):
logger.warning("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE')))
logger.warn("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))
try:
bb.parse.handle(fn, data, True)
@@ -134,6 +124,9 @@ def handle(fn, data, include):
abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')
if include:
bb.parse.mark_dependency(data, abs_fn)
statements = ast.StatementGroup()
lineno = 0
while True:
@@ -192,16 +185,6 @@ def feeder(lineno, s, fn, statements):
ast.handleExport(statements, fn, lineno, m)
return
m = __unset_regexp__.match(s)
if m:
ast.handleUnset(statements, fn, lineno, m)
return
m = __unset_flag_regexp__.match(s)
if m:
ast.handleUnsetFlag(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list
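The unset/unset-flag handling added in feeder() is driven by the two regexes shown above. A quick demonstration of what they match; the sample lines are illustrative:

import re

__unset_regexp__ = re.compile(r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$")
__unset_flag_regexp__ = re.compile(r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$")

m = __unset_regexp__.match("unset BAD_VAR")
print(m.group(1))                  # BAD_VAR
m = __unset_flag_regexp__.match("unset do_patch[noexec]")
print(m.group(1), m.group(2))      # do_patch noexec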

View File

@@ -28,7 +28,11 @@ import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import sqlite3
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -88,9 +92,9 @@ class SQLTable(collections.MutableMapping):
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
if not isinstance(key, str):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, str):
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')
data = self._execute("SELECT * from %s where key=?;" %
@@ -174,7 +178,7 @@ class PersistData(object):
"""
Return a list of key + value pairs for a domain
"""
return list(self.data[domain].items())
return self.data[domain].items()
def getValue(self, domain, key):
"""
@@ -203,8 +207,8 @@ def connect(database):
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)
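persist() above hands back an SQLTable, a string-keyed, string-valued mapping over a single sqlite table. A minimal sketch of that key/value contract using plain sqlite3; the table name, key, and value are illustrative:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS cache(key TEXT PRIMARY KEY NOT NULL, value TEXT);")
conn.execute("INSERT OR REPLACE INTO cache VALUES (?, ?);",
             ("git://example.com/repo", "abc123"))
row = conn.execute("SELECT value FROM cache WHERE key=?;",
                   ("git://example.com/repo",)).fetchone()
print(row[0])  # abc123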

View File

@@ -17,7 +17,7 @@ class CmdError(RuntimeError):
self.msg = msg
def __str__(self):
if not isinstance(self.command, str):
if not isinstance(self.command, basestring):
cmd = subprocess.list2cmdline(self.command)
else:
cmd = self.command
@@ -94,53 +94,34 @@ def _logged_communicate(pipe, log, input, extrafiles):
if data is not None:
func(data)
def read_all_pipes(log, rin, outdata, errdata):
rlist = rin
stdoutbuf = b""
stderrbuf = b""
try:
while pipe.poll() is None:
rlist = rin
try:
r,w,e = select.select (rlist, [], [], 1)
except OSError as e:
if e.errno != errno.EINTR:
raise
try:
r,w,e = select.select (rlist, [], [], 1)
except OSError as e:
if e.errno != errno.EINTR:
raise
readextras(r)
if pipe.stdout in r:
data = stdoutbuf + pipe.stdout.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
if pipe.stdout in r:
data = pipe.stdout.read()
if data is not None:
outdata.append(data)
log.write(data)
log.flush()
stdoutbuf = b""
except UnicodeDecodeError:
stdoutbuf = data
if pipe.stderr in r:
data = stderrbuf + pipe.stderr.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
if pipe.stderr in r:
data = pipe.stderr.read()
if data is not None:
errdata.append(data)
log.write(data)
log.flush()
stderrbuf = b""
except UnicodeDecodeError:
stderrbuf = data
try:
# Read all pipes while the process is open
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
readextras(r)
# Process closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
finally:
log.flush()
readextras([fobj for fobj, _ in extrafiles])
if pipe.stdout is not None:
pipe.stdout.close()
if pipe.stderr is not None:
@@ -154,7 +135,7 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
if not extrafiles:
extrafiles = []
if isinstance(cmd, str) and not "shell" in options:
if isinstance(cmd, basestring) and not "shell" in options:
options["shell"] = True
try:
@@ -169,10 +150,6 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
stdout, stderr = _logged_communicate(pipe, log, input, extrafiles)
else:
stdout, stderr = pipe.communicate(input)
if not stdout is None:
stdout = stdout.decode("utf-8")
if not stderr is None:
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
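The stdout/stderr buffering added in _logged_communicate() above exists because a read() can split a multi-byte UTF-8 sequence. A standalone sketch of that incremental-decode idea:

stdoutbuf = b""

def feed(chunk):
    # Keep undecodable bytes buffered until the rest of the sequence
    # arrives, as the stdout/stderr handling above does.
    global stdoutbuf
    data = stdoutbuf + chunk
    try:
        text = data.decode("utf-8")
        stdoutbuf = b""
        return text
    except UnicodeDecodeError:
        stdoutbuf = data
        return ""

print(repr(feed(b"caf\xc3")))  # '' - incomplete sequence stays buffered
print(repr(feed(b"\xa9!")))    # 'café!'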

View File

@@ -1,276 +0,0 @@
"""
BitBake progress handling code
"""
# Copyright (C) 2016 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import re
import time
import inspect
import bb.event
import bb.build
class ProgressHandler(object):
"""
Base class that can pretend to be a file object well enough to be
used to build objects to intercept console output and determine the
progress of some operation.
"""
def __init__(self, d, outfile=None):
self._progress = 0
self._data = d
self._lastevent = 0
if outfile:
self._outfile = outfile
else:
self._outfile = sys.stdout
def _fire_progress(self, taskprogress, rate=None):
"""Internal function to fire the progress event"""
bb.event.fire(bb.build.TaskProgress(taskprogress, rate), self._data)
def write(self, string):
self._outfile.write(string)
def flush(self):
self._outfile.flush()
def update(self, progress, rate=None):
ts = time.time()
if progress > 100:
progress = 100
if progress != self._progress or self._lastevent + 1 < ts:
self._fire_progress(progress, rate)
self._lastevent = ts
self._progress = progress
class LineFilterProgressHandler(ProgressHandler):
"""
A ProgressHandler variant that provides the ability to filter out
the lines if they contain progress information. Additionally, it
filters out anything before the last carriage return on a line. This can
be used to keep the logs clean of output that we've only enabled for
getting progress, assuming that can be done on a per-line
basis.
"""
def __init__(self, d, outfile=None):
self._linebuffer = ''
super(LineFilterProgressHandler, self).__init__(d, outfile)
def write(self, string):
self._linebuffer += string
while True:
breakpos = self._linebuffer.find('\n') + 1
if breakpos == 0:
break
line = self._linebuffer[:breakpos]
self._linebuffer = self._linebuffer[breakpos:]
# Drop any carriage returns and anything that precedes them
lbreakpos = line.rfind('\r') + 1
if lbreakpos:
line = line[lbreakpos:]
if self.writeline(line):
super(LineFilterProgressHandler, self).write(line)
def writeline(self, line):
return True
class BasicProgressHandler(ProgressHandler):
def __init__(self, d, regex=r'(\d+)%', outfile=None):
super(BasicProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
percs = self._regex.findall(string)
if percs:
progress = int(percs[-1])
self.update(progress)
super(BasicProgressHandler, self).write(string)
class OutOfProgressHandler(ProgressHandler):
def __init__(self, d, regex, outfile=None):
super(OutOfProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
nums = self._regex.findall(string)
if nums:
progress = (float(nums[-1][0]) / float(nums[-1][1])) * 100
self.update(progress)
super(OutOfProgressHandler, self).write(string)
class MultiStageProgressReporter(object):
"""
Class which allows reporting progress without the caller
having to know where it is in the overall sequence. Useful
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level, with calls
to next_stage() and update() made wherever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""
Initialise the progress reporter.
Parameters:
* d: the datastore (needed for firing the events)
* stage_weights: a list of weight values, one for each stage.
The value is scaled internally so you only need to specify
values relative to other values in the list, so if there
are two stages and the first takes 2s and the second takes
10s you would specify [2, 10] (or [1, 5], it doesn't matter).
* debug: specify True (and ensure you call finish() at the end)
in order to show a printout of the calculated stage weights
based on timing each stage. Use this to determine what the
weights should be when you're not sure.
"""
self._data = d
total = sum(stage_weights)
self._stage_weights = [float(x)/total for x in stage_weights]
self._stage = -1
self._base_progress = 0
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
self._debug = debug
self._finished = False
if self._debug:
self._last_time = time.time()
self._stage_times = []
self._stage_total = None
self._callers = []
def _fire_progress(self, taskprogress):
bb.event.fire(bb.build.TaskProgress(taskprogress), self._data)
def next_stage(self, stage_total=None):
"""
Move to the next stage.
Parameters:
* stage_total: optional total for progress within the stage,
see update() for details
NOTE: this must be called once to enter the first stage, before any update() calls.
"""
self._stage += 1
self._stage_total = stage_total
if self._stage == 0:
# First stage
if self._debug:
self._last_time = time.time()
else:
if self._stage < len(self._stage_weights):
self._base_progress = sum(self._stage_weights[:self._stage]) * 100
if self._debug:
currtime = time.time()
self._stage_times.append(currtime - self._last_time)
self._last_time = currtime
self._callers.append(inspect.getouterframes(inspect.currentframe())[1])
elif not self._debug:
bb.warn('ProgressReporter: current stage beyond declared number of stages')
self._base_progress = 100
self._fire_progress(self._base_progress)
def update(self, stage_progress):
"""
Update progress within the current stage.
Parameters:
* stage_progress: progress value within the stage. If stage_total
was specified when next_stage() was last called, then this
value is considered to be out of stage_total, otherwise it should
be a percentage value from 0 to 100.
"""
if self._stage_total:
stage_progress = (float(stage_progress) / self._stage_total) * 100
if self._stage < 0:
bb.warn('ProgressReporter: update called before first call to next_stage()')
elif self._stage < len(self._stage_weights):
progress = self._base_progress + (stage_progress * self._stage_weights[self._stage])
else:
progress = self._base_progress
if progress > 100:
progress = 100
self._fire_progress(progress)
def finish(self):
if self._finished:
return
self._finished = True
if self._debug:
import math
self._stage_times.append(time.time() - self._last_time)
mintime = max(min(self._stage_times), 0.01)
self._callers.append(None)
stage_weights = [int(math.ceil(x / mintime)) for x in self._stage_times]
bb.warn('Stage weights: %s' % stage_weights)
out = []
for stage_weight, caller in zip(stage_weights, self._callers):
if caller:
out.append('Up to %s:%d: %d' % (caller[1], caller[2], stage_weight))
else:
out.append('Up to finish: %d' % stage_weight)
bb.warn('Stage times:\n %s' % '\n '.join(out))
class MultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
Version of MultiStageProgressReporter intended for use with
standalone processes (such as preparing the runqueue)
"""
def __init__(self, d, processname, stage_weights, debug=False):
self._processname = processname
self._started = False
MultiStageProgressReporter.__init__(self, d, stage_weights, debug)
def start(self):
if not self._started:
bb.event.fire(bb.event.ProcessStarted(self._processname, 100), self._data)
self._started = True
def _fire_progress(self, taskprogress):
if taskprogress == 0:
self.start()
return
bb.event.fire(bb.event.ProcessProgress(self._processname, taskprogress), self._data)
def finish(self):
MultiStageProgressReporter.finish(self)
bb.event.fire(bb.event.ProcessFinished(self._processname), self._data)
class DummyMultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
A MultiStageProcessProgressReporter that accepts the calls and does nothing
with them, so callers can avoid sprinkling "if progress_reporter:" checks
"""
def __init__(self):
MultiStageProcessProgressReporter.__init__(self, "", None, [])
def _fire_progress(self, taskprogress, rate=None):
pass
def start(self):
pass
def next_stage(self, stage_total=None):
pass
def update(self, stage_progress):
pass
def finish(self):
pass
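
For context, here is a minimal sketch of how the progress API deleted above is typically driven from BitBake python task code. It is illustrative only: the datastore d is supplied by BitBake, and prepare_inputs()/process_item() are hypothetical helpers, not names from this diff.

import bb

def do_mytask(d):
    # Two stages weighted 1:5 -- the second is expected to take ~5x longer
    progress = bb.progress.MultiStageProgressReporter(d, [1, 5])
    progress.next_stage()                  # enter the first stage
    prepare_inputs()                       # hypothetical helper
    progress.next_stage(stage_total=100)   # second stage reports out of 100
    for i in range(100):
        process_item(i)                    # hypothetical helper
        progress.update(i + 1)
    progress.finish()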


@@ -48,6 +48,7 @@ def findProviders(cfgData, dataCache, pkg_pn = None):
# Need to ensure data store is expanded
localdata = data.createCopy(cfgData)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
preferred_versions = {}
@@ -122,11 +123,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn)
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn, True)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn)
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn, True)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION")
preferred_v = cfgData.getVar("PREFERRED_VERSION", True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
@@ -244,17 +245,17 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
# First add PREFERRED_VERSIONS
for pn in sorted(pkg_pn):
for pn in pkg_pn:
sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
if preferred_versions[pn][1]:
eligible.append(preferred_versions[pn][1])
# Now add latest versions
for pn in sorted(sortpkg_pn):
for pn in sortpkg_pn:
if pn in preferred_versions and preferred_versions[pn][1]:
continue
preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
@@ -288,7 +289,7 @@ def filterProviders(providers, item, cfgData, dataCache):
eligible = _filterProviders(providers, item, cfgData, dataCache)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item, True)
if prefervar:
dataCache.preferred[item] = prefervar
@@ -317,7 +318,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
eligible = _filterProviders(providers, item, cfgData, dataCache)
# First try and match any PREFERRED_RPROVIDER entry
prefervar = cfgData.getVar('PREFERRED_RPROVIDER_%s' % item)
prefervar = cfgData.getVar('PREFERRED_RPROVIDER_%s' % item, True)
foundUnique = False
if prefervar:
for p in eligible:
@@ -344,7 +345,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
@@ -401,7 +402,7 @@ def getRuntimeProviders(dataCache, rdepend):
return rproviders
def buildWorldTargetList(dataCache, task=None):
def buildWorldTargetList(dataCache):
"""
Build package list for "bitbake world"
"""
@@ -412,9 +413,6 @@ def buildWorldTargetList(dataCache, task=None):
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
if task and task not in dataCache.task_deps[f]['tasks']:
logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
terminal = False
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
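
The getVar pairs in the hunks above reflect the datastore API change where variable expansion became the default. A hedged before/after sketch (the variable name is just an example):

# Older API: expansion had to be requested with a positional flag
preferred_v = cfgData.getVar("PREFERRED_VERSION", True)
# Newer API: getVar() expands by default; opt out explicitly if needed
preferred_v = cfgData.getVar("PREFERRED_VERSION")
raw_v = cfgData.getVar("PREFERRED_VERSION", expand=False)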


@@ -527,7 +527,7 @@ def utility_sed(name, args, interp, env, stdin, stdout, stderr, debugflags):
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Scan pattern arguments and append a space if necessary
for i in range(len(args)):
for i in xrange(len(args)):
if not RE_SED.search(args[i]):
continue
args[i] = args[i] + ' '


@@ -474,7 +474,7 @@ class Environment:
"""
# Save and remove previous arguments
prevargs = []
for i in range(int(self._env['#'])):
for i in xrange(int(self._env['#'])):
i = str(i+1)
prevargs.append(self._env[i])
del self._env[i]
@@ -488,7 +488,7 @@ class Environment:
return prevargs
def get_positional_args(self):
return [self._env[str(i+1)] for i in range(int(self._env['#']))]
return [self._env[str(i+1)] for i in xrange(int(self._env['#']))]
def get_variables(self):
return dict(self._env)
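
The range/xrange pairs above are routine Python 3 porting: Python 3 removed xrange and made range itself lazy, so swapping the name preserves behaviour. For example, this iterates in constant memory on Python 3, just as xrange did on Python 2:

total = sum(i for i in range(1000000))   # no million-element list is built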


@@ -20,7 +20,7 @@ except NameError:
from Set import Set as set
from ply import lex
from bb.pysh.sherrors import *
from sherrors import *
class NeedMore(Exception):
pass


@@ -10,11 +10,11 @@
import os.path
import sys
import bb.pysh.pyshlex as pyshlex
import pyshlex
tokens = pyshlex.tokens
from ply import yacc
import bb.pysh.sherrors as sherrors
import sherrors
class IORedirect:
def __init__(self, op, filename, io_number=None):


@@ -1,116 +0,0 @@
"""
BitBake 'remotedata' module
Provides support for using a datastore from the bitbake client
"""
# Copyright (C) 2016 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import bb.data
class RemoteDatastores:
"""Used on the server side to manage references to server-side datastores"""
def __init__(self, cooker):
self.cooker = cooker
self.datastores = {}
self.locked = []
self.nextindex = 1
def __len__(self):
return len(self.datastores)
def __getitem__(self, key):
if key is None:
return self.cooker.data
else:
return self.datastores[key]
def items(self):
return self.datastores.items()
def store(self, d, locked=False):
"""
Put a datastore into the collection. If locked=True then the datastore
is understood to be managed externally and cannot be released by calling
release().
"""
idx = self.nextindex
self.datastores[idx] = d
if locked:
self.locked.append(idx)
self.nextindex += 1
return idx
def check_store(self, d, locked=False):
"""
Put a datastore into the collection if it's not already in there;
in either case return the index
"""
for key, val in self.datastores.items():
if val is d:
idx = key
break
else:
idx = self.store(d, locked)
return idx
def release(self, idx):
"""Discard a datastore in the collection"""
if idx in self.locked:
raise Exception('Tried to release locked datastore %d' % idx)
del self.datastores[idx]
def receive_datastore(self, remote_data):
"""Receive a datastore object sent from the client (as prepared by transmit_datastore())"""
dct = dict(remote_data)
d = bb.data_smart.DataSmart()
d.dict = dct
while True:
if '_remote_data' in dct:
dsindex = dct['_remote_data']['_content']
del dct['_remote_data']
if dsindex is None:
dct['_data'] = self.cooker.data.dict
else:
dct['_data'] = self.datastores[dsindex].dict
break
elif '_data' in dct:
idct = dict(dct['_data'])
dct['_data'] = idct
dct = idct
else:
break
return d
@staticmethod
def transmit_datastore(d):
"""Prepare a datastore object for sending over IPC from the client end"""
# FIXME content might be a dict, need to turn that into a list as well
def copy_dicts(dct):
if '_remote_data' in dct:
dsindex = dct['_remote_data']['_content'].dsindex
newdct = dct.copy()
newdct['_remote_data'] = {'_content': dsindex}
return list(newdct.items())
elif '_data' in dct:
newdct = dct.copy()
newdata = copy_dicts(dct['_data'])
if newdata:
newdct['_data'] = newdata
return list(newdct.items())
return None
main_dict = copy_dicts(d.dict)
return main_dict
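
A hedged sketch of the round trip these two helpers were designed for; cooker, the datastore d, and the send/receive transport are assumptions that only exist inside a running BitBake server:

# Client side: flatten the datastore into picklable (key, value) lists
payload = RemoteDatastores.transmit_datastore(d)
send_over_ipc(payload)                            # hypothetical transport

# Server side: rebuild a DataSmart backed by the server's datastores
rds = RemoteDatastores(cooker)
d2 = rds.receive_datastore(receive_over_ipc())    # hypothetical transport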

File diff suppressed because it is too large.


@@ -18,4 +18,82 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
""" Base code for Bitbake server process
Have a common base for that all Bitbake server classes ensures a consistent
approach to the interface, and minimize risks associated with code duplication.
"""
""" BaseImplServer() the base class for all XXServer() implementations.
These classes contain the actual code that runs the server side, i.e.
listens for the commands and executes them. Although these implementations
contain all the data of the original bitbake command, i.e the cooker instance,
they may well run on a different process or even machine.
"""
class BaseImplServer():
def __init__(self):
self._idlefuns = {}
def addcooker(self, cooker):
self.cooker = cooker
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefuns[function] = data
""" BitBakeBaseServerConnection class is the common ancestor to all
BitBakeServerConnection classes.
These classes control the remote server. The only command currently
implemented is the terminate() command.
"""
class BitBakeBaseServerConnection():
def __init__(self, serverImpl):
pass
def terminate(self):
pass
def setupEventQueue(self):
pass
""" BitBakeBaseServer class is the common ancestor to all Bitbake servers
Derive this class in order to implement a BitBakeServer which is the
controlling stub for the actual server implementation
"""
class BitBakeBaseServer(object):
def initServer(self):
self.serverImpl = None # we ensure a runtime crash if not overloaded
self.connection = None
return
def addcooker(self, cooker):
self.cooker = cooker
self.serverImpl.addcooker(cooker)
def getServerIdleCB(self):
return self.serverImpl.register_idle_function
def saveConnectionDetails(self):
return
def detach(self):
return
def establishConnection(self, featureset):
raise NotImplementedError("Must redefine the %s.establishConnection()" % self.__class__.__name__)
def endSession(self):
self.connection.terminate()
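
As a rough illustration of the contract these base classes define (not code from the diff), a concrete server derives from them roughly like this:

class MyImplServer(BaseImplServer):
    def serve_forever(self):
        # A real implementation would poll self._idlefuns and
        # dispatch incoming commands to self.cooker here
        pass

class MyServer(BitBakeBaseServer):
    def initServer(self):
        self.serverImpl = MyImplServer()

    def establishConnection(self, featureset):
        self.connection = BitBakeBaseServerConnection(self.serverImpl)
        return self.connection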


@@ -22,254 +22,122 @@
import bb
import bb.event
import itertools
import logging
import multiprocessing
import threading
import array
import os
import signal
import sys
import time
import select
import socket
import subprocess
import errno
import re
import datetime
import bb.server.xmlrpcserver
from bb import daemonize
from multiprocessing import queues
from Queue import Empty
from multiprocessing import Event, Process, util, Queue, Pipe, queues, Manager
from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
logger = logging.getLogger('BitBake')
class ProcessTimeout(SystemExit):
pass
class ServerCommunicator():
def __init__(self, connection, event_handle, server):
self.connection = connection
self.event_handle = event_handle
self.server = server
class ProcessServer(multiprocessing.Process):
def runCommand(self, command):
# @todo try/except
self.connection.send(command)
if not self.server.is_alive():
raise SystemExit
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
for idx in range(0,4): # 0, 1, 2, 3
if self.connection.poll(5):
return self.connection.recv()
else:
bb.warn("Timeout while attempting to communicate with bitbake server")
bb.fatal("Gave up; Too many tries: timeout while attempting to communicate with bitbake server")
except KeyboardInterrupt:
pass
def getEventHandle(self):
return self.event_handle.value
class EventAdapter():
"""
Adapter to wrap our event queue since the caller (bb.event) expects to
call a send() method, but our actual queue only has put()
"""
def __init__(self, queue):
self.queue = queue
def send(self, event):
try:
self.queue.put(event)
except Exception as err:
print("EventAdapter puked: %s" % str(err))
class ProcessServer(Process, BaseImplServer):
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
def __init__(self, lock, sock, sockname):
multiprocessing.Process.__init__(self)
self.command_channel = False
self.command_channel_reply = False
def __init__(self, command_channel, event_queue, featurelist):
BaseImplServer.__init__(self)
Process.__init__(self)
self.command_channel = command_channel
self.event_queue = event_queue
self.event = EventAdapter(event_queue)
self.featurelist = featurelist
self.quit = False
self.heartbeat_seconds = 1 # default, BB_HEARTBEAT_EVENT will be checked once we have a datastore.
self.next_heartbeat = time.time()
self.event_handle = None
self.haveui = False
self.lastui = False
self.xmlrpc = False
self._idlefuns = {}
self.bitbake_lock = lock
self.sock = sock
self.sockname = sockname
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefuns[function] = data
self.quitin, self.quitout = Pipe()
self.event_handle = multiprocessing.Value("i")
def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle.value = bb.event.register_UIHhandler(self, True)
if self.xmlrpcinterface[0]:
self.xmlrpc = bb.server.xmlrpcserver.BitBakeXMLRPCServer(self.xmlrpcinterface, self.cooker, self)
print("Bitbake XMLRPC server address: %s, server port: %s" % (self.xmlrpc.host, self.xmlrpc.port))
heartbeat_event = self.cooker.data.getVar('BB_HEARTBEAT_EVENT')
if heartbeat_event:
try:
self.heartbeat_seconds = float(heartbeat_event)
except:
bb.warn('Ignoring invalid BB_HEARTBEAT_EVENT=%s, must be a float specifying seconds.' % heartbeat_event)
self.timeout = self.server_timeout or self.cooker.data.getVar('BB_SERVER_TIMEOUT')
try:
if self.timeout:
self.timeout = float(self.timeout)
except:
bb.warn('Ignoring invalid BB_SERVER_TIMEOUT=%s, must be a float specifying seconds.' % self.timeout)
try:
self.bitbake_lock.seek(0)
self.bitbake_lock.truncate()
if self.xmlrpc:
self.bitbake_lock.write("%s %s:%s\n" % (os.getpid(), self.xmlrpc.host, self.xmlrpc.port))
else:
self.bitbake_lock.write("%s\n" % (os.getpid()))
self.bitbake_lock.flush()
except Exception as e:
print("Error writing to lock file: %s" % str(e))
pass
if self.cooker.configuration.profile:
try:
import cProfile as profile
except:
import profile
prof = profile.Profile()
ret = profile.Profile.runcall(prof, self.main)
prof.dump_stats("profile.log")
bb.utils.process_profilelog("profile.log")
print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
else:
ret = self.main()
return ret
bb.cooker.server_main(self.cooker, self.main)
def main(self):
self.cooker.pre_serve()
# Ignore SIGINT within the server, as all SIGINT handling is done by
# the UI and communicated to us
self.quitin.close()
signal.signal(signal.SIGINT, signal.SIG_IGN)
bb.utils.set_process_name("Cooker")
ready = []
self.controllersock = False
fds = [self.sock]
if self.xmlrpc:
fds.append(self.xmlrpc)
print("Entering server connection loop")
def disconnect_client(self, fds):
if not self.haveui:
return
print("Disconnecting Client")
fds.remove(self.controllersock)
fds.remove(self.command_channel)
bb.event.unregister_UIHhandler(self.event_handle, True)
self.command_channel_reply.writer.close()
self.event_writer.writer.close()
del self.event_writer
self.controllersock.close()
self.controllersock = False
self.haveui = False
self.lastui = time.time()
self.cooker.clientComplete()
if self.timeout is None:
print("No timeout, exiting.")
self.quit = True
while not self.quit:
if self.sock in ready:
self.controllersock, address = self.sock.accept()
if self.haveui:
print("Dropping connection attempt as we have a UI %s" % (str(ready)))
self.controllersock.close()
else:
print("Accepting %s" % (str(ready)))
fds.append(self.controllersock)
if self.controllersock in ready:
try:
print("Connecting Client")
ui_fds = recvfds(self.controllersock, 3)
# Where to write events to
writer = ConnectionWriter(ui_fds[0])
self.event_handle = bb.event.register_UIHhandler(writer, True)
self.event_writer = writer
# Where to read commands from
reader = ConnectionReader(ui_fds[1])
fds.append(reader)
self.command_channel = reader
# Where to send command return values to
writer = ConnectionWriter(ui_fds[2])
self.command_channel_reply = writer
self.haveui = True
except (EOFError, OSError):
disconnect_client(self, fds)
if not self.timeout == -1.0 and not self.haveui and self.lastui and self.timeout and \
(self.lastui + self.timeout) < time.time():
print("Server timeout, exiting.")
self.quit = True
if self.command_channel in ready:
try:
command = self.command_channel.get()
except EOFError:
# Client connection shutting down
ready = []
disconnect_client(self, fds)
continue
if command[0] == "terminateServer":
try:
if self.command_channel.poll():
command = self.command_channel.recv()
self.runCommand(command)
if self.quitout.poll():
self.quitout.recv()
self.quit = True
continue
try:
print("Running command %s" % command)
self.command_channel_reply.send(self.cooker.command.runCommand(command))
except Exception as e:
logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
if self.xmlrpc in ready:
self.xmlrpc.handle_requests()
ready = self.idle_commands(.1, fds)
print("Exiting")
# Remove the socket file so we don't get any more connections to avoid races
os.unlink(self.sockname)
self.sock.close()
try:
self.cooker.shutdown(True)
self.cooker.notifier.stop()
self.cooker.confignotifier.stop()
except:
pass
self.cooker.post_serve()
# Finally release the lockfile but warn about other processes holding it open
lock = self.bitbake_lock
lockfile = lock.name
lock.close()
lock = None
while not lock:
with bb.utils.timeout(3):
lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=True)
if not lock:
# Some systems may not have lsof available
procs = None
try:
procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
except OSError as e:
if e.errno != errno.ENOENT:
raise
if procs is None:
# Fall back to fuser if lsof is unavailable
try:
procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
except OSError as e:
if e.errno != errno.ENOENT:
raise
self.runCommand(["stateForceShutdown"])
except:
pass
msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
if procs:
msg += ":\n%s" % str(procs)
print(msg)
return
# We hold the lock so we can remove the file (hide stale pid data)
bb.utils.remove(lockfile)
bb.utils.unlockfile(lock)
self.idle_commands(.1, [self.command_channel, self.quitout])
except Exception:
logger.exception('Running command %s', command)
self.event_queue.close()
bb.event.unregister_UIHhandler(self.event_handle.value)
self.command_channel.close()
self.cooker.shutdown(True)
self.quitout.close()
def idle_commands(self, delay, fds=None):
nextsleep = delay
if not fds:
fds = []
for function, data in list(self._idlefuns.items()):
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
if retval is False:
@@ -277,7 +145,7 @@ class ProcessServer(multiprocessing.Process):
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
elif isinstance(retval, float):
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
@@ -292,334 +160,109 @@ class ProcessServer(multiprocessing.Process):
del self._idlefuns[function]
self.quit = True
# Create new heartbeat event?
now = time.time()
if now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
heartbeat = bb.event.HeartbeatEvent(now)
bb.event.fire(heartbeat, self.cooker.data)
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten the timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
if self.xmlrpc:
nextsleep = self.xmlrpc.get_timeout(nextsleep)
try:
return select.select(fds,[],[],nextsleep)[0]
except InterruptedError:
# Ignore EINTR
return []
else:
return select.select(fds,[],[],0)[0]
class ServerCommunicator():
def __init__(self, connection, recv):
self.connection = connection
self.recv = recv
select.select(fds,[],[],nextsleep)
def runCommand(self, command):
self.connection.send(command)
if not self.recv.poll(30):
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server")
return self.recv.get()
"""
Run a cooker command on the server
"""
self.command_channel.send(self.cooker.command.runCommand(command))
def updateFeatureSet(self, featureset):
_, error = self.runCommand(["setFeatures", featureset])
def stop(self):
self.quitin.send("quit")
self.quitin.close()
class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
def __init__(self, serverImpl, ui_channel, event_queue):
self.procserver = serverImpl
self.ui_channel = ui_channel
self.event_queue = event_queue
self.connection = ServerCommunicator(self.ui_channel, self.procserver.event_handle, self.procserver)
self.events = self.event_queue
self.terminated = False
def sigterm_terminate(self):
bb.error("UI received SIGTERM")
self.terminate()
def terminate(self):
if self.terminated:
return
self.terminated = True
def flushevents():
while True:
try:
event = self.event_queue.get(block=False)
except (Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.procserver.stop()
while self.procserver.is_alive():
flushevents()
self.procserver.join(0.1)
self.ui_channel.close()
self.event_queue.close()
self.event_queue.setexit()
# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
def __init__(self, maxsize):
multiprocessing.queues.Queue.__init__(self, maxsize)
self.exit = False
bb.utils.set_process_name("ProcessEQueue")
def setexit(self):
self.exit = True
def waitEvent(self, timeout):
if self.exit:
sys.exit(1)
try:
if not self.server.is_alive():
self.setexit()
return None
return self.get(True, timeout)
except Empty:
return None
def getEvent(self):
try:
if not self.server.is_alive():
self.setexit()
return None
return self.get(False)
except Empty:
return None
class BitBakeServer(BitBakeBaseServer):
def initServer(self, single_use=True):
# establish communication channels. We use bidirectional pipes for
# ui <--> server command/response pairs
# and a queue for server -> ui event notifications
#
self.ui_channel, self.server_channel = Pipe()
self.event_queue = ProcessEventQueue(0)
self.serverImpl = ProcessServer(self.server_channel, self.event_queue, None)
self.event_queue.server = self.serverImpl
def detach(self):
self.serverImpl.start()
return
def establishConnection(self, featureset):
self.connection = BitBakeProcessServerConnection(self.serverImpl, self.ui_channel, self.event_queue)
_, error = self.connection.connection.runCommand(["setFeatures", featureset])
if error:
logger.error("Unable to set the cooker to the correct featureset: %s" % error)
raise BaseException(error)
def getEventHandle(self):
handle, error = self.runCommand(["getUIHandlerNum"])
if error:
logger.error("Unable to get UI Handler Number: %s" % error)
raise BaseException(error)
return handle
def terminateServer(self):
self.connection.send(['terminateServer'])
return
class BitBakeProcessServerConnection(object):
def __init__(self, ui_channel, recv, eq, sock):
self.connection = ServerCommunicator(ui_channel, recv)
self.events = eq
# Save sock so it doesn't get gc'd for the life of our connection
self.socket_connection = sock
def terminate(self):
self.socket_connection.close()
self.connection.connection.close()
self.connection.recv.close()
return
class BitBakeServer(object):
start_log_format = '--- Starting bitbake server pid %s at %s ---'
start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
def __init__(self, lock, sockname, configuration, featureset):
self.configuration = configuration
self.featureset = featureset
self.sockname = sockname
self.bitbake_lock = lock
self.readypipe, self.readypipein = os.pipe()
# Create server control socket
if os.path.exists(sockname):
os.unlink(sockname)
# Place the log in the build directory alongside the lock file
logfile = os.path.join(os.path.dirname(self.bitbake_lock.name), "bitbake-cookerdaemon.log")
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# AF_UNIX has path length issues so chdir here as a workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(sockname))
self.sock.bind(os.path.basename(sockname))
finally:
os.chdir(cwd)
self.sock.listen(1)
os.set_inheritable(self.sock.fileno(), True)
startdatetime = datetime.datetime.now()
bb.daemonize.createDaemon(self._startServer, logfile)
self.sock.close()
self.bitbake_lock.close()
os.close(self.readypipein)
ready = ConnectionReader(self.readypipe)
r = ready.poll(30)
if r:
try:
r = ready.get()
except EOFError:
# Trap the child exiting/closing the pipe and error out
r = None
if not r or r != "ready":
ready.close()
bb.error("Unable to start bitbake server")
if os.path.exists(logfile):
logstart_re = re.compile(self.start_log_format % ('([0-9]+)', '([0-9-]+ [0-9:.]+)'))
started = False
lines = []
with open(logfile, "r") as f:
for line in f:
if started:
lines.append(line)
else:
res = logstart_re.match(line.rstrip())
if res:
ldatetime = datetime.datetime.strptime(res.group(2), self.start_log_datetime_format)
if ldatetime >= startdatetime:
started = True
lines.append(line)
if lines:
if len(lines) > 10:
bb.error("Last 10 lines of server log for this session (%s):\n%s" % (logfile, "".join(lines[-10:])))
else:
bb.error("Server log for this session (%s):\n%s" % (logfile, "".join(lines)))
raise SystemExit(1)
ready.close()
def _startServer(self):
print(self.start_log_format % (os.getpid(), datetime.datetime.now().strftime(self.start_log_datetime_format)))
server = ProcessServer(self.bitbake_lock, self.sock, self.sockname)
self.configuration.setServerRegIdleCallback(server.register_idle_function)
os.close(self.readypipe)
writer = ConnectionWriter(self.readypipein)
self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
writer.send("ready")
writer.close()
server.cooker = self.cooker
server.server_timeout = self.configuration.server_timeout
server.xmlrpcinterface = self.configuration.xmlrpcinterface
print("Started bitbake server pid %d" % os.getpid())
server.start()
def connectProcessServer(sockname, featureset):
# Connect to socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# AF_UNIX has path length issues so chdir here as a workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(sockname))
sock.connect(os.path.basename(sockname))
finally:
os.chdir(cwd)
readfd = writefd = readfd1 = writefd1 = readfd2 = writefd2 = None
eq = command_chan_recv = command_chan = None
try:
# Send an fd for the remote to write events to
readfd, writefd = os.pipe()
eq = BBUIEventQueue(readfd)
# Send an fd for the remote to receive commands from
readfd1, writefd1 = os.pipe()
command_chan = ConnectionWriter(writefd1)
# Send an fd for the remote to write command results to
readfd2, writefd2 = os.pipe()
command_chan_recv = ConnectionReader(readfd2)
sendfds(sock, [writefd, readfd1, writefd2])
server_connection = BitBakeProcessServerConnection(command_chan, command_chan_recv, eq, sock)
# Close the ends of the pipes we won't use
for i in [writefd, readfd1, writefd2]:
os.close(i)
server_connection.connection.updateFeatureSet(featureset)
except (Exception, SystemExit) as e:
if command_chan_recv:
command_chan_recv.close()
if command_chan:
command_chan.close()
for i in [writefd, readfd1, writefd2]:
try:
os.close(i)
except OSError:
pass
sock.close()
raise
return server_connection
def sendfds(sock, fds):
'''Send an array of fds over an AF_UNIX socket.'''
fds = array.array('i', fds)
msg = bytes([len(fds) % 256])
sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
def recvfds(sock, size):
'''Receive an array of fds over an AF_UNIX socket.'''
a = array.array('i')
bytes_size = a.itemsize * size
msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
if not msg and not ancdata:
raise EOFError
try:
if len(ancdata) != 1:
raise RuntimeError('received %d items of ancdata' %
len(ancdata))
cmsg_level, cmsg_type, cmsg_data = ancdata[0]
if (cmsg_level == socket.SOL_SOCKET and
cmsg_type == socket.SCM_RIGHTS):
if len(cmsg_data) % a.itemsize != 0:
raise ValueError
a.frombytes(cmsg_data)
assert len(a) % 256 == msg[0]
return list(a)
except (ValueError, IndexError):
pass
raise RuntimeError('Invalid data received')
class BBUIEventQueue:
def __init__(self, readfd):
self.eventQueue = []
self.eventQueueLock = threading.Lock()
self.eventQueueNotify = threading.Event()
self.reader = ConnectionReader(readfd)
self.t = threading.Thread()
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
self.eventQueueLock.acquire()
if len(self.eventQueue) == 0:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
if len(self.eventQueue) == 0:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
return item
def waitEvent(self, delay):
self.eventQueueNotify.wait(delay)
return self.getEvent()
def queue_event(self, event):
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))
def startCallbackHandler(self):
bb.utils.set_process_name("UIEventQueue")
while True:
try:
self.reader.wait()
event = self.reader.get()
self.queue_event(event)
except EOFError:
# The other end closing the file descriptor raises EOFError, which is the easiest way to exit
break
self.reader.close()
class ConnectionReader(object):
def __init__(self, fd):
self.reader = multiprocessing.connection.Connection(fd, writable=False)
self.rlock = multiprocessing.Lock()
def wait(self, timeout=None):
return multiprocessing.connection.wait([self.reader], timeout)
def poll(self, timeout=None):
return self.reader.poll(timeout)
def get(self):
with self.rlock:
res = self.reader.recv_bytes()
return multiprocessing.reduction.ForkingPickler.loads(res)
def fileno(self):
return self.reader.fileno()
def close(self):
return self.reader.close()
class ConnectionWriter(object):
def __init__(self, fd):
self.writer = multiprocessing.connection.Connection(fd, readable=False)
self.wlock = multiprocessing.Lock()
# Why bb.event needs this I have no idea
self.event = self
def send(self, obj):
obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
with self.wlock:
self.writer.send_bytes(obj)
def fileno(self):
return self.writer.fileno()
def close(self):
return self.writer.close()
signal.signal(signal.SIGTERM, lambda i, s: self.connection.sigterm_terminate())
return self.connection
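
The sendfds()/recvfds() helpers above implement SCM_RIGHTS file descriptor passing over AF_UNIX sockets. A minimal, self-contained sketch of that mechanism using a socketpair within one process (the server itself wires this up across processes):

import os
import socket

def demo_fd_passing():
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    rfd, wfd = os.pipe()
    sendfds(a, [wfd])             # duplicates wfd into the receiving end
    [wfd2] = recvfds(b, 1)
    os.write(wfd2, b"ping")
    assert os.read(rfd, 4) == b"ping"
    for fd in (rfd, wfd, wfd2):
        os.close(fd)
    a.close()
    b.close()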


@@ -0,0 +1,390 @@
#
# BitBake XMLRPC Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements an xmlrpc server for BitBake.
Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then those functions are registered automatically;
if not, you need to call register_function.
Use register_idle_function() to add a function which the xmlrpc server
calls from within serve_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""
import bb
import xmlrpclib, sys
from bb import daemonize
from bb.ui import uievent
import hashlib, time
import socket
import os, signal
import threading
try:
import cPickle as pickle
except ImportError:
import pickle
DEBUG = False
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select, httplib
from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
class BBTransport(xmlrpclib.Transport):
def __init__(self, timeout):
self.timeout = timeout
self.connection_token = None
xmlrpclib.Transport.__init__(self)
# Modified from default to pass timeout to HTTPConnection
def make_connection(self, host):
#return an existing connection if possible. This allows
#HTTP/1.1 keep-alive.
if self._connection and host == self._connection[0]:
return self._connection[1]
# create a HTTP connection object from a host descriptor
chost, self._extra_headers, x509 = self.get_host_info(host)
#store the host argument along with the connection object
self._connection = host, httplib.HTTPConnection(chost, timeout=self.timeout)
return self._connection[1]
def set_connection_token(self, token):
self.connection_token = token
def send_content(self, h, body):
if self.connection_token:
h.putheader("Bitbake-token", self.connection_token)
xmlrpclib.Transport.send_content(self, h, body)
def _create_server(host, port, timeout = 60):
t = BBTransport(timeout)
s = xmlrpclib.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True)
return s, t
class BitBakeServerCommands():
def __init__(self, server):
self.server = server
self.has_client = False
def registerEventHandler(self, host, port):
"""
Register a remote UI Event Handler
"""
s, t = _create_server(host, port)
# we don't allow connections if the cooker is running
if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.cooker.state)
self.event_handle = bb.event.register_UIHhandler(s, True)
return self.event_handle, 'OK'
def unregisterEventHandler(self, handlerNum):
"""
Unregister a remote UI Event Handler
"""
return bb.event.unregister_UIHhandler(handlerNum)
def runCommand(self, command):
"""
Run a cooker command on the server
"""
return self.cooker.command.runCommand(command, self.server.readonly)
def getEventHandle(self):
return self.event_handle
def terminateServer(self):
"""
Trigger the server to quit
"""
self.server.quit = True
print("Server (cooker) exiting")
return
def addClient(self):
if self.has_client:
return None
token = hashlib.md5(str(time.time())).hexdigest()
self.server.set_connection_token(token)
self.has_client = True
return token
def removeClient(self):
if self.has_client:
self.server.set_connection_token(None)
self.has_client = False
if self.server.single_use:
self.server.quit = True
# This request handler checks if the request has a "Bitbake-token" header
# field (this comes from the client side) and compares it with its internal
# "Bitbake-token" field (this comes from the server). If the two are not
# equal, it is assumed that a client is trying to connect to the server
# while another client is connected to the server. In this case, a 503 error
# ("service unavailable") is returned to the client.
class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
def __init__(self, request, client_address, server):
self.server = server
SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
def do_POST(self):
try:
remote_token = self.headers["Bitbake-token"]
except:
remote_token = None
if remote_token != self.server.connection_token and remote_token != "observer":
self.report_503()
else:
if remote_token == "observer":
self.server.readonly = True
else:
self.server.readonly = False
SimpleXMLRPCRequestHandler.do_POST(self)
def report_503(self):
self.send_response(503)
response = 'No more client allowed'
self.send_header("Content-type", "text/plain")
self.send_header("Content-length", str(len(response)))
self.end_headers()
self.wfile.write(response)
class XMLRPCProxyServer(BaseImplServer):
""" not a real working server, but a stub for a proxy server connection
"""
def __init__(self, host, port):
self.host = host
self.port = port
class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface, single_use=False):
"""
Constructor
"""
BaseImplServer.__init__(self)
self.single_use = single_use
# Use auto port configuration
if (interface[1] == -1):
interface = (interface[0], 0)
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=BitBakeXMLRPCRequestHandler,
logRequests=False, allow_none=True)
self.host, self.port = self.socket.getsockname()
self.connection_token = None
#self.register_introspection_functions()
self.commands = BitBakeServerCommands(self)
self.autoregister_all_functions(self.commands, "")
self.interface = interface
def addcooker(self, cooker):
BaseImplServer.addcooker(self, cooker)
self.commands.cooker = cooker
def autoregister_all_functions(self, context, prefix):
"""
Convenience method for registering all functions in the scope
of this class that start with a common prefix
"""
methodlist = inspect.getmembers(context, inspect.ismethod)
for name, method in methodlist:
if name.startswith(prefix):
self.register_function(method, name[len(prefix):])
def serve_forever(self):
# Start the actual XMLRPC server
bb.cooker.server_main(self.cooker, self._serve_forever)
def _serve_forever(self):
"""
Serve Requests. Overloaded to honor a quit command
"""
self.quit = False
while not self.quit:
fds = [self]
nextsleep = 0.1
for function, data in self._idlefuns.items():
retval = None
try:
retval = function(self, data, False)
if retval is False:
del self._idlefuns[function]
elif retval is True:
nextsleep = 0
elif isinstance(retval, float):
if (retval < nextsleep):
nextsleep = retval
else:
fds = fds + retval
except SystemExit:
raise
except:
import traceback
traceback.print_exc()
if retval == None:
# the function execution failed; delete it
del self._idlefuns[function]
pass
socktimeout = self.socket.gettimeout() or nextsleep
socktimeout = min(socktimeout, nextsleep)
# Mirror what BaseServer handle_request would do
try:
fd_sets = select.select(fds, [], [], socktimeout)
if fd_sets[0] and self in fd_sets[0]:
self._handle_request_noblock()
except IOError:
# we ignore interrupted calls
pass
# Tell idle functions we're exiting
for function, data in self._idlefuns.items():
try:
retval = function(self, data, True)
except:
pass
self.server_close()
return
def set_connection_token(self, token):
self.connection_token = token
class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = None):
self.connection, self.transport = _create_server(serverImpl.host, serverImpl.port)
self.clientinfo = clientinfo
self.serverImpl = serverImpl
self.observer_only = observer_only
if featureset:
self.featureset = featureset
else:
self.featureset = []
def connect(self, token = None):
if token is None:
if self.observer_only:
token = "observer"
else:
token = self.connection.addClient()
if token is None:
return None
self.transport.set_connection_token(token)
return self
def setupEventQueue(self):
self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
for event in bb.event.ui_queue:
self.events.queue_event(event)
_, error = self.connection.runCommand(["setFeatures", self.featureset])
if error:
# disconnect the client, we can't make setFeatures work
self.connection.removeClient()
# no need to log it here, the error is sent back to the client
raise BaseException(error)
def removeClient(self):
if not self.observer_only:
self.connection.removeClient()
def terminate(self):
# Don't wait for server indefinitely
import socket
socket.setdefaulttimeout(2)
try:
self.events.system_quit()
except:
pass
try:
self.connection.removeClient()
except:
pass
class BitBakeServer(BitBakeBaseServer):
def initServer(self, interface = ("localhost", 0), single_use = False):
self.interface = interface
self.serverImpl = XMLRPCServer(interface, single_use)
def detach(self):
daemonize.createDaemon(self.serverImpl.serve_forever, "bitbake-cookerdaemon.log")
del self.cooker
def establishConnection(self, featureset):
self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, self.interface, False, featureset)
return self.connection.connect()
def set_connection_token(self, token):
self.connection.transport.set_connection_token(token)
class BitBakeXMLRPCClient(BitBakeBaseServer):
def __init__(self, observer_only = False, token = None):
self.token = token
self.observer_only = observer_only
# if we need extra caches, just tell the server to load them all
pass
def saveConnectionDetails(self, remote):
self.remote = remote
def establishConnection(self, featureset):
# The format of "remote" must be "server:port"
try:
[host, port] = self.remote.split(":")
port = int(port)
except Exception as e:
bb.warn("Failed to read remote definition (%s)" % str(e))
raise e
# We need our IP for the server connection. We get the IP
# by trying to connect with the server
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((host, port))
ip = s.getsockname()[0]
s.close()
except Exception as e:
bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
raise e
try:
self.serverImpl = XMLRPCProxyServer(host, port)
self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, (ip, 0), self.observer_only, featureset)
return self.connection.connect(self.token)
except Exception as e:
bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
raise e
def endSession(self):
self.connection.removeClient()
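
A hedged sketch of the client-side handshake that the Bitbake-token request handler above enforces; the host, port and command are placeholders, not values from this diff:

server, transport = _create_server("localhost", 8888)
token = server.addClient()          # None if another client is already attached
if token is not None:
    transport.set_connection_token(token)    # sent as the Bitbake-token header
    result = server.runCommand(["getVariable", "BB_VERSION"])
    server.removeClient()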


@@ -1,154 +0,0 @@
#
# BitBake XMLRPC Client Interface
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import socket
import http.client
import xmlrpc.client
import bb
from bb.ui import uievent
class BBTransport(xmlrpc.client.Transport):
def __init__(self, timeout):
self.timeout = timeout
self.connection_token = None
xmlrpc.client.Transport.__init__(self)
# Modified from default to pass timeout to HTTPConnection
def make_connection(self, host):
#return an existing connection if possible. This allows
#HTTP/1.1 keep-alive.
if self._connection and host == self._connection[0]:
return self._connection[1]
# create a HTTP connection object from a host descriptor
chost, self._extra_headers, x509 = self.get_host_info(host)
#store the host argument along with the connection object
self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
return self._connection[1]
def set_connection_token(self, token):
self.connection_token = token
def send_content(self, h, body):
if self.connection_token:
h.putheader("Bitbake-token", self.connection_token)
xmlrpc.client.Transport.send_content(self, h, body)
def _create_server(host, port, timeout = 60):
t = BBTransport(timeout)
s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
return s, t
def check_connection(remote, timeout):
try:
host, port = remote.split(":")
port = int(port)
except Exception as e:
bb.warn("Failed to read remote definition (%s)" % str(e))
raise e
server, _transport = _create_server(host, port, timeout)
try:
ret, err = server.runCommand(['getVariable', 'TOPDIR'])
if err or not ret:
return False
except ConnectionError:
return False
return True
class BitBakeXMLRPCServerConnection(object):
def __init__(self, host, port, clientinfo=("localhost", 0), observer_only = False, featureset = None):
self.connection, self.transport = _create_server(host, port)
self.clientinfo = clientinfo
self.observer_only = observer_only
if featureset:
self.featureset = featureset
else:
self.featureset = []
self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
_, error = self.connection.runCommand(["setFeatures", self.featureset])
if error:
# disconnect the client, we can't make setFeatures work
self.connection.removeClient()
# no need to log it here, the error is sent back to the client
raise BaseException(error)
def connect(self, token = None):
if token is None:
if self.observer_only:
token = "observer"
else:
token = self.connection.addClient()
if token is None:
return None
self.transport.set_connection_token(token)
return self
def removeClient(self):
if not self.observer_only:
self.connection.removeClient()
def terminate(self):
# Don't wait for server indefinitely
socket.setdefaulttimeout(2)
try:
self.events.system_quit()
except:
pass
try:
self.connection.removeClient()
except:
pass
def connectXMLRPC(remote, featureset, observer_only = False, token = None):
# The format of "remote" must be "server:port"
try:
[host, port] = remote.split(":")
port = int(port)
except Exception as e:
bb.warn("Failed to parse remote definition %s (%s)" % (remote, str(e)))
raise e
# We need our IP for the server connection. We get the IP
# by trying to connect with the server
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((host, port))
ip = s.getsockname()[0]
s.close()
except Exception as e:
bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
raise e
try:
connection = BitBakeXMLRPCServerConnection(host, port, (ip, 0), observer_only, featureset)
return connection.connect(token)
except Exception as e:
bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
raise e


@@ -1,158 +0,0 @@
#
# BitBake XMLRPC Server Interface
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import hashlib
import time
import inspect
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import bb
# This request handler checks if the request has a "Bitbake-token" header
# field (this comes from the client side) and compares it with its internal
# "Bitbake-token" field (this comes from the server). If the two are not
# equal, it is assumed that a client is trying to connect to the server
# while another client is connected to the server. In this case, a 503 error
# ("service unavailable") is returned to the client.
class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
def __init__(self, request, client_address, server):
self.server = server
SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
def do_POST(self):
try:
remote_token = self.headers["Bitbake-token"]
except:
remote_token = None
if 0 and remote_token != self.server.connection_token and remote_token != "observer":
self.report_503()
else:
if remote_token == "observer":
self.server.readonly = True
else:
self.server.readonly = False
SimpleXMLRPCRequestHandler.do_POST(self)
def report_503(self):
self.send_response(503)
response = 'No more client allowed'
self.send_header("Content-type", "text/plain")
self.send_header("Content-length", str(len(response)))
self.end_headers()
self.wfile.write(bytes(response, 'utf-8'))
class BitBakeXMLRPCServer(SimpleXMLRPCServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface, cooker, parent):
# Use auto port configuration
if (interface[1] == -1):
interface = (interface[0], 0)
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=BitBakeXMLRPCRequestHandler,
logRequests=False, allow_none=True)
self.host, self.port = self.socket.getsockname()
self.interface = interface
self.connection_token = None
self.commands = BitBakeXMLRPCServerCommands(self)
self.register_functions(self.commands, "")
self.cooker = cooker
self.parent = parent
def register_functions(self, context, prefix):
"""
Convenience method for registering all functions in the scope
of this class that start with a common prefix
"""
methodlist = inspect.getmembers(context, inspect.ismethod)
for name, method in methodlist:
if name.startswith(prefix):
self.register_function(method, name[len(prefix):])
def get_timeout(self, delay):
socktimeout = self.socket.gettimeout() or delay
return min(socktimeout, delay)
def handle_requests(self):
self._handle_request_noblock()
class BitBakeXMLRPCServerCommands():
def __init__(self, server):
self.server = server
self.has_client = False
def registerEventHandler(self, host, port):
"""
Register a remote UI Event Handler
"""
s, t = bb.server.xmlrpcclient._create_server(host, port)
# we don't allow connections if the cooker is running
if (self.server.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.server.cooker.state)
self.event_handle = bb.event.register_UIHhandler(s, True)
return self.event_handle, 'OK'
def unregisterEventHandler(self, handlerNum):
"""
Unregister a remote UI Event Handler
"""
ret = bb.event.unregister_UIHhandler(handlerNum, True)
self.event_handle = None
return ret
def runCommand(self, command):
"""
Run a cooker command on the server
"""
return self.server.cooker.command.runCommand(command, self.server.readonly)
def getEventHandle(self):
return self.event_handle
def terminateServer(self):
"""
Trigger the server to quit
"""
self.server.parent.quit = True
print("XMLRPC Server triggering exit")
return
def addClient(self):
if self.server.parent.haveui:
return None
token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
self.server.connection_token = token
self.server.parent.haveui = True
return token
def removeClient(self):
if self.server.parent.haveui:
self.server.connection_token = None
self.server.parent.haveui = False

bitbake/lib/bb/shell.py (new file, 820 lines)
@@ -0,0 +1,820 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
##########################################################################
#
# Copyright (C) 2005-2006 Michael 'Mickey' Lauer <mickey@Vanille.de>
# Copyright (C) 2005-2006 Vanille Media
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
##########################################################################
#
# Thanks to:
# * Holger Freyther <zecke@handhelds.org>
# * Justin Patrin <papercrane@reversefold.com>
#
##########################################################################
"""
BitBake Shell
IDEAS:
* list defined tasks per package
* list classes
* toggle force
* command to reparse just one (or more) bbfile(s)
* automatic check if reparsing is necessary (inotify?)
* frontend for bb file manipulation
* more shell-like features:
- output control, i.e. pipe output into grep, sort, etc.
- job control, i.e. bring running commands into background and foreground
* start parsing in background right after startup
* ncurses interface
PROBLEMS:
* force doesn't always work
* readline completion for commands with more than one parameter
"""
##########################################################################
# Import and setup global variables
##########################################################################
from __future__ import print_function
from functools import reduce
try:
set
except NameError:
from sets import Set as set
import sys, os, readline, socket, httplib, urllib, commands, popen2, shlex, Queue, fnmatch
from bb import data, parse, build, cache, taskdata, runqueue, providers as Providers
__version__ = "0.5.3.1"
__credits__ = """BitBake Shell Version %s (C) 2005 Michael 'Mickey' Lauer <mickey@Vanille.de>
Type 'help' for more information, press CTRL-D to exit.""" % __version__
cmds = {}
leave_mainloop = False
last_exception = None
cooker = None
parsed = False
debug = os.environ.get( "BBSHELL_DEBUG", "" )
##########################################################################
# Class BitBakeShellCommands
##########################################################################
class BitBakeShellCommands:
"""This class contains the valid commands for the shell"""
def __init__( self, shell ):
"""Register all the commands"""
self._shell = shell
for attr in BitBakeShellCommands.__dict__:
if not attr.startswith( "_" ):
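# a trailing "_" lets a method provide a command that would otherwise
# shadow a Python keyword or builtin (e.g. exit_ -> "exit", print_ -> "print")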
if attr.endswith( "_" ):
command = attr[:-1].lower()
else:
command = attr[:].lower()
method = getattr( BitBakeShellCommands, attr )
debugOut( "registering command '%s'" % command )
# scan number of arguments
usage = getattr( method, "usage", "" )
if usage != "<...>":
numArgs = len( usage.split() )
else:
numArgs = -1
shell.registerCommand( command, method, numArgs, "%s %s" % ( command, usage ), method.__doc__ )
def _checkParsed( self ):
if not parsed:
print("SHELL: This command needs to parse bbfiles...")
self.parse( None )
def _findProvider( self, item ):
self._checkParsed()
# Need to use taskData for this information
preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
except KeyError:
if item in cooker.status.providers:
pf = cooker.status.providers[item][0]
else:
pf = None
return pf
def alias( self, params ):
"""Register a new name for a command"""
new, old = params
if not old in cmds:
print("ERROR: Command '%s' not known" % old)
else:
cmds[new] = cmds[old]
print("OK")
alias.usage = "<alias> <command>"
def buffer( self, params ):
"""Dump specified output buffer"""
index = params[0]
print(self._shell.myout.buffer( int( index ) ))
buffer.usage = "<index>"
def buffers( self, params ):
"""Show the available output buffers"""
commands = self._shell.myout.bufferedCommands()
if not commands:
print("SHELL: No buffered commands available yet. Start doing something.")
else:
print("="*35, "Available Output Buffers", "="*27)
for index, cmd in enumerate( commands ):
print("| %s %s" % ( str( index ).ljust( 3 ), cmd ))
print("="*88)
def build( self, params, cmd = "build" ):
"""Build a providee"""
global last_exception
globexpr = params[0]
self._checkParsed()
names = globfilter( cooker.status.pkg_pn, globexpr )
if len( names ) == 0: names = [ globexpr ]
print("SHELL: Building %s" % ' '.join( names ))
td = taskdata.TaskData(cooker.configuration.abort)
localdata = data.createCopy(cooker.configuration.data)
data.update_data(localdata)
data.expandKeys(localdata)
try:
tasks = []
for name in names:
td.add_provider(localdata, cooker.status, name)
providers = td.get_provider(name)
if len(providers) == 0:
raise Providers.NoProvider
tasks.append([name, "do_%s" % cmd])
td.add_unresolved(localdata, cooker.status)
rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
rq.prepare_runqueue()
rq.execute_runqueue()
except Providers.NoProvider:
print("ERROR: No Provider")
last_exception = Providers.NoProvider
except runqueue.TaskFailure as fnids:
last_exception = runqueue.TaskFailure
except build.FuncFailed as e:
print("ERROR: Couldn't build '%s'" % names)
last_exception = e
build.usage = "<providee>"
def clean( self, params ):
"""Clean a providee"""
self.build( params, "clean" )
clean.usage = "<providee>"
def compile( self, params ):
"""Execute 'compile' on a providee"""
self.build( params, "compile" )
compile.usage = "<providee>"
def configure( self, params ):
"""Execute 'configure' on a providee"""
self.build( params, "configure" )
configure.usage = "<providee>"
def install( self, params ):
"""Execute 'install' on a providee"""
self.build( params, "install" )
install.usage = "<providee>"
def edit( self, params ):
"""Call $EDITOR on a providee"""
name = params[0]
bbfile = self._findProvider( name )
if bbfile is not None:
os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), bbfile ) )
else:
print("ERROR: Nothing provides '%s'" % name)
edit.usage = "<providee>"
def environment( self, params ):
"""Dump out the outer BitBake environment"""
cooker.showEnvironment()
def exit_( self, params ):
"""Leave the BitBake Shell"""
debugOut( "setting leave_mainloop to true" )
global leave_mainloop
leave_mainloop = True
def fetch( self, params ):
"""Fetch a providee"""
self.build( params, "fetch" )
fetch.usage = "<providee>"
def fileBuild( self, params, cmd = "build" ):
"""Parse and build a .bb file"""
global last_exception
name = params[0]
bf = completeFilePath( name )
print("SHELL: Calling '%s' on '%s'" % ( cmd, bf ))
try:
cooker.buildFile(bf, cmd)
except parse.ParseError:
print("ERROR: Unable to open or parse '%s'" % bf)
except build.FuncFailed as e:
print("ERROR: Couldn't build '%s'" % name)
last_exception = e
fileBuild.usage = "<bbfile>"
def fileClean( self, params ):
"""Clean a .bb file"""
self.fileBuild( params, "clean" )
fileClean.usage = "<bbfile>"
def fileEdit( self, params ):
"""Call $EDITOR on a .bb file"""
name = params[0]
os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), completeFilePath( name ) ) )
fileEdit.usage = "<bbfile>"
def fileRebuild( self, params ):
"""Rebuild (clean & build) a .bb file"""
self.fileBuild( params, "rebuild" )
fileRebuild.usage = "<bbfile>"
def fileReparse( self, params ):
"""(re)Parse a bb file"""
bbfile = params[0]
print("SHELL: Parsing '%s'" % bbfile)
parse.update_mtime( bbfile )
cooker.parser.reparse(bbfile)
if False: #fromCache:
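# the cache status ("fromCache") is not available here, so this branch is
# permanently disabled and the file is always reported as parsed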
print("SHELL: File has not been updated, not reparsing")
else:
print("SHELL: Parsed")
fileReparse.usage = "<bbfile>"
def abort( self, params ):
"""Toggle abort task execution flag (see bitbake -k)"""
cooker.configuration.abort = not cooker.configuration.abort
print("SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort ))
def force( self, params ):
"""Toggle force task execution flag (see bitbake -f)"""
cooker.configuration.force = not cooker.configuration.force
print("SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force ))
def help( self, params ):
"""Show a comprehensive list of commands and their purpose"""
print("="*30, "Available Commands", "="*30)
for cmd in sorted(cmds):
function, numparams, usage, helptext = cmds[cmd]
print("| %s | %s" % (usage.ljust(30), helptext))
print("="*78)
def lastError( self, params ):
"""Show the reason or log that was produced by the last BitBake event exception"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Dumping log file for last error:")
try:
print(open( filename ).read())
except IOError:
print("ERROR: Couldn't open '%s'" % filename)
def match( self, params ):
"""Dump all files or providers matching a glob expression"""
what, globexpr = params
if what == "files":
self._checkParsed()
for key in globfilter( cooker.status.pkg_fn, globexpr ): print(key)
elif what == "providers":
self._checkParsed()
for key in globfilter( cooker.status.pkg_pn, globexpr ): print(key)
else:
print("Usage: match %s" % self.print_.usage)
match.usage = "<files|providers> <glob>"
def new( self, params ):
"""Create a new .bb file and open the editor"""
dirname, filename = params
packages = '/'.join( data.getVar( "BBFILES", cooker.configuration.data, 1 ).split('/')[:-2] )
fulldirname = "%s/%s" % ( packages, dirname )
if not os.path.exists( fulldirname ):
print("SHELL: Creating '%s'" % fulldirname)
os.mkdir( fulldirname )
if os.path.exists( fulldirname ) and os.path.isdir( fulldirname ):
if os.path.exists( "%s/%s" % ( fulldirname, filename ) ):
print("SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename ))
return False
print("SHELL: Creating '%s/%s'" % ( fulldirname, filename ))
newpackage = open( "%s/%s" % ( fulldirname, filename ), "w" )
print("""DESCRIPTION = ""
SECTION = ""
AUTHOR = ""
HOMEPAGE = ""
MAINTAINER = ""
LICENSE = "GPL"
PR = "r0"
SRC_URI = ""
#inherit base
#do_configure() {
#
#}
#do_compile() {
#
#}
#do_stage() {
#
#}
#do_install() {
#
#}
""", file=newpackage)
newpackage.close()
os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
new.usage = "<directory> <filename>"
def package( self, params ):
"""Execute 'package' on a providee"""
self.build( params, "package" )
package.usage = "<providee>"
def pasteBin( self, params ):
"""Send a command + output buffer to the pastebin at http://rafb.net/paste"""
index = params[0]
contents = self._shell.myout.buffer( int( index ) )
sendToPastebin( "output of " + params[0], contents )
pasteBin.usage = "<index>"
def pasteLog( self, params ):
"""Send the last event exception error log (if there is one) to http://rafb.net/paste"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Pasting log file to pastebin...")
file = open( filename ).read()
sendToPastebin( "contents of " + filename, file )
def patch( self, params ):
"""Execute 'patch' command on a providee"""
self.build( params, "patch" )
patch.usage = "<providee>"
def parse( self, params ):
"""(Re-)parse .bb files and calculate the dependency graph"""
cooker.status = cache.CacheData(cooker.caches_array)
ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
cooker.status.ignored_dependencies = set( ignore.split() )
cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )
(filelist, masked) = cooker.collect_bbfiles()
cooker.parse_bbfiles(filelist, masked, cooker.myProgressCallback)
cooker.buildDepgraph()
global parsed
parsed = True
print()
def reparse( self, params ):
"""(re)Parse a providee's bb file"""
bbfile = self._findProvider( params[0] )
if bbfile is not None:
print("SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] ))
self.fileReparse( [ bbfile ] )
else:
print("ERROR: Nothing provides '%s'" % params[0])
reparse.usage = "<providee>"
def getvar( self, params ):
"""Dump the contents of an outer BitBake environment variable"""
var = params[0]
value = data.getVar( var, cooker.configuration.data, 1 )
print(value)
getvar.usage = "<variable>"
def peek( self, params ):
"""Dump contents of variable defined in providee's metadata"""
name, var = params
bbfile = self._findProvider( name )
if bbfile is not None:
the_data = cache.Cache.loadDataFull(bbfile, cooker.configuration.data)
value = the_data.getVar( var, 1 )
print(value)
else:
print("ERROR: Nothing provides '%s'" % name)
peek.usage = "<providee> <variable>"
def poke( self, params ):
"""Set contents of variable defined in providee's metadata"""
name, var, value = params
bbfile = self._findProvider( name )
if bbfile is not None:
print("ERROR: Sorry, this functionality is currently broken")
#d = cooker.pkgdata[bbfile]
#data.setVar( var, value, d )
# mark the change semi-persistent
#cooker.pkgdata.setDirty(bbfile, d)
#print "OK"
else:
print("ERROR: Nothing provides '%s'" % name)
poke.usage = "<providee> <variable> <value>"
def print_( self, params ):
"""Dump all files or providers"""
what = params[0]
if what == "files":
self._checkParsed()
for key in cooker.status.pkg_fn: print(key)
elif what == "providers":
self._checkParsed()
for key in cooker.status.providers: print(key)
else:
print("Usage: print %s" % self.print_.usage)
print_.usage = "<files|providers>"
def python( self, params ):
"""Enter the expert mode - an interactive BitBake Python Interpreter"""
sys.ps1 = "EXPERT BB>>> "
sys.ps2 = "EXPERT BB... "
import code
interpreter = code.InteractiveConsole( dict( globals() ) )
interpreter.interact( "SHELL: Expert Mode - BitBake Python %s\nType 'help' for more information, press CTRL-D to switch back to BBSHELL." % sys.version )
def showdata( self, params ):
"""Execute 'showdata' on a providee"""
cooker.showEnvironment(None, params)
showdata.usage = "<providee>"
def setVar( self, params ):
"""Set an outer BitBake environment variable"""
var, value = params
data.setVar( var, value, cooker.configuration.data )
print("OK")
setVar.usage = "<variable> <value>"
def rebuild( self, params ):
"""Clean and rebuild a .bb file or a providee"""
self.build( params, "clean" )
self.build( params, "build" )
rebuild.usage = "<providee>"
def shell( self, params ):
"""Execute a shell command and dump the output"""
if params != "":
print(commands.getoutput( " ".join( params ) ))
shell.usage = "<...>"
def stage( self, params ):
"""Execute 'stage' on a providee"""
self.build( params, "populate_staging" )
stage.usage = "<providee>"
def status( self, params ):
"""<just for testing>"""
print("-" * 78)
print("building list = '%s'" % cooker.building_list)
print("build path = '%s'" % cooker.build_path)
print("consider_msgs_cache = '%s'" % cooker.consider_msgs_cache)
print("build stats = '%s'" % cooker.stats)
if last_exception is not None: print("last_exception = '%s'" % repr( last_exception.args ))
print("memory output contents = '%s'" % self._shell.myout._buffer)
def test( self, params ):
"""<just for testing>"""
print("testCommand called with '%s'" % params)
def unpack( self, params ):
"""Execute 'unpack' on a providee"""
self.build( params, "unpack" )
unpack.usage = "<providee>"
def which( self, params ):
"""Computes the providers for a given providee"""
# Need to use taskData for this information
item = params[0]
self._checkParsed()
preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
except KeyError:
lv, lf, pv, pf = (None,)*4
try:
providers = cooker.status.providers[item]
except KeyError:
print("SHELL: ERROR: Nothing provides", preferred)
else:
for provider in providers:
if provider == pf: provider = " (***) %s" % provider
else: provider = " %s" % provider
print(provider)
which.usage = "<providee>"
##########################################################################
# Common helper functions
##########################################################################
def completeFilePath( bbfile ):
"""Get the complete bbfile path"""
if not cooker.status: return bbfile
if not cooker.status.pkg_fn: return bbfile
for key in cooker.status.pkg_fn:
if key.endswith( bbfile ):
return key
return bbfile
def sendToPastebin( desc, content ):
"""Send content to http://oe.pastebin.com"""
mydata = {}
mydata["lang"] = "Plain Text"
mydata["desc"] = desc
mydata["cvt_tabs"] = "No"
mydata["nick"] = "%s@%s" % ( os.environ.get( "USER", "unknown" ), socket.gethostname() or "unknown" )
mydata["text"] = content
params = urllib.urlencode( mydata )
headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
host = "rafb.net"
conn = httplib.HTTPConnection( "%s:80" % host )
conn.request("POST", "/paste/paste.php", params, headers )
response = conn.getresponse()
conn.close()
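# a 302 redirect is treated as success; its Location header is the URL of the new paste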
if response.status == 302:
location = response.getheader( "location" ) or "unknown"
print("SHELL: Pasted to http://%s%s" % ( host, location ))
else:
print("ERROR: %s %s" % ( response.status, response.reason ))
def completer( text, state ):
"""Return a possible readline completion"""
debugOut( "completer called with text='%s', state='%d'" % ( text, state ) )
if state == 0:
line = readline.get_line_buffer()
if " " in line:
line = line.split()
# we are in second (or more) argument
if line[0] in cmds and hasattr( cmds[line[0]][0], "usage" ): # known command and usage
u = getattr( cmds[line[0]][0], "usage" ).split()[0]
if u == "<variable>":
allmatches = cooker.configuration.data.keys()
elif u == "<bbfile>":
if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn ]
elif u == "<providee>":
if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
else: allmatches = cooker.status.providers.iterkeys()
else: allmatches = [ "(No tab completion available for this command)" ]
else: allmatches = [ "(No tab completion available for this command)" ]
else:
# we are in first argument
allmatches = cmds.iterkeys()
completer.matches = [ x for x in allmatches if x[:len(text)] == text ]
#print "completer.matches = '%s'" % completer.matches
if len( completer.matches ) > state:
return completer.matches[state]
else:
return None
def debugOut( text ):
if debug:
sys.stderr.write( "( %s )\n" % text )
def columnize( alist, width = 80 ):
"""
A word-wrap function that preserves existing line breaks
and most spaces in the text. Expects that existing line
breaks are posix newlines (\n).
"""
return reduce(lambda line, word, width=width: '%s%s%s' %
(line,
' \n'[(len(line[line.rfind('\n')+1:])
+ len(word.split('\n', 1)[0]
) >= width)],
word),
alist
)
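# illustrative usage (assumed, not from the original file):
#   columnize("the quick brown fox jumps over the lazy dog".split(), width=20)
# joins the words with spaces, switching the separator to "\n" whenever the
# current line would reach the given width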
def globfilter( names, pattern ):
return fnmatch.filter( names, pattern )
##########################################################################
# Class MemoryOutput
##########################################################################
class MemoryOutput:
"""File-like output class buffering the output of the last 10 commands"""
def __init__( self, delegate ):
self.delegate = delegate
self._buffer = []
self.text = []
self._command = None
def startCommand( self, command ):
self._command = command
self.text = []
def endCommand( self ):
if self._command is not None:
if len( self._buffer ) == 10: del self._buffer[0]
self._buffer.append( ( self._command, self.text ) )
def removeLast( self ):
if self._buffer:
del self._buffer[ len( self._buffer ) - 1 ]
self.text = []
self._command = None
def lastBuffer( self ):
if self._buffer:
return self._buffer[ len( self._buffer ) -1 ][1]
def bufferedCommands( self ):
return [ cmd for cmd, output in self._buffer ]
def buffer( self, i ):
if i < len( self._buffer ):
return "BB>> %s\n%s" % ( self._buffer[i][0], "".join( self._buffer[i][1] ) )
else: return "ERROR: Invalid buffer number. Buffer needs to be in (0, %d)" % ( len( self._buffer ) - 1 )
def write( self, text ):
if self._command is not None and text != "BB>> ": self.text.append( text )
if self.delegate is not None: self.delegate.write( text )
def flush( self ):
return self.delegate.flush()
def fileno( self ):
return self.delegate.fileno()
def isatty( self ):
return self.delegate.isatty()
##########################################################################
# Class BitBakeShell
##########################################################################
class BitBakeShell:
def __init__( self ):
"""Register commands and set up readline"""
self.commandQ = Queue.Queue()
self.commands = BitBakeShellCommands( self )
self.myout = MemoryOutput( sys.stdout )
self.historyfilename = os.path.expanduser( "~/.bbsh_history" )
self.startupfilename = os.path.expanduser( "~/.bbsh_startup" )
readline.set_completer( completer )
readline.set_completer_delims( " " )
readline.parse_and_bind("tab: complete")
try:
readline.read_history_file( self.historyfilename )
except IOError:
pass # It doesn't exist yet.
print(__credits__)
def cleanup( self ):
"""Write readline history and clean up resources"""
debugOut( "writing command history" )
try:
readline.write_history_file( self.historyfilename )
except IOError:
print("SHELL: Unable to save command history")
def registerCommand( self, command, function, numparams = 0, usage = "", helptext = "" ):
"""Register a command"""
if usage == "": usage = command
if helptext == "": helptext = function.__doc__ or "<not yet documented>"
cmds[command] = ( function, numparams, usage, helptext )
def processCommand( self, command, params ):
"""Process a command. Check number of params and print a usage string, if appropriate"""
debugOut( "processing command '%s'..." % command )
try:
function, numparams, usage, helptext = cmds[command]
except KeyError:
print("SHELL: ERROR: '%s' command is not a valid command." % command)
self.myout.removeLast()
else:
if (numparams != -1) and (not len( params ) == numparams):
print("Usage: '%s'" % usage)
return
result = function( self.commands, params )
debugOut( "result was '%s'" % result )
def processStartupFile( self ):
"""Read and execute all commands found in $HOME/.bbsh_startup"""
if os.path.exists( self.startupfilename ):
startupfile = open( self.startupfilename, "r" )
for cmdline in startupfile:
debugOut( "processing startup line '%s'" % cmdline )
if not cmdline:
continue
if "|" in cmdline:
print("ERROR: '|' in startup file is not allowed. Ignoring line")
continue
self.commandQ.put( cmdline.strip() )
def main( self ):
"""The main command loop"""
while not leave_mainloop:
try:
if self.commandQ.empty():
sys.stdout = self.myout.delegate
cmdline = raw_input( "BB>> " )
sys.stdout = self.myout
else:
cmdline = self.commandQ.get()
if cmdline:
allCommands = cmdline.split( ';' )
for command in allCommands:
pipecmd = None
#
# special case for expert mode
if command == 'python':
sys.stdout = self.myout.delegate
self.processCommand( command, "" )
sys.stdout = self.myout
else:
self.myout.startCommand( command )
if '|' in command: # disable output
command, pipecmd = command.split( '|' )
delegate = self.myout.delegate
self.myout.delegate = None
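# with the delegate disabled, output accumulates only in the memory
# buffer; it is replayed into the pipe command below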
tokens = shlex.split( command, True )
self.processCommand( tokens[0], tokens[1:] or "" )
self.myout.endCommand()
if pipecmd is not None: # restore output
self.myout.delegate = delegate
pipe = popen2.Popen4( pipecmd )
pipe.tochild.write( "\n".join( self.myout.lastBuffer() ) )
pipe.tochild.close()
sys.stdout.write( pipe.fromchild.read() )
#
except EOFError:
print()
return
except KeyboardInterrupt:
print()
##########################################################################
# Start function - called from the BitBake command line utility
##########################################################################
def start( aCooker ):
global cooker
cooker = aCooker
bbshell = BitBakeShell()
bbshell.processStartupFile()
bbshell.main()
bbshell.cleanup()
if __name__ == "__main__":
print("SHELL: Sorry, this program should only be called by BitBake.")

bitbake/lib/bb/siggen.py

@@ -3,19 +3,22 @@ import logging
import os
import re
import tempfile
import pickle
import bb.data
import difflib
import simplediff
from bb.checksum import FileChecksumCache
logger = logging.getLogger('BitBake.SigGen')
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def init(d):
siggens = [obj for obj in globals().values()
siggens = [obj for obj in globals().itervalues()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
desired = d.getVar("BB_SIGNATURE_HANDLER") or "noop"
desired = d.getVar("BB_SIGNATURE_HANDLER", True) or "noop"
for sg in siggens:
if desired == sg.name:
return sg(d)
@@ -69,10 +72,6 @@ class SignatureGenerator(object):
def set_taskdata(self, data):
self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash = data
def reset(self, data):
self.__init__(data)
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
@@ -88,10 +87,10 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE", True)
if checksum_cache_file:
self.checksum_cache = FileChecksumCache()
self.checksum_cache.init_cache(data, checksum_cache_file)
@@ -99,7 +98,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.checksum_cache = None
def init_rundepcheck(self, data):
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
if self.taskwhitelist:
self.twl = re.compile(self.taskwhitelist)
else:
@@ -107,7 +106,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
def _build_data(self, fn, d):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
taskdeps = {}
@@ -140,9 +138,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
var = lookupcache[dep]
if var is not None:
data = data + str(var)
datahash = hashlib.md5(data.encode("utf-8")).hexdigest()
datahash = hashlib.md5(data).hexdigest()
k = fn + "." + task
if not ignore_mismatch and k in self.basehash and self.basehash[k] != datahash:
if k in self.basehash and self.basehash[k] != datahash:
bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], datahash))
self.basehash[k] = datahash
taskdeps[task] = alldeps
@@ -155,21 +153,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
def finalise(self, fn, d, variant):
mc = d.getVar("__BBMULTICONFIG", False) or ""
if variant or mc:
fn = bb.cache.realfn2virtual(fn, variant, mc)
if variant:
fn = "virtual:" + variant + ":" + fn
try:
taskdeps = self._build_data(fn, d)
except bb.parse.SkipRecipe:
raise
except:
bb.warn("Error during finalise of %s" % fn)
raise
#Slow but can be useful for debugging mismatched basehashes
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
# self.dump_sigtask(fn, task, d.getVar("STAMP", True), False)
for task in taskdeps:
d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])
@@ -193,24 +188,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
return taint
def get_taskhash(self, fn, task, deps, dataCache):
mc = ''
if fn.startswith('multiconfig:'):
mc = fn.split(':')[1]
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.basehash[k] = data
self.runtaskdeps[k] = []
self.file_checksum_values[k] = []
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
pkgname = self.pkgnameextract.search(dep).group('fn')
if mc:
depmc = pkgname.split(':')[1]
if mc != depmc:
continue
depname = dataCache.pkg_fn[pkgname]
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
continue
if dep not in self.taskhash:
@@ -240,9 +226,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
if taint:
data = data + taint
self.taints[k] = taint
logger.warning("%s is tainted from a forced run" % k)
logger.warn("%s is tainted from a forced run" % k)
h = hashlib.md5(data.encode("utf-8")).hexdigest()
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
@@ -315,7 +301,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
with os.fdopen(fd, "wb") as stream:
p = pickle.dump(data, stream, -1)
stream.flush()
os.chmod(tmpfile, 0o664)
os.chmod(tmpfile, 0664)
os.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
try:
@@ -324,18 +310,16 @@ class SignatureGeneratorBasic(SignatureGenerator):
pass
raise err
def dump_sigfn(self, fn, dataCaches, options):
if fn in self.taskdeps:
def dump_sigs(self, dataCache, options):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
(mc, _, _) = bb.runqueue.split_tid(tid)
k = fn + "." + task
if k not in self.taskhash:
continue
if dataCaches[mc].basetaskhash[k] != self.basehash[k]:
if dataCache.basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
@@ -356,78 +340,22 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME")
task = "do_" + d.getVar("BB_CURRENTTASK")
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
referencestamp = bb.build.stamp_internal(task, d, None, True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile:" + referencestamp)
def init_colors(enable_color):
"""Initialise colour dict for passing to compare_sigfiles()"""
# First set up the colours
colors = {'color_title': '\033[1;37;40m',
'color_default': '\033[0;37;40m',
'color_add': '\033[1;32;40m',
'color_remove': '\033[1;31;40m',
}
# Leave all keys present but clear the values
if not enable_color:
for k in colors.keys():
colors[k] = ''
return colors
def worddiff_str(oldstr, newstr, colors=None):
if not colors:
colors = init_colors(False)
diff = simplediff.diff(oldstr.split(' '), newstr.split(' '))
ret = []
for change, value in diff:
value = ' '.join(value)
if change == '=':
ret.append(value)
elif change == '+':
item = '{color_add}{{+{value}+}}{color_default}'.format(value=value, **colors)
ret.append(item)
elif change == '-':
item = '{color_remove}[-{value}-]{color_default}'.format(value=value, **colors)
ret.append(item)
whitespace_note = ''
if oldstr != newstr and ' '.join(oldstr.split()) == ' '.join(newstr.split()):
whitespace_note = ' (whitespace changed)'
return '"%s"%s' % (' '.join(ret), whitespace_note)
def list_inline_diff(oldlist, newlist, colors=None):
if not colors:
colors = init_colors(False)
diff = simplediff.diff(oldlist, newlist)
ret = []
for change, value in diff:
value = ' '.join(value)
if change == '=':
ret.append("'%s'" % value)
elif change == '+':
item = '{color_add}+{value}{color_default}'.format(value=value, **colors)
ret.append(item)
elif change == '-':
item = '{color_remove}-{value}{color_default}'.format(value=value, **colors)
ret.append(item)
return '[%s]' % (', '.join(ret))
def clean_basepath(a):
mc = None
if a.startswith("multiconfig:"):
_, mc, a = a.split(":", 2)
b = a.rsplit("/", 2)[1] + '/' + a.rsplit("/", 2)[2]
b = a.rsplit("/", 2)[1] + a.rsplit("/", 2)[2]
if a.startswith("virtual:"):
b = b + ":" + a.rsplit(":", 1)[0]
if mc:
b = b + ":multiconfig:" + mc
return b
def clean_basepaths(a):
@@ -442,32 +370,13 @@ def clean_basepaths_list(a):
b.append(clean_basepath(x))
return b
def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
def compare_sigfiles(a, b, recursecb = None):
output = []
colors = init_colors(color)
def color_format(formatstr, **values):
"""
Return colour formatted string.
NOTE: call with the format string, not an already formatted string
containing values (otherwise you could have trouble with { and }
characters)
"""
if not formatstr.endswith('{color_default}'):
formatstr += '{color_default}'
# In newer python 3 versions you can pass both of these directly,
# but we only require 3.4 at the moment
formatparams = {}
formatparams.update(colors)
formatparams.update(values)
return formatstr.format(**formatparams)
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
with open(b, 'rb') as f:
p2 = pickle.Unpickler(f)
b_data = p2.load()
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(open(b, "rb"))
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
sa = set(a.keys())
@@ -515,100 +424,65 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
return changed, added, removed
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
output.append("basewhitelist changed from '%s' to '%s'" % (a_data['basewhitelist'], b_data['basewhitelist']))
if a_data['basewhitelist'] and b_data['basewhitelist']:
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
output.append("taskwhitelist changed from '%s' to '%s'" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
if a_data['taskdeps'] != b_data['taskdeps']:
output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
output.append("Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
if a_data['basehash'] != b_data['basehash'] and not collapsed:
output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))
if a_data['basehash'] != b_data['basehash']:
output.append("basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash']))
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in changed:
output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
output.append("List of dependencies for variable %s changed from '%s' to '%s'" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
if added:
for dep in added:
output.append(color_format("{color_title}Dependency on variable %s was added") % (dep))
output.append("Dependency on variable %s was added" % (dep))
if removed:
for dep in removed:
output.append(color_format("{color_title}Dependency on Variable %s was removed") % (dep))
output.append("Dependency on Variable %s was removed" % (dep))
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
oldval = a_data['varvals'][dep]
newval = b_data['varvals'][dep]
if newval and oldval and ('\n' in oldval or '\n' in newval):
diff = difflib.unified_diff(oldval.splitlines(), newval.splitlines(), lineterm='')
# Cut off the first two lines, since we aren't interested in
# the old/new filename (they are blank anyway in this case)
difflines = list(diff)[2:]
if color:
# Add colour to diff output
for i, line in enumerate(difflines):
if line.startswith('+'):
line = color_format('{color_add}{line}', line=line)
difflines[i] = line
elif line.startswith('-'):
line = color_format('{color_remove}{line}', line=line)
difflines[i] = line
output.append(color_format("{color_title}Variable {var} value changed:{color_default}\n{diff}", var=dep, diff='\n'.join(difflines)))
elif newval and oldval and (' ' in oldval or ' ' in newval):
output.append(color_format("{color_title}Variable {var} value changed:{color_default}\n{diff}", var=dep, diff=worddiff_str(oldval, newval, colors)))
else:
output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))
if not 'file_checksum_values' in a_data:
a_data['file_checksum_values'] = {}
if not 'file_checksum_values' in b_data:
b_data['file_checksum_values'] = {}
output.append("Variable %s value changed from '%s' to '%s'" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
for f, old, new in changed:
output.append(color_format("{color_title}Checksum for file %s changed{color_default} from %s to %s") % (f, old, new))
output.append("Checksum for file %s changed from %s to %s" % (f, old, new))
if added:
for f in added:
output.append(color_format("{color_title}Dependency on checksum of file %s was added") % (f))
output.append("Dependency on checksum of file %s was added" % (f))
if removed:
for f in removed:
output.append(color_format("{color_title}Dependency on checksum of file %s was removed") % (f))
output.append("Dependency on checksum of file %s was removed" % (f))
if not 'runtaskdeps' in a_data:
a_data['runtaskdeps'] = {}
if not 'runtaskdeps' in b_data:
b_data['runtaskdeps'] = {}
if not collapsed:
if len(a_data['runtaskdeps']) != len(b_data['runtaskdeps']):
changed = ["Number of task dependencies changed"]
else:
changed = []
for idx, task in enumerate(a_data['runtaskdeps']):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b] and not collapsed:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (clean_basepath(a), a_data['runtaskhashes'][a], clean_basepath(b), b_data['runtaskhashes'][b]))
if len(a_data['runtaskdeps']) != len(b_data['runtaskdeps']):
changed = ["Number of task dependencies changed"]
else:
changed = []
for idx, task in enumerate(a_data['runtaskdeps']):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b]:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (a, a_data['runtaskhashes'][a], b, b_data['runtaskhashes'][b]))
if changed:
clean_a = clean_basepaths_list(a_data['runtaskdeps'])
clean_b = clean_basepaths_list(b_data['runtaskdeps'])
if clean_a != clean_b:
output.append(color_format("{color_title}runtaskdeps changed:{color_default}\n%s") % list_inline_diff(clean_a, clean_b, colors))
else:
output.append(color_format("{color_title}runtaskdeps changed:"))
output.append("\n".join(changed))
if changed:
output.append("runtaskdeps changed from %s to %s" % (clean_basepaths_list(a_data['runtaskdeps']), clean_basepaths_list(b_data['runtaskdeps'])))
output.append("\n".join(changed))
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
@@ -624,7 +498,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
output.append("Dependency on task %s was added with hash %s" % (clean_basepath(dep), b[dep]))
if removed:
for dep in removed:
adep_found = False
@@ -634,25 +508,21 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
output.append("Dependency on task %s was removed with hash %s" % (clean_basepath(dep), a[dep]))
if changed:
for dep in changed:
if not collapsed:
output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
output.append("Hash for dependent task %s changed from %s to %s" % (clean_basepath(dep), a[dep], b[dep]))
if callable(recursecb):
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
recout = recursecb(dep, a[dep], b[dep])
if recout:
if collapsed:
output.extend(recout)
else:
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
output = [output[-1]] + recout
output = [output[-1]] + recout
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
if a_taint != b_taint:
output.append(color_format("{color_title}Taint (by forced/invalidated task) changed{color_default} from %s to %s") % (a_taint, b_taint))
output.append("Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint))
return output
@@ -671,7 +541,7 @@ def calc_basehash(sigdata):
if val is not None:
basedata = basedata + str(val)
return hashlib.md5(basedata.encode("utf-8")).hexdigest()
return hashlib.md5(basedata).hexdigest()
def calc_taskhash(sigdata):
data = sigdata['basehash']
@@ -689,15 +559,14 @@ def calc_taskhash(sigdata):
else:
data = data + sigdata['taint']
return hashlib.md5(data.encode("utf-8")).hexdigest()
return hashlib.md5(data).hexdigest()
def dump_sigfile(a):
output = []
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
output.append("basewhitelist: %s" % (a_data['basewhitelist']))

bitbake/lib/bb/taskdata.py

@@ -37,24 +37,27 @@ def re_match_strings(target, strings):
return any(name == target or re.match(name, target)
for name in strings)
class TaskEntry:
def __init__(self):
self.tdepends = []
self.idepends = []
self.irdepends = []
class TaskData:
"""
BitBake Task Data implementation
"""
def __init__(self, abort = True, skiplist = None, allowincomplete = False):
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None, allowincomplete = False):
self.build_names_index = []
self.run_names_index = []
self.fn_index = []
self.build_targets = {}
self.run_targets = {}
self.external_targets = []
self.seenfns = []
self.taskentries = {}
self.tasks_fnid = []
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
self.tasks_irdepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
self.depids = {}
self.rdepids = {}
@@ -63,14 +66,95 @@ class TaskData:
self.failed_deps = []
self.failed_rdeps = []
self.failed_fns = []
self.failed_fnids = []
self.abort = abort
self.tryaltconfigs = tryaltconfigs
self.allowincomplete = allowincomplete
self.skiplist = skiplist
self.mcdepends = []
def getbuild_id(self, name):
"""
Return an ID number for the build target name.
If it doesn't exist, create one.
"""
if not name in self.build_names_index:
self.build_names_index.append(name)
return len(self.build_names_index) - 1
return self.build_names_index.index(name)
def getrun_id(self, name):
"""
Return an ID number for the run target name.
If it doesn't exist, create one.
"""
if not name in self.run_names_index:
self.run_names_index.append(name)
return len(self.run_names_index) - 1
return self.run_names_index.index(name)
def getfn_id(self, name):
"""
Return an ID number for the filename.
If it doesn't exist, create one.
"""
if not name in self.fn_index:
self.fn_index.append(name)
return len(self.fn_index) - 1
return self.fn_index.index(name)
def gettask_ids(self, fnid):
"""
Return an array of the ID numbers matching a given fnid.
"""
ids = []
if fnid in self.tasks_lookup:
for task in self.tasks_lookup[fnid]:
ids.append(self.tasks_lookup[fnid][task])
return ids
def gettask_id_fromfnid(self, fnid, task):
"""
Return an ID number for the task matching fnid and task.
"""
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
return None
def gettask_id(self, fn, task, create = True):
"""
Return an ID number for the task matching fn and task.
If it doesn't exist, create one by default.
Optionally return None instead.
"""
fnid = self.getfn_id(fn)
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
if not create:
return None
self.tasks_name.append(task)
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
self.tasks_irdepends.append([])
listid = len(self.tasks_name) - 1
if fnid not in self.tasks_lookup:
self.tasks_lookup[fnid] = {}
self.tasks_lookup[fnid][task] = listid
return listid
def add_tasks(self, fn, dataCache):
"""
@@ -79,71 +163,60 @@ class TaskData:
task_deps = dataCache.task_deps[fn]
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
bb.msg.fatal("TaskData", "Trying to re-add a failed file? Something is broken...")
# Check if we've already seen this fn
if fn in self.seenfns:
if fnid in self.tasks_fnid:
return
self.seenfns.append(fn)
self.add_extra_deps(fn, dataCache)
def add_mcdepends(task):
for dep in task_deps['mcdepends'][task].split():
if len(dep.split(':')) != 5:
bb.msg.fatal("TaskData", "Error for %s:%s[%s], multiconfig dependency %s does not contain exactly four ':' characters.\n Task '%s' should be specified in the form 'multiconfig:fromMC:toMC:packagename:task'" % (fn, task, 'mcdepends', dep, 'mcdepends'))
if dep not in self.mcdepends:
self.mcdepends.append(dep)
# Common code for dep_name/depends = 'depends'/idepends and 'rdepends'/irdepends
def handle_deps(task, dep_name, depends, seen):
if dep_name in task_deps and task in task_deps[dep_name]:
ids = []
for dep in task_deps[dep_name][task].split():
if dep:
parts = dep.split(":")
if len(parts) != 2:
bb.msg.fatal("TaskData", "Error for %s:%s[%s], dependency %s in '%s' does not contain exactly one ':' character.\n Task '%s' should be specified in the form 'packagename:task'" % (fn, task, dep_name, dep, task_deps[dep_name][task], dep_name))
ids.append((parts[0], parts[1]))
seen(parts[0])
depends.extend(ids)
for task in task_deps['tasks']:
tid = "%s:%s" % (fn, task)
self.taskentries[tid] = TaskEntry()
# Work out task dependencies
parentids = []
for dep in task_deps['parents'][task]:
if dep not in task_deps['tasks']:
bb.debug(2, "Not adding dependency of %s on %s since %s does not exist" % (task, dep, dep))
bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep))
continue
parentid = "%s:%s" % (fn, dep)
parentid = self.gettask_id(fn, dep)
parentids.append(parentid)
self.taskentries[tid].tdepends.extend(parentids)
taskid = self.gettask_id(fn, task)
self.tasks_tdepends[taskid].extend(parentids)
# Touch all intertask dependencies
handle_deps(task, 'depends', self.taskentries[tid].idepends, self.seen_build_target)
handle_deps(task, 'rdepends', self.taskentries[tid].irdepends, self.seen_run_target)
if 'depends' in task_deps and task in task_deps['depends']:
ids = []
for dep in task_deps['depends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
if 'rdepends' in task_deps and task in task_deps['rdepends']:
ids = []
for dep in task_deps['rdepends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'rdepends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getrun_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_irdepends[taskid].extend(ids)
if 'mcdepends' in task_deps and task in task_deps['mcdepends']:
add_mcdepends(task)
# Work out build dependencies
if not fn in self.depids:
dependids = set()
if not fnid in self.depids:
dependids = {}
for depend in dataCache.deps[fn]:
dependids.add(depend)
self.depids[fn] = list(dependids)
dependids[self.getbuild_id(depend)] = None
self.depids[fnid] = dependids.keys()
logger.debug(2, "Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
# Work out runtime dependencies
if not fn in self.rdepids:
rdependids = set()
if not fnid in self.rdepids:
rdependids = {}
rdepends = dataCache.rundeps[fn]
rrecs = dataCache.runrecs[fn]
rdependlist = []
@@ -151,26 +224,24 @@ class TaskData:
for package in rdepends:
for rdepend in rdepends[package]:
rdependlist.append(rdepend)
rdependids.add(rdepend)
rdependids[self.getrun_id(rdepend)] = None
for package in rrecs:
for rdepend in rrecs[package]:
rreclist.append(rdepend)
rdependids.add(rdepend)
rdependids[self.getrun_id(rdepend)] = None
if rdependlist:
logger.debug(2, "Added runtime dependencies %s for %s", str(rdependlist), fn)
if rreclist:
logger.debug(2, "Added runtime recommendations %s for %s", str(rreclist), fn)
self.rdepids[fn] = list(rdependids)
self.rdepids[fnid] = rdependids.keys()
for dep in self.depids[fn]:
self.seen_build_target(dep)
for dep in self.depids[fnid]:
if dep in self.failed_deps:
self.fail_fn(fn)
self.fail_fnid(fnid)
return
for dep in self.rdepids[fn]:
self.seen_run_target(dep)
for dep in self.rdepids[fnid]:
if dep in self.failed_rdeps:
self.fail_fn(fn)
self.fail_fnid(fnid)
return
def add_extra_deps(self, fn, dataCache):
@@ -192,7 +263,9 @@ class TaskData:
"""
Have we a build target matching this name?
"""
if target in self.build_targets and self.build_targets[target]:
targetid = self.getbuild_id(target)
if targetid in self.build_targets:
return True
return False
@@ -200,54 +273,50 @@ class TaskData:
"""
Have we a runtime target matching this name?
"""
if target in self.run_targets and self.run_targets[target]:
targetid = self.getrun_id(target)
if targetid in self.run_targets:
return True
return False
def seen_build_target(self, name):
"""
Maintain a list of build targets
"""
if name not in self.build_targets:
self.build_targets[name] = []
def add_build_target(self, fn, item):
"""
Add a build target.
If already present, append the provider fn to the list
"""
if item in self.build_targets:
if fn in self.build_targets[item]:
return
self.build_targets[item].append(fn)
return
self.build_targets[item] = [fn]
targetid = self.getbuild_id(item)
fnid = self.getfn_id(fn)
def seen_run_target(self, name):
"""
Maintain a list of runtime build targets
"""
if name not in self.run_targets:
self.run_targets[name] = []
if targetid in self.build_targets:
if fnid in self.build_targets[targetid]:
return
self.build_targets[targetid].append(fnid)
return
self.build_targets[targetid] = [fnid]
def add_runtime_target(self, fn, item):
"""
Add a runtime target.
If already present, append the provider fn to the list
"""
if item in self.run_targets:
if fn in self.run_targets[item]:
return
self.run_targets[item].append(fn)
return
self.run_targets[item] = [fn]
targetid = self.getrun_id(item)
fnid = self.getfn_id(fn)
def mark_external_target(self, target):
if targetid in self.run_targets:
if fnid in self.run_targets[targetid]:
return
self.run_targets[targetid].append(fnid)
return
self.run_targets[targetid] = [fnid]
def mark_external_target(self, item):
"""
Mark a build target as being externally requested
"""
if target not in self.external_targets:
self.external_targets.append(target)
targetid = self.getbuild_id(item)
if targetid not in self.external_targets:
self.external_targets.append(targetid)
def get_unresolved_build_targets(self, dataCache):
"""
@@ -255,12 +324,12 @@ class TaskData:
are unknown.
"""
unresolved = []
for target in self.build_targets:
for target in self.build_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
continue
if target in self.failed_deps:
if self.build_names_index.index(target) in self.failed_deps:
continue
if not self.build_targets[target]:
if not self.have_build_target(target):
unresolved.append(target)
return unresolved
@@ -270,12 +339,12 @@ class TaskData:
are unknown.
"""
unresolved = []
for target in self.run_targets:
for target in self.run_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
continue
if target in self.failed_rdeps:
if self.run_names_index.index(target) in self.failed_rdeps:
continue
if not self.run_targets[target]:
if not self.have_runtime_target(target):
unresolved.append(target)
return unresolved
@@ -283,26 +352,50 @@ class TaskData:
"""
Return a list of providers of item
"""
return self.build_targets[item]
targetid = self.getbuild_id(item)
def get_dependees(self, item):
return self.build_targets[targetid]
def get_dependees(self, itemid):
"""
Return a list of targets which depend on item
"""
dependees = []
for fn in self.depids:
if item in self.depids[fn]:
dependees.append(fn)
for fnid in self.depids:
if itemid in self.depids[fnid]:
dependees.append(fnid)
return dependees
def get_rdependees(self, item):
def get_dependees_str(self, item):
"""
Return a list of targets which depend on item as a user readable string
"""
itemid = self.getbuild_id(item)
dependees = []
for fnid in self.depids:
if itemid in self.depids[fnid]:
dependees.append(self.fn_index[fnid])
return dependees
def get_rdependees(self, itemid):
"""
Return a list of targets which depend on runtime item
"""
dependees = []
for fn in self.rdepids:
if item in self.rdepids[fn]:
dependees.append(fn)
for fnid in self.rdepids:
if itemid in self.rdepids[fnid]:
dependees.append(fnid)
return dependees
def get_rdependees_str(self, item):
"""
Return a list of targets which depend on runtime item as a user readable string
"""
itemid = self.getrun_id(item)
dependees = []
for fnid in self.rdepids:
if itemid in self.rdepids[fnid]:
dependees.append(self.fn_index[fnid])
return dependees
def get_reasons(self, item, runtime=False):
@@ -338,7 +431,7 @@ class TaskData:
except bb.providers.NoProvider:
if self.abort:
raise
self.remove_buildtarget(item)
self.remove_buildtarget(self.getbuild_id(item))
self.mark_external_target(item)
@@ -353,14 +446,14 @@ class TaskData:
return
if not item in dataCache.providers:
close_matches = self.get_close_matches(item, list(dataCache.providers.keys()))
close_matches = self.get_close_matches(item, dataCache.providers.keys())
# Is it in RuntimeProviders ?
all_p = bb.providers.getRuntimeProviders(dataCache, item)
for fn in all_p:
new = dataCache.pkg_fn[fn] + " RPROVIDES " + item
if new not in close_matches:
close_matches.append(new)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees(item), reasons=self.get_reasons(item), close_matches=close_matches), cfgData)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item), close_matches=close_matches), cfgData)
raise bb.providers.NoProvider(item)
if self.have_build_target(item):
@@ -369,10 +462,10 @@ class TaskData:
all_p = dataCache.providers[item]
eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not p in self.failed_fns]
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
raise bb.providers.NoProvider(item)
if len(eligible) > 1 and foundUnique == False:
@@ -384,7 +477,8 @@ class TaskData:
self.consider_msgs_cache.append(item)
for fn in eligible:
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding %s to satisfy %s", fn, item)
self.add_build_target(fn, item)
@@ -408,14 +502,14 @@ class TaskData:
all_p = bb.providers.getRuntimeProviders(dataCache, item)
if not all_p:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees(item), reasons=self.get_reasons(item, True)), cfgData)
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=self.get_reasons(item, True)), cfgData)
raise bb.providers.NoRProvider(item)
eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not p in self.failed_fns]
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
raise bb.providers.NoRProvider(item)
if len(eligible) > 1 and numberPreferred == 0:
@@ -437,80 +531,82 @@ class TaskData:
# run through the list until we find one that we can build
for fn in eligible:
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
def fail_fn(self, fn, missing_list=None):
def fail_fnid(self, fnid, missing_list=None):
"""
Mark a file as failed (unbuildable)
Remove any references from build and runtime provider lists
missing_list, A list of missing requirements for this target
"""
if fn in self.failed_fns:
if fnid in self.failed_fnids:
return
if not missing_list:
missing_list = []
logger.debug(1, "File '%s' is unbuildable, removing...", fn)
self.failed_fns.append(fn)
logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
if fn in self.build_targets[target]:
self.build_targets[target].remove(fn)
if fnid in self.build_targets[target]:
self.build_targets[target].remove(fnid)
if len(self.build_targets[target]) == 0:
self.remove_buildtarget(target, missing_list)
for target in self.run_targets:
if fn in self.run_targets[target]:
self.run_targets[target].remove(fn)
if fnid in self.run_targets[target]:
self.run_targets[target].remove(fnid)
if len(self.run_targets[target]) == 0:
self.remove_runtarget(target, missing_list)
def remove_buildtarget(self, target, missing_list=None):
def remove_buildtarget(self, targetid, missing_list=None):
"""
Mark a build target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [target]
missing_list = [self.build_names_index[targetid]]
else:
missing_list = [target] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", target, missing_list)
self.failed_deps.append(target)
dependees = self.get_dependees(target)
for fn in dependees:
self.fail_fn(fn, missing_list)
for tid in self.taskentries:
for (idepend, idependtask) in self.taskentries[tid].idepends:
if idepend == target:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
missing_list = [self.build_names_index[targetid]] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.build_names_index[targetid], missing_list)
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_idepends)):
idepends = self.tasks_idepends[taskid]
for (idependid, idependtask) in idepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
if self.abort and target in self.external_targets:
if self.abort and targetid in self.external_targets:
target = self.build_names_index[targetid]
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)
def remove_runtarget(self, target, missing_list=None):
def remove_runtarget(self, targetid, missing_list=None):
"""
Mark a run target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [target]
missing_list = [self.run_names_index[targetid]]
else:
missing_list = [target] + missing_list
missing_list = [self.run_names_index[targetid]] + missing_list
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", target, missing_list)
self.failed_rdeps.append(target)
dependees = self.get_rdependees(target)
for fn in dependees:
self.fail_fn(fn, missing_list)
for tid in self.taskentries:
for (idepend, idependtask) in self.taskentries[tid].irdepends:
if idepend == target:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.run_names_index[targetid], missing_list)
self.failed_rdeps.append(targetid)
dependees = self.get_rdependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_irdepends)):
irdepends = self.tasks_irdepends[taskid]
for (idependid, idependtask) in irdepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
def add_unresolved(self, cfgData, dataCache):
"""
@@ -524,16 +620,17 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, target)
added = added + 1
except bb.providers.NoProvider:
if self.abort and target in self.external_targets and not self.allowincomplete:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets and not self.allowincomplete:
raise
if not self.allowincomplete:
self.remove_buildtarget(target)
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
try:
self.add_rprovider(cfgData, dataCache, target)
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
self.remove_runtarget(target)
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
if added == 0:
break
@@ -541,54 +638,53 @@ class TaskData:
def get_providermap(self, prefix=None):
provmap = {}
for name in self.build_targets:
for name in self.build_names_index:
if prefix and not name.startswith(prefix):
continue
if self.have_build_target(name):
provider = self.get_provider(name)
if provider:
provmap[name] = provider[0]
provmap[name] = self.fn_index[provider[0]]
return provmap
def get_mcdepends(self):
return self.mcdepends
def dump_data(self):
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_targets))
logger.debug(3, ", ".join(self.build_names_index))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_targets))
logger.debug(3, ", ".join(self.run_names_index))
logger.debug(3, "build_targets:")
for target in self.build_targets:
for buildid in xrange(len(self.build_names_index)):
target = self.build_names_index[buildid]
targets = "None"
if target in self.build_targets:
targets = self.build_targets[target]
logger.debug(3, " %s: %s", target, targets)
if buildid in self.build_targets:
targets = self.build_targets[buildid]
logger.debug(3, " (%s)%s: %s", buildid, target, targets)
logger.debug(3, "run_targets:")
for target in self.run_targets:
for runid in xrange(len(self.run_names_index)):
target = self.run_names_index[runid]
targets = "None"
if target in self.run_targets:
targets = self.run_targets[target]
logger.debug(3, " %s: %s", target, targets)
if runid in self.run_targets:
targets = self.run_targets[runid]
logger.debug(3, " (%s)%s: %s", runid, target, targets)
logger.debug(3, "tasks:")
for tid in self.taskentries:
logger.debug(3, " %s: %s %s %s",
tid,
self.taskentries[tid].idepends,
self.taskentries[tid].irdepends,
self.taskentries[tid].tdepends)
for task in xrange(len(self.tasks_name)):
logger.debug(3, " (%s)%s - %s: %s",
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task])
logger.debug(3, "dependency ids (per fn):")
for fn in self.depids:
logger.debug(3, " %s: %s", fn, self.depids[fn])
for fnid in self.depids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.depids[fnid])
logger.debug(3, "runtime dependency ids (per fn):")
for fn in self.rdepids:
logger.debug(3, " %s: %s", fn, self.rdepids[fn])
for fnid in self.rdepids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.rdepids[fnid])
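Note on the taskdata.py hunks above: the change swaps string-keyed dictionaries for integer ids backed by flat index lists (fn_index, build_names_index, run_names_index), with getfn_id/getbuild_id/getrun_id interning names, and reverse lookups done via list.index() as in the have-build-target hunks. A minimal stand-alone sketch of that pattern, assuming get-or-append semantics for the id getters (NameIndex and its dict cache are illustrative, not BitBake code):

# Minimal sketch of the name <-> id interning the id-based TaskData
# variant relies on; NameIndex is a hypothetical helper.
class NameIndex:
    def __init__(self):
        self.index = []   # id -> name, like fn_index / build_names_index
        self._ids = {}    # name -> id cache (the original used list.index())

    def get_id(self, name):
        # Get-or-append, as getfn_id/getbuild_id are assumed to behave.
        if name not in self._ids:
            self._ids[name] = len(self.index)
            self.index.append(name)
        return self._ids[name]

    def name(self, nid):
        return self.index[nid]

names = NameIndex()
assert names.get_id("busybox") == 0
assert names.get_id("zlib") == 1
assert names.get_id("busybox") == 0   # stable across repeated lookups
assert names.name(1) == "zlib"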


@@ -49,9 +49,6 @@ class ReferenceTest(unittest.TestCase):
def assertExecs(self, execs):
self.assertEqual(self.execs, execs)
def assertContains(self, contains):
self.assertEqual(self.contains, contains)
class VariableReferenceTest(ReferenceTest):
def parseExpression(self, exp):
@@ -71,7 +68,7 @@ class VariableReferenceTest(ReferenceTest):
def test_python_reference(self):
self.setEmptyVars(["BAR"])
self.parseExpression("${@d.getVar('BAR') + 'foo'}")
self.parseExpression("${@bb.data.getVar('BAR', d, True) + 'foo'}")
self.assertReferences(set(["BAR"]))
class ShellReferenceTest(ReferenceTest):
@@ -194,8 +191,8 @@ class PythonReferenceTest(ReferenceTest):
if hasattr(bb.utils, "_context"):
self.context = bb.utils._context
else:
import builtins
self.context = builtins.__dict__
import __builtin__
self.context = __builtin__.__dict__
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
@@ -204,7 +201,6 @@ class PythonReferenceTest(ReferenceTest):
self.references = parsedvar.references | parser.references
self.execs = parser.execs
self.contains = parser.contains
@staticmethod
def indent(value):
@@ -213,17 +209,17 @@ be. These unit tests are testing snippets."""
return " " + value
def test_getvar_reference(self):
self.parseExpression("d.getVar('foo')")
self.parseExpression("bb.data.getVar('foo', d, True)")
self.assertReferences(set(["foo"]))
self.assertExecs(set())
def test_getvar_computed_reference(self):
self.parseExpression("d.getVar('f' + 'o' + 'o')")
self.parseExpression("bb.data.getVar('f' + 'o' + 'o', d, True)")
self.assertReferences(set())
self.assertExecs(set())
def test_getvar_exec_reference(self):
self.parseExpression("eval('d.getVar(\"foo\")')")
self.parseExpression("eval('bb.data.getVar(\"foo\", d, True)')")
self.assertReferences(set())
self.assertExecs(set(["eval"]))
@@ -269,35 +265,15 @@ be. These unit tests are testing snippets."""
self.assertExecs(set(["testget"]))
del self.context["testget"]
def test_contains(self):
self.parseExpression('bb.utils.contains("TESTVAR", "one", "true", "false", d)')
self.assertContains({'TESTVAR': {'one'}})
def test_contains_multi(self):
self.parseExpression('bb.utils.contains("TESTVAR", "one two", "true", "false", d)')
self.assertContains({'TESTVAR': {'one two'}})
def test_contains_any(self):
self.parseExpression('bb.utils.contains_any("TESTVAR", "hello", "true", "false", d)')
self.assertContains({'TESTVAR': {'hello'}})
def test_contains_any_multi(self):
self.parseExpression('bb.utils.contains_any("TESTVAR", "one two three", "true", "false", d)')
self.assertContains({'TESTVAR': {'one', 'two', 'three'}})
def test_contains_filter(self):
self.parseExpression('bb.utils.filter("TESTVAR", "hello there world", d)')
self.assertContains({'TESTVAR': {'hello', 'there', 'world'}})
class DependencyReferenceTest(ReferenceTest):
pydata = """
d.getVar('somevar')
bb.data.getVar('somevar', d, True)
def test(d):
foo = 'bar %s' % 'foo'
def test2(d):
d.getVar(foo)
d.getVar(foo, True)
d.getVar('bar', False)
test2(d)
@@ -309,9 +285,9 @@ def a():
test(d)
d.expand(d.getVar("something", False))
d.expand("${inexpand} somethingelse")
d.getVar(a(), False)
bb.data.expand(bb.data.getVar("something", False, d), d)
bb.data.expand("${inexpand} somethingelse", d)
bb.data.getVar(a(), d, False)
"""
def test_python(self):
@@ -326,7 +302,7 @@ d.getVar(a(), False)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
self.assertEquals(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
shelldata = """
@@ -373,7 +349,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
self.assertEquals(deps, set(["somevar", "inverted"] + execs))
def test_vardeps(self):
@@ -383,7 +359,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
self.assertEquals(deps, set(["oe_libinstall"]))
def test_vardeps_expand(self):
self.d.setVar("oe_libinstall", "echo test")
@@ -392,31 +368,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
def test_contains_vardeps(self):
expr = '${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)} \
${@bb.utils.contains("TESTVAR", "testval testval2", "yetanothervalue", "", d)} \
${@bb.utils.contains("TESTVAR", "testval2 testval3", "blah", "", d)} \
${@bb.utils.contains_any("TESTVAR", "testval2 testval3", "lastone", "", d)}'
parsedvar = self.d.expandWithRefs(expr, None)
# Check contains
self.assertEqual(parsedvar.contains, {'TESTVAR': {'testval2 testval3', 'anothervalue', 'somevalue', 'testval testval2', 'testval2', 'testval3'}})
# Check dependencies
self.d.setVar('ANOTHERVAR', expr)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([expr,
'TESTVAR{anothervalue} = Set',
'TESTVAR{somevalue} = Unset',
'TESTVAR{testval testval2} = Set',
'TESTVAR{testval2 testval3} = Unset',
'TESTVAR{testval2} = Set',
'TESTVAR{testval3} = Unset'
]))
# Check final value
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])
self.assertEquals(deps, set(["oe_libinstall"]))
#Currently no wildcard support
#def test_vardeps_wildcards(self):
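For context on the test_contains* assertions above: bb.utils.contains returns its true-value only when every word in checkvalues appears in the variable's whitespace-split value, while contains_any needs just one match. A rough stand-alone sketch of those semantics (the value is passed directly here instead of being looked up from the datastore, as the real helpers in bb/utils.py do):

# Rough sketch of the semantics the tests exercise; not the library code.
def contains(value, checkvalues, truevalue, falsevalue):
    # All checkvalues must be present in the split value.
    if set(checkvalues.split()).issubset(value.split()):
        return truevalue
    return falsevalue

def contains_any(value, checkvalues, truevalue, falsevalue):
    # Any one checkvalue present is enough.
    if set(checkvalues.split()) & set(value.split()):
        return truevalue
    return falsevalue

assert contains("one two", "one", "t", "f") == "t"
assert contains("one two", "one three", "t", "f") == "f"
assert contains_any("one two", "one three", "t", "f") == "t"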


@@ -1,83 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for cooker.py
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import tempfile
import os
import bb, bb.cooker
import re
import logging
# Cooker tests
class CookerTest(unittest.TestCase):
def setUp(self):
# At least one variable needs to be set
self.d = bb.data.init()
topdir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata/cooker")
self.d.setVar('TOPDIR', topdir)
def test_CookerCollectFiles_sublayers(self):
'''Test that a sublayer of an existing layer does not trigger
No bb files matched ...'''
def append_collection(topdir, path, d):
collection = path.split('/')[-1]
pattern = "^" + topdir + "/" + path + "/"
regex = re.compile(pattern)
priority = 5
d.setVar('BBFILE_COLLECTIONS', (d.getVar('BBFILE_COLLECTIONS') or "") + " " + collection)
d.setVar('BBFILE_PATTERN_%s' % (collection), pattern)
d.setVar('BBFILE_PRIORITY_%s' % (collection), priority)
return (collection, pattern, regex, priority)
topdir = self.d.getVar("TOPDIR")
# Priorities: list of (collection, pattern, regex, priority)
bbfile_config_priorities = []
# Order is important for this test, shortest to longest is typical failure case
bbfile_config_priorities.append( append_collection(topdir, 'first', self.d) )
bbfile_config_priorities.append( append_collection(topdir, 'second', self.d) )
bbfile_config_priorities.append( append_collection(topdir, 'second/third', self.d) )
pkgfns = [ topdir + '/first/recipes/sample1_1.0.bb',
topdir + '/second/recipes/sample2_1.0.bb',
topdir + '/second/third/recipes/sample3_1.0.bb' ]
class LogHandler(logging.Handler):
def __init__(self):
logging.Handler.__init__(self)
self.logdata = []
def emit(self, record):
self.logdata.append(record.getMessage())
# Move cooker to use my special logging
logger = bb.cooker.logger
log_handler = LogHandler()
logger.addHandler(log_handler)
collection = bb.cooker.CookerCollectFiles(bbfile_config_priorities)
collection.collection_priorities(pkgfns, self.d)
logger.removeHandler(log_handler)
# Should be empty (no generated messages)
expected = []
self.assertEqual(log_handler.logdata, expected)
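The sublayer test above guards a specific failure: with collections ordered shortest-to-longest, a recipe under second/third also matches second's pattern, and mis-assigning it would make the third collection appear to match no .bb files. A hedged longest-match sketch of the assignment being tested (match_collections is a hypothetical reduction, not bb.cooker.CookerCollectFiles itself):

import re

# Hypothetical reduction: each recipe is claimed by the longest
# (most specific) matching BBFILE_PATTERN regex.
def match_collections(pkgfns, priorities):
    matched = {}
    for fn in pkgfns:
        best = None
        for collection, pattern, regex, priority in priorities:
            if regex.match(fn) and (best is None or len(pattern) > len(best[1])):
                best = (collection, pattern)
        if best:
            matched[fn] = best[0]
    return matched

prios = [(c, p, re.compile(p), 5) for c, p in
         (("first", "^/top/first/"),
          ("second", "^/top/second/"),
          ("third", "^/top/second/third/"))]
assert match_collections(["/top/second/third/recipes/sample3_1.0.bb"],
                         prios) == {"/top/second/third/recipes/sample3_1.0.bb": "third"}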


@@ -34,14 +34,14 @@ class COWTestCase(unittest.TestCase):
from bb.COW import COWDictBase
a = COWDictBase.copy()
self.assertEqual(False, 'a' in a)
self.assertEquals(False, a.has_key('a'))
a['a'] = 'a'
a['b'] = 'b'
self.assertEqual(True, 'a' in a)
self.assertEqual(True, 'b' in a)
self.assertEqual('a', a['a'] )
self.assertEqual('b', a['b'] )
self.assertEquals(True, a.has_key('a'))
self.assertEquals(True, a.has_key('b'))
self.assertEquals('a', a['a'] )
self.assertEquals('b', a['b'] )
def testCopyCopy(self):
"""
@@ -60,31 +60,31 @@ class COWTestCase(unittest.TestCase):
c['a'] = 30
# test separation of the two instances
self.assertEqual(False, 'c' in c)
self.assertEqual(30, c['a'])
self.assertEqual(10, b['a'])
self.assertEquals(False, c.has_key('c'))
self.assertEquals(30, c['a'])
self.assertEquals(10, b['a'])
# test copy
b_2 = b.copy()
c_2 = c.copy()
self.assertEqual(False, 'c' in c_2)
self.assertEqual(10, b_2['a'])
self.assertEquals(False, c_2.has_key('c'))
self.assertEquals(10, b_2['a'])
b_2['d'] = 40
self.assertEqual(False, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(False, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
c_2['d'] = 30
self.assertEqual(True, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(30, c_2['d'])
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(True, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(30, c_2['d'])
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
# test copy of the copy
c_3 = c_2.copy()
@@ -92,19 +92,19 @@ class COWTestCase(unittest.TestCase):
b_3_2 = b_2.copy()
c_3['e'] = 4711
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(False, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(False, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
b_3['e'] = 'viel'
self.assertEqual('viel', b_3['e'])
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(True, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals('viel', b_3['e'])
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(True, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
def testCow(self):
from bb.COW import COWDictBase
@@ -115,12 +115,12 @@ class COWTestCase(unittest.TestCase):
copy = c.copy()
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1027, copy['123'])
self.assertEqual(4711, copy['other'])
self.assertEqual({'abc':10, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1027, copy['123'])
self.assertEquals(4711, copy['other'])
self.assertEquals({'abc':10, 'bcd':20}, copy['d'])
# cow it now
copy['123'] = 1028
@@ -128,9 +128,9 @@ class COWTestCase(unittest.TestCase):
copy['d']['abc'] = 20
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1028, copy['123'])
self.assertEqual(4712, copy['other'])
self.assertEqual({'abc':20, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1028, copy['123'])
self.assertEquals(4712, copy['other'])
self.assertEquals({'abc':20, 'bcd':20}, copy['d'])
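The COW assertions above all reduce to one behaviour: a copy reads through to its parent until a key is written, and writes never leak back. A toy copy-on-write mapping under just those assumptions (the real bb.COW is implemented differently and also handles nested values, as the testCow 'd' case shows):

# Toy copy-on-write mapping: reads fall through to the parent,
# writes land in a private layer, copy() is O(1).
class ToyCOW:
    def __init__(self, parent=None):
        self._parent = parent
        self._local = {}

    def copy(self):
        return ToyCOW(self)

    def __setitem__(self, key, value):
        self._local[key] = value

    def __getitem__(self, key):
        if key in self._local:
            return self._local[key]
        if self._parent is not None:
            return self._parent[key]
        raise KeyError(key)

    def __contains__(self, key):
        if key in self._local:
            return True
        return self._parent is not None and key in self._parent

b = ToyCOW(); b["a"] = 10
c = b.copy(); c["a"] = 30
assert (b["a"], c["a"]) == (10, 30)   # the write to c does not leak into b
assert "a" in c and "x" not in c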


@@ -77,13 +77,13 @@ class DataExpansions(unittest.TestCase):
self.assertEqual(str(val), "boo value_of_foo")
def test_python_snippet_getvar(self):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_python_unexpanded(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
self.assertEqual(str(val), "${@d.getVar('foo') + ' ${unsetvar}'}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "${@d.getVar('foo', True) + ' ${unsetvar}'}")
def test_python_snippet_syntax_error(self):
self.d.setVar("FOO", "${@foo = 5}")
@@ -99,7 +99,7 @@ class DataExpansions(unittest.TestCase):
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_value_containing_value(self):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_reference_undefined_var(self):
@@ -109,7 +109,7 @@ class DataExpansions(unittest.TestCase):
def test_double_reference(self):
self.d.setVar("BAR", "bar value")
self.d.setVar("FOO", "${BAR} foo ${BAR}")
val = self.d.getVar("FOO")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "bar value foo bar value")
def test_direct_recursion(self):
@@ -129,12 +129,12 @@ class DataExpansions(unittest.TestCase):
def test_incomplete_varexp_single_quotes(self):
self.d.setVar("FOO", "sed -i -e 's:IP{:I${:g' $pc")
val = self.d.getVar("FOO")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "sed -i -e 's:IP{:I${:g' $pc")
def test_nonstring(self):
self.d.setVar("TEST", 5)
val = self.d.getVar("TEST")
val = self.d.getVar("TEST", True)
self.assertEqual(str(val), "5")
def test_rename(self):
@@ -147,14 +147,14 @@ class DataExpansions(unittest.TestCase):
self.assertEqual(self.d.getVar("foo", False), None)
def test_keys(self):
keys = list(self.d.keys())
self.assertCountEqual(keys, ['value_of_foo', 'foo', 'bar'])
keys = self.d.keys()
self.assertEqual(keys, ['value_of_foo', 'foo', 'bar'])
def test_keys_deletion(self):
newd = bb.data.createCopy(self.d)
newd.delVar("bar")
keys = list(newd.keys())
self.assertCountEqual(keys, ['value_of_foo', 'foo'])
keys = newd.keys()
self.assertEqual(keys, ['value_of_foo', 'foo'])
class TestNestedExpansions(unittest.TestCase):
def setUp(self):
@@ -234,19 +234,19 @@ class TestConcat(unittest.TestCase):
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.assertEqual(self.d.getVar("TEST"), "foo:val")
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val:bar")
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.d.appendVar("TEST", ":val2")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
class TestConcatOverride(unittest.TestCase):
def setUp(self):
@@ -258,66 +258,62 @@ class TestConcatOverride(unittest.TestCase):
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_prepend", "${FOO}:")
self.assertEqual(self.d.getVar("TEST"), "foo:val")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val:bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_prepend", "${FOO}:")
self.d.setVar("TEST_append", ":val2")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
def test_append_unset(self):
self.d.setVar("TEST_prepend", "${FOO}:")
self.d.setVar("TEST_append", ":val2")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo::val2:bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo::val2:bar")
def test_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.assertEqual(self.d.getVar("TEST"), "bar")
def test_remove_cleared(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.d.setVar("TEST", "${VAL} ${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val bar")
# Ensure the value is unchanged if we have an inactive remove override
# (including that whitespace is preserved)
def test_remove_inactive_override(self):
self.d.setVar("TEST", "${VAL} ${BAR} 123")
self.d.setVar("TEST_remove_inactiveoverride", "val")
self.assertEqual(self.d.getVar("TEST"), "val bar 123")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "bar")
def test_doubleref_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
self.assertEqual(self.d.getVar("TEST_TEST"), "bar bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST_TEST", True), "bar bar")
def test_empty_remove(self):
self.d.setVar("TEST", "")
self.d.setVar("TEST_remove", "val")
self.assertEqual(self.d.getVar("TEST"), "")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "")
def test_remove_expansion(self):
self.d.setVar("BAR", "Z")
self.d.setVar("TEST", "${BAR}/X Y")
self.d.setVar("TEST_remove", "${BAR}/X")
self.assertEqual(self.d.getVar("TEST"), "Y")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "Y")
def test_remove_expansion_items(self):
self.d.setVar("TEST", "A B C D")
self.d.setVar("BAR", "B D")
self.d.setVar("TEST_remove", "${BAR}")
self.assertEqual(self.d.getVar("TEST"), "A C")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "A C")
class TestOverrides(unittest.TestCase):
def setUp(self):
@@ -326,53 +322,60 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("TEST", "testvalue")
def test_no_override(self):
self.assertEqual(self.d.getVar("TEST"), "testvalue")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue")
def test_one_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue2")
def test_one_override_unset(self):
self.d.setVar("TEST2_bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST2"), "testvalue2")
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST2", True), "testvalue2")
self.assertItemsEqual(self.d.keys(), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])
def test_multiple_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_local", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
self.assertItemsEqual(self.d.keys(), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])
def test_multiple_combined_overrides(self):
self.d.setVar("TEST_local_foo_bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
def test_multiple_overrides_unset(self):
self.d.setVar("TEST2_local_foo_bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST2"), "testvalue3")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST2", True), "testvalue3")
def test_keyexpansion_override(self):
self.d.setVar("LOCAL", "local")
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_${LOCAL}", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.update_data(self.d)
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
def test_rename_override(self):
self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
self.d.setVar("OVERRIDES", "class-target")
bb.data.update_data(self.d)
self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools"), "a")
self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools", True), "a")
def test_underscore_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_some_val", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
class TestKeyExpansion(unittest.TestCase):
def setUp(self):
@@ -386,7 +389,7 @@ class TestKeyExpansion(unittest.TestCase):
with LogRecord() as logs:
bb.data.expandKeys(self.d)
self.assertTrue(logContains("Variable key VAL_${FOO} (A) replaces original key VAL_foo (B)", logs))
self.assertEqual(self.d.getVar("VAL_foo"), "A")
self.assertEqual(self.d.getVar("VAL_foo", True), "A")
class TestFlags(unittest.TestCase):
def setUp(self):
@@ -441,167 +444,3 @@ class Contains(unittest.TestCase):
self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x", True, False, self.d))
self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x y z", True, False, self.d))
class Serialize(unittest.TestCase):
def test_serialize(self):
import tempfile
import pickle
d = bb.data.init()
d.enableTracking()
d.setVar('HELLO', 'world')
d.setVarFlag('HELLO', 'other', 'planet')
with tempfile.NamedTemporaryFile(delete=False) as tmpfile:
tmpfilename = tmpfile.name
pickle.dump(d, tmpfile)
with open(tmpfilename, 'rb') as f:
newd = pickle.load(f)
os.remove(tmpfilename)
self.assertEqual(d, newd)
self.assertEqual(newd.getVar('HELLO'), 'world')
self.assertEqual(newd.getVarFlag('HELLO', 'other'), 'planet')
# Remote datastore tests
# These really only test the interface, since in actual usage we have a
# tinfoil connector that does everything over RPC, and this doesn't test
# that.
class TestConnector:
d = None
def __init__(self, d):
self.d = d
def getVar(self, name):
return self.d._findVar(name)
def getKeys(self):
return set(self.d.keys())
def getVarHistory(self, name):
return self.d.varhistory.variable(name)
def expandPythonRef(self, varname, expr, d):
localdata = self.d.createCopy()
for key in d.localkeys():
localdata.setVar(key, d.getVar(key))
varparse = bb.data_smart.VariableParse(varname, localdata)
return varparse.python_sub(expr)
def setVar(self, name, value):
self.d.setVar(name, value)
def setVarFlag(self, name, flag, value):
self.d.setVarFlag(name, flag, value)
def delVar(self, name):
self.d.delVar(name)
return False
def delVarFlag(self, name, flag):
self.d.delVarFlag(name, flag)
return False
def renameVar(self, name, newname):
self.d.renameVar(name, newname)
return False
class Remote(unittest.TestCase):
def test_remote(self):
d1 = bb.data.init()
d1.enableTracking()
d2 = bb.data.init()
d2.enableTracking()
connector = TestConnector(d1)
d2.setVar('_remote_data', connector)
d1.setVar('HELLO', 'world')
d1.setVarFlag('OTHER', 'flagname', 'flagvalue')
self.assertEqual(d2.getVar('HELLO'), 'world')
self.assertEqual(d2.expand('${HELLO}'), 'world')
self.assertEqual(d2.expand('${@d.getVar("HELLO")}'), 'world')
self.assertIn('flagname', d2.getVarFlags('OTHER'))
self.assertEqual(d2.getVarFlag('OTHER', 'flagname'), 'flagvalue')
self.assertEqual(d1.varhistory.variable('HELLO'), d2.varhistory.variable('HELLO'))
# Test setVar on client side affects server
d2.setVar('HELLO', 'other-world')
self.assertEqual(d1.getVar('HELLO'), 'other-world')
# Test setVarFlag on client side affects server
d2.setVarFlag('HELLO', 'flagname', 'flagvalue')
self.assertEqual(d1.getVarFlag('HELLO', 'flagname'), 'flagvalue')
# Test client side data is incorporated in python expansion (which is done on server)
d2.setVar('FOO', 'bar')
self.assertEqual(d2.expand('${@d.getVar("FOO")}'), 'bar')
# Test overrides work
d1.setVar('FOO_test', 'baz')
d1.appendVar('OVERRIDES', ':test')
self.assertEqual(d2.getVar('FOO'), 'baz')
# Remote equivalents of local test classes
# Note that these aren't perfect since we only test in one direction
class RemoteDataExpansions(DataExpansions):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1["foo"] = "value_of_foo"
self.d1["bar"] = "value_of_bar"
self.d1["value_of_foo"] = "value_of_'value_of_foo'"
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteNestedExpansions(TestNestedExpansions):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1["foo"] = "foo"
self.d1["bar"] = "bar"
self.d1["value_of_foobar"] = "187"
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteConcat(TestConcat):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("VAL", "val")
self.d1.setVar("BAR", "bar")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteConcatOverride(TestConcatOverride):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("VAL", "val")
self.d1.setVar("BAR", "bar")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteOverrides(TestOverrides):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("OVERRIDES", "foo:bar:local")
self.d1.setVar("TEST", "testvalue")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteKeyExpansion(TestKeyExpansion):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("BAR", "foo")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteFlags(TestFlags):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("foo", "value of foo")
self.d1.setVarFlag("foo", "flag1", "value of flag1")
self.d1.setVarFlag("foo", "flag2", "value of flag2")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
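A note on the override tests above: they all follow the same resolution order. VAR_<override> replaces VAR when <override> is active in OVERRIDES (the rightmost active override winning), then _prepend/_append are applied, then _remove filters words out. A toy resolver under those assumptions (the real logic, including expansion and the inactive-override case, lives in bb/data_smart.py):

# Toy resolver for the override behaviour asserted above; store is a
# plain dict standing in for the datastore.
def resolve(varname, store, overrides):
    value = store.get(varname, "")
    for o in overrides:  # later entries override earlier ones
        if "%s_%s" % (varname, o) in store:
            value = store["%s_%s" % (varname, o)]
    if varname + "_prepend" in store:
        value = store[varname + "_prepend"] + value
    if varname + "_append" in store:
        value = value + store[varname + "_append"]
    for word in store.get(varname + "_remove", "").split():
        value = " ".join(w for w in value.split() if w != word)
    return value

d = {"TEST": "testvalue", "TEST_bar": "testvalue2", "TEST_local": "testvalue3"}
assert resolve("TEST", d, ["foo", "bar", "local"]) == "testvalue3"
assert resolve("TEST", {"TEST": "val bar", "TEST_remove": "val"}, []) == "bar"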


@@ -1,986 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for the Event implementation (event.py)
#
# Copyright (C) 2017 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import bb
import logging
import bb.compat
import bb.event
import importlib
import threading
import time
import pickle
from unittest.mock import Mock
from unittest.mock import call
from bb.msg import BBLogFormatter
class EventQueueStubBase(object):
""" Base class for EventQueueStub classes """
def __init__(self):
self.event_calls = []
return
def _store_event_data_string(self, event):
if isinstance(event, logging.LogRecord):
formatter = BBLogFormatter("%(levelname)s: %(message)s")
self.event_calls.append(formatter.format(event))
else:
self.event_calls.append(bb.event.getName(event))
return
class EventQueueStub(EventQueueStubBase):
""" Class used as specification for UI event handler queue stub objects """
def __init__(self):
super(EventQueueStub, self).__init__()
def send(self, event):
super(EventQueueStub, self)._store_event_data_string(event)
class PickleEventQueueStub(EventQueueStubBase):
""" Class used as specification for UI event handler queue stub objects
with sendpickle method """
def __init__(self):
super(PickleEventQueueStub, self).__init__()
def sendpickle(self, pickled_event):
event = pickle.loads(pickled_event)
super(PickleEventQueueStub, self)._store_event_data_string(event)
class UIClientStub(object):
""" Class used as specification for UI event handler stub objects """
def __init__(self):
self.event = None
class EventHandlingTest(unittest.TestCase):
""" Event handling test class """
def setUp(self):
self._test_process = Mock()
ui_client1 = UIClientStub()
ui_client2 = UIClientStub()
self._test_ui1 = Mock(wraps=ui_client1)
self._test_ui2 = Mock(wraps=ui_client2)
importlib.reload(bb.event)
def _create_test_handlers(self):
""" Method used to create a test handler ordered dictionary """
test_handlers = bb.compat.OrderedDict()
test_handlers["handler1"] = self._test_process.handler1
test_handlers["handler2"] = self._test_process.handler2
return test_handlers
def test_class_handlers(self):
""" Test set_class_handlers and get_class_handlers methods """
test_handlers = self._create_test_handlers()
bb.event.set_class_handlers(test_handlers)
self.assertEqual(test_handlers,
bb.event.get_class_handlers())
def test_handlers(self):
""" Test set_handlers and get_handlers """
test_handlers = self._create_test_handlers()
bb.event.set_handlers(test_handlers)
self.assertEqual(test_handlers,
bb.event.get_handlers())
def test_clean_class_handlers(self):
""" Test clean_class_handlers method """
cleanDict = bb.compat.OrderedDict()
self.assertEqual(cleanDict,
bb.event.clean_class_handlers())
def test_register(self):
""" Test register method for class handlers """
result = bb.event.register("handler", self._test_process.handler)
self.assertEqual(result, bb.event.Registered)
handlers_dict = bb.event.get_class_handlers()
self.assertIn("handler", handlers_dict)
def test_already_registered(self):
""" Test detection of an already registed class handler """
bb.event.register("handler", self._test_process.handler)
handlers_dict = bb.event.get_class_handlers()
self.assertIn("handler", handlers_dict)
result = bb.event.register("handler", self._test_process.handler)
self.assertEqual(result, bb.event.AlreadyRegistered)
def test_register_from_string(self):
""" Test register method receiving code in string """
result = bb.event.register("string_handler", " return True")
self.assertEqual(result, bb.event.Registered)
handlers_dict = bb.event.get_class_handlers()
self.assertIn("string_handler", handlers_dict)
def test_register_with_mask(self):
""" Test register method with event masking """
mask = ["bb.event.OperationStarted",
"bb.event.OperationCompleted"]
result = bb.event.register("event_handler",
self._test_process.event_handler,
mask)
self.assertEqual(result, bb.event.Registered)
handlers_dict = bb.event.get_class_handlers()
self.assertIn("event_handler", handlers_dict)
def test_remove(self):
""" Test remove method for class handlers """
test_handlers = self._create_test_handlers()
bb.event.set_class_handlers(test_handlers)
count = len(test_handlers)
bb.event.remove("handler1", None)
test_handlers = bb.event.get_class_handlers()
self.assertEqual(len(test_handlers), count - 1)
with self.assertRaises(KeyError):
bb.event.remove("handler1", None)
def test_execute_handler(self):
""" Test execute_handler method for class handlers """
mask = ["bb.event.OperationProgress"]
result = bb.event.register("event_handler",
self._test_process.event_handler,
mask)
self.assertEqual(result, bb.event.Registered)
event = bb.event.OperationProgress(current=10, total=100)
bb.event.execute_handler("event_handler",
self._test_process.event_handler,
event,
None)
self._test_process.event_handler.assert_called_once_with(event)
def test_fire_class_handlers(self):
""" Test fire_class_handlers method """
mask = ["bb.event.OperationStarted"]
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
mask)
self.assertEqual(result, bb.event.Registered)
result = bb.event.register("event_handler2",
self._test_process.event_handler2,
"*")
self.assertEqual(result, bb.event.Registered)
event1 = bb.event.OperationStarted()
event2 = bb.event.OperationCompleted(total=123)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = [call(event1)]
expected_event_handler2 = [call(event1),
call(event2),
call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
expected_event_handler2)
def test_class_handler_filters(self):
""" Test filters for class handlers """
mask = ["bb.event.OperationStarted"]
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
mask)
self.assertEqual(result, bb.event.Registered)
result = bb.event.register("event_handler2",
self._test_process.event_handler2,
"*")
self.assertEqual(result, bb.event.Registered)
bb.event.set_eventfilter(
lambda name, handler, event, d :
name == 'event_handler2' and
bb.event.getName(event) == "OperationStarted")
event1 = bb.event.OperationStarted()
event2 = bb.event.OperationCompleted(total=123)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
bb.event.fire_class_handlers(event2, None)
expected_event_handler1 = []
expected_event_handler2 = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected_event_handler1)
self.assertEqual(self._test_process.event_handler2.call_args_list,
expected_event_handler2)
def test_change_handler_event_mapping(self):
""" Test changing the event mapping for class handlers """
event1 = bb.event.OperationStarted()
event2 = bb.event.OperationCompleted(total=123)
# register handler for all events
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
"*")
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
# unregister handler and register it only for OperationStarted
bb.event.remove("event_handler1",
self._test_process.event_handler1)
mask = ["bb.event.OperationStarted"]
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
mask)
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1), call(event2), call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
# unregister handler and register it only for OperationCompleted
bb.event.remove("event_handler1",
self._test_process.event_handler1)
mask = ["bb.event.OperationCompleted"]
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
mask)
self.assertEqual(result, bb.event.Registered)
bb.event.fire_class_handlers(event1, None)
bb.event.fire_class_handlers(event2, None)
expected = [call(event1), call(event2), call(event1), call(event2)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
def test_register_UIHhandler(self):
""" Test register_UIHhandler method """
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
def test_UIHhandler_already_registered(self):
""" Test registering an UIHhandler already existing """
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 2)
def test_unregister_UIHhandler(self):
""" Test unregister_UIHhandler method """
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
result = bb.event.unregister_UIHhandler(1)
self.assertIs(result, None)
def test_fire_ui_handlers(self):
""" Test fire_ui_handlers method """
self._test_ui1.event = Mock(spec_set=EventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
self._test_ui2.event = Mock(spec_set=PickleEventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
self.assertEqual(result, 2)
event1 = bb.event.OperationStarted()
bb.event.fire_ui_handlers(event1, None)
expected = [call(event1)]
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
expected = [call(pickle.dumps(event1))]
self.assertEqual(self._test_ui2.event.sendpickle.call_args_list,
expected)
def test_ui_handler_mask_filter(self):
""" Test filters for UI handlers """
mask = ["bb.event.OperationStarted"]
debug_domains = {}
self._test_ui1.event = Mock(spec_set=EventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
bb.event.set_UIHmask(result, logging.INFO, debug_domains, mask)
self._test_ui2.event = Mock(spec_set=PickleEventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
bb.event.set_UIHmask(result, logging.INFO, debug_domains, mask)
event1 = bb.event.OperationStarted()
event2 = bb.event.OperationCompleted(total=1)
bb.event.fire_ui_handlers(event1, None)
bb.event.fire_ui_handlers(event2, None)
expected = [call(event1)]
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
expected = [call(pickle.dumps(event1))]
self.assertEqual(self._test_ui2.event.sendpickle.call_args_list,
expected)
def test_ui_handler_log_filter(self):
""" Test log filters for UI handlers """
mask = ["*"]
debug_domains = {'BitBake.Foo': logging.WARNING}
self._test_ui1.event = EventQueueStub()
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
bb.event.set_UIHmask(result, logging.ERROR, debug_domains, mask)
self._test_ui2.event = PickleEventQueueStub()
result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
bb.event.set_UIHmask(result, logging.ERROR, debug_domains, mask)
event1 = bb.event.OperationStarted()
bb.event.fire_ui_handlers(event1, None) # All events match
event_log_handler = bb.event.LogHandler()
logger = logging.getLogger("BitBake")
logger.addHandler(event_log_handler)
logger1 = logging.getLogger("BitBake.Foo")
logger1.warning("Test warning LogRecord1") # Matches debug_domains level
logger1.info("Test info LogRecord") # Filtered out
logger2 = logging.getLogger("BitBake.Bar")
logger2.error("Test error LogRecord") # Matches filter base level
logger2.warning("Test warning LogRecord2") # Filtered out
logger.removeHandler(event_log_handler)
expected = ['OperationStarted',
'WARNING: Test warning LogRecord1',
'ERROR: Test error LogRecord']
self.assertEqual(self._test_ui1.event.event_calls, expected)
self.assertEqual(self._test_ui2.event.event_calls, expected)
def test_fire(self):
""" Test fire method used to trigger class and ui event handlers """
mask = ["bb.event.ConfigParsed"]
result = bb.event.register("event_handler1",
self._test_process.event_handler1,
mask)
self._test_ui1.event = Mock(spec_set=EventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
event1 = bb.event.ConfigParsed()
bb.event.fire(event1, None)
expected = [call(event1)]
self.assertEqual(self._test_process.event_handler1.call_args_list,
expected)
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
def test_fire_from_worker(self):
""" Test fire_from_worker method """
self._test_ui1.event = Mock(spec_set=EventQueueStub)
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
event1 = bb.event.ConfigParsed()
bb.event.fire_from_worker(event1, None)
expected = [call(event1)]
self.assertEqual(self._test_ui1.event.send.call_args_list,
expected)
def test_worker_fire(self):
""" Test the triggering of bb.event.worker_fire callback """
bb.event.worker_fire = Mock()
event = bb.event.Event()
bb.event.fire(event, None)
expected = [call(event, None)]
self.assertEqual(bb.event.worker_fire.call_args_list, expected)
def test_print_ui_queue(self):
""" Test print_ui_queue method """
event1 = bb.event.OperationStarted()
event2 = bb.event.OperationCompleted(total=123)
bb.event.fire(event1, None)
bb.event.fire(event2, None)
event_log_handler = bb.event.LogHandler()
logger = logging.getLogger("BitBake")
logger.addHandler(event_log_handler)
logger.info("Test info LogRecord")
logger.warning("Test warning LogRecord")
with self.assertLogs("BitBake", level="INFO") as cm:
bb.event.print_ui_queue()
logger.removeHandler(event_log_handler)
self.assertEqual(cm.output,
["INFO:BitBake:Test info LogRecord",
"WARNING:BitBake:Test warning LogRecord"])
def _set_threadlock_test_mockups(self):
""" Create UI event handler mockups used in enable and disable
threadlock tests """
def ui1_event_send(event):
if type(event) is bb.event.ConfigParsed:
self._threadlock_test_calls.append("w1_ui1")
if type(event) is bb.event.OperationStarted:
self._threadlock_test_calls.append("w2_ui1")
time.sleep(2)
def ui2_event_send(event):
if type(event) is bb.event.ConfigParsed:
self._threadlock_test_calls.append("w1_ui2")
if type(event) is bb.event.OperationStarted:
self._threadlock_test_calls.append("w2_ui2")
time.sleep(2)
self._threadlock_test_calls = []
self._test_ui1.event = EventQueueStub()
self._test_ui1.event.send = ui1_event_send
result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
self.assertEqual(result, 1)
self._test_ui2.event = EventQueueStub()
self._test_ui2.event.send = ui2_event_send
result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
self.assertEqual(result, 2)
def _set_and_run_threadlock_test_workers(self):
""" Create and run the workers used to trigger events in enable and
disable threadlock tests """
worker1 = threading.Thread(target=self._thread_lock_test_worker1)
worker2 = threading.Thread(target=self._thread_lock_test_worker2)
worker1.start()
time.sleep(1)
worker2.start()
worker1.join()
worker2.join()
def _thread_lock_test_worker1(self):
""" First worker used to fire the ConfigParsed event for enable and
disable threadlocks tests """
bb.event.fire(bb.event.ConfigParsed(), None)
def _thread_lock_test_worker2(self):
""" Second worker used to fire the OperationStarted event for enable
and disable threadlocks tests """
bb.event.fire(bb.event.OperationStarted(), None)
def test_enable_threadlock(self):
""" Test enable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.enable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be in order as all the registered
# handlers for the event coming from the first worker should be
# called before processing the event from the second worker.
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"])
def test_disable_threadlock(self):
""" Test disable_threadlock method """
self._set_threadlock_test_mockups()
bb.event.disable_threadlock()
self._set_and_run_threadlock_test_workers()
# Calls to UI handlers should be intertwined together. Thanks to the
# delay in the registered handlers for the event coming from the first
# worker, the event coming from the second worker starts being
# processed before finishing handling the first worker event.
self.assertEqual(self._threadlock_test_calls,
["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"])
class EventClassesTest(unittest.TestCase):
""" Event classes test class """
_worker_pid = 54321
def setUp(self):
bb.event.worker_pid = EventClassesTest._worker_pid
def test_Event(self):
""" Test the Event base class """
event = bb.event.Event()
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_HeartbeatEvent(self):
""" Test the HeartbeatEvent class """
time = 10
event = bb.event.HeartbeatEvent(time)
self.assertEqual(event.time, time)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_OperationStarted(self):
""" Test OperationStarted event class """
msg = "Foo Bar"
event = bb.event.OperationStarted(msg)
self.assertEqual(event.msg, msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_OperationCompleted(self):
""" Test OperationCompleted event class """
msg = "Foo Bar"
total = 123
event = bb.event.OperationCompleted(total, msg)
self.assertEqual(event.msg, msg)
self.assertEqual(event.total, total)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_OperationProgress(self):
""" Test OperationProgress event class """
msg = "Foo Bar"
total = 123
current = 111
event = bb.event.OperationProgress(current, total, msg)
self.assertEqual(event.msg, msg + ": %s/%s" % (current, total))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ConfigParsed(self):
""" Test the ConfigParsed class """
event = bb.event.ConfigParsed()
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_MultiConfigParsed(self):
""" Test MultiConfigParsed event class """
mcdata = {"foobar": "Foo Bar"}
event = bb.event.MultiConfigParsed(mcdata)
self.assertEqual(event.mcdata, mcdata)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_RecipeEvent(self):
""" Test RecipeEvent event base class """
callback = lambda a: 2 * a
event = bb.event.RecipeEvent(callback)
self.assertEqual(event.fn(1), callback(1))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_RecipePreFinalise(self):
""" Test RecipePreFinalise event class """
callback = lambda a: 2 * a
event = bb.event.RecipePreFinalise(callback)
self.assertEqual(event.fn(1), callback(1))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_RecipeTaskPreProcess(self):
""" Test RecipeTaskPreProcess event class """
callback = lambda a: 2 * a
tasklist = [("foobar", callback)]
event = bb.event.RecipeTaskPreProcess(callback, tasklist)
self.assertEqual(event.fn(1), callback(1))
self.assertEqual(event.tasklist, tasklist)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_RecipeParsed(self):
""" Test RecipeParsed event base class """
callback = lambda a: 2 * a
event = bb.event.RecipeParsed(callback)
self.assertEqual(event.fn(1), callback(1))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_StampUpdate(self):
""" Test StampUpdate event class """
targets = ["foo", "bar"]
stampfns = [lambda:"foobar"]
event = bb.event.StampUpdate(targets, stampfns)
self.assertEqual(event.targets, targets)
self.assertEqual(event.stampPrefix, stampfns)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_BuildBase(self):
""" Test base class for bitbake build events """
name = "foo"
pkgs = ["bar"]
failures = 123
event = bb.event.BuildBase(name, pkgs, failures)
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), failures)
name = event.name = "bar"
pkgs = event.pkgs = ["foo"]
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), failures)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_BuildInit(self):
""" Test class for bitbake build invocation events """
event = bb.event.BuildInit()
self.assertEqual(event.name, None)
self.assertEqual(event.pkgs, [])
self.assertEqual(event.getFailures(), 0)
name = event.name = "bar"
pkgs = event.pkgs = ["foo"]
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), 0)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_BuildStarted(self):
""" Test class for build started events """
name = "foo"
pkgs = ["bar"]
failures = 123
event = bb.event.BuildStarted(name, pkgs, failures)
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), failures)
self.assertEqual(event.msg, "Building Started")
name = event.name = "bar"
pkgs = event.pkgs = ["foo"]
msg = event.msg = "foobar"
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), failures)
self.assertEqual(event.msg, msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_BuildCompleted(self):
""" Test class for build completed events """
total = 1000
name = "foo"
pkgs = ["bar"]
failures = 123
interrupted = 1
event = bb.event.BuildCompleted(total, name, pkgs, failures,
interrupted)
self.assertEqual(event.name, name)
self.assertEqual(event.pkgs, pkgs)
self.assertEqual(event.getFailures(), failures)
self.assertEqual(event.msg, "Building Failed")
event2 = bb.event.BuildCompleted(total, name, pkgs)
self.assertEqual(event2.name, name)
self.assertEqual(event2.pkgs, pkgs)
self.assertEqual(event2.getFailures(), 0)
self.assertEqual(event2.msg, "Building Succeeded")
self.assertEqual(event2.pid, EventClassesTest._worker_pid)
def test_DiskFull(self):
""" Test DiskFull event class """
dev = "/dev/foo"
type = "ext4"
freespace = "104M"
mountpoint = "/"
event = bb.event.DiskFull(dev, type, freespace, mountpoint)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_MonitorDiskEvent(self):
""" Test MonitorDiskEvent class """
available_bytes = 10000000
free_bytes = 90000000
total_bytes = 1000000000
du = bb.event.DiskUsageSample(available_bytes, free_bytes,
total_bytes)
event = bb.event.MonitorDiskEvent(du)
self.assertEqual(event.disk_usage.available_bytes, available_bytes)
self.assertEqual(event.disk_usage.free_bytes, free_bytes)
self.assertEqual(event.disk_usage.total_bytes, total_bytes)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_NoProvider(self):
""" Test NoProvider event class """
item = "foobar"
event1 = bb.event.NoProvider(item)
self.assertEqual(event1.getItem(), item)
self.assertEqual(event1.isRuntime(), False)
self.assertEqual(str(event1), "Nothing PROVIDES 'foobar'")
runtime = True
dependees = ["foo", "bar"]
reasons = None
close_matches = ["foibar", "footbar"]
event2 = bb.event.NoProvider(item, runtime, dependees, reasons,
close_matches)
self.assertEqual(event2.isRuntime(), True)
expected = ("Nothing RPROVIDES 'foobar' (but foo, bar RDEPENDS"
" on or otherwise requires it). Close matches:\n"
" foibar\n"
" footbar")
self.assertEqual(str(event2), expected)
reasons = ["Item does not exist on database"]
close_matches = ["foibar", "footbar"]
event3 = bb.event.NoProvider(item, runtime, dependees, reasons,
close_matches)
expected = ("Nothing RPROVIDES 'foobar' (but foo, bar RDEPENDS"
" on or otherwise requires it)\n"
"Item does not exist on database")
self.assertEqual(str(event3), expected)
self.assertEqual(event3.pid, EventClassesTest._worker_pid)
def test_MultipleProviders(self):
""" Test MultipleProviders event class """
item = "foobar"
candidates = ["foobarv1", "foobars"]
event1 = bb.event.MultipleProviders(item, candidates)
self.assertEqual(event1.isRuntime(), False)
self.assertEqual(event1.getItem(), item)
self.assertEqual(event1.getCandidates(), candidates)
expected = ("Multiple providers are available for foobar (foobarv1,"
" foobars)\n"
"Consider defining a PREFERRED_PROVIDER entry to match "
"foobar")
self.assertEqual(str(event1), expected)
runtime = True
event2 = bb.event.MultipleProviders(item, candidates, runtime)
self.assertEqual(event2.isRuntime(), runtime)
expected = ("Multiple providers are available for runtime foobar "
"(foobarv1, foobars)\n"
"Consider defining a PREFERRED_RPROVIDER entry to match "
"foobar")
self.assertEqual(str(event2), expected)
self.assertEqual(event2.pid, EventClassesTest._worker_pid)
def test_ParseStarted(self):
""" Test ParseStarted event class """
total = 123
event = bb.event.ParseStarted(total)
self.assertEqual(event.msg, "Recipe parsing Started")
self.assertEqual(event.total, total)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ParseCompleted(self):
""" Test ParseCompleted event class """
cached = 10
parsed = 13
skipped = 7
virtuals = 2
masked = 1
errors = 0
total = 23
event = bb.event.ParseCompleted(cached, parsed, skipped, masked,
virtuals, errors, total)
self.assertEqual(event.msg, "Recipe parsing Completed")
expected = [cached, parsed, skipped, virtuals, masked, errors,
cached + parsed, total]
actual = [event.cached, event.parsed, event.skipped, event.virtuals,
event.masked, event.errors, event.sofar, event.total]
self.assertEqual(str(actual), str(expected))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ParseProgress(self):
""" Test ParseProgress event class """
current = 10
total = 100
event = bb.event.ParseProgress(current, total)
self.assertEqual(event.msg,
"Recipe parsing" + ": %s/%s" % (current, total))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_CacheLoadStarted(self):
""" Test CacheLoadStarted event class """
total = 123
event = bb.event.CacheLoadStarted(total)
self.assertEqual(event.msg, "Loading cache Started")
self.assertEqual(event.total, total)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_CacheLoadProgress(self):
""" Test CacheLoadProgress event class """
current = 10
total = 100
event = bb.event.CacheLoadProgress(current, total)
self.assertEqual(event.msg,
"Loading cache" + ": %s/%s" % (current, total))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_CacheLoadCompleted(self):
""" Test CacheLoadCompleted event class """
total = 23
num_entries = 12
event = bb.event.CacheLoadCompleted(total, num_entries)
self.assertEqual(event.msg, "Loading cache Completed")
expected = [total, num_entries]
actual = [event.total, event.num_entries]
self.assertEqual(str(actual), str(expected))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_TreeDataPreparationStarted(self):
""" Test TreeDataPreparationStarted event class """
event = bb.event.TreeDataPreparationStarted()
self.assertEqual(event.msg, "Preparing tree data Started")
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_TreeDataPreparationProgress(self):
""" Test TreeDataPreparationProgress event class """
current = 10
total = 100
event = bb.event.TreeDataPreparationProgress(current, total)
self.assertEqual(event.msg,
"Preparing tree data" + ": %s/%s" % (current, total))
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_TreeDataPreparationCompleted(self):
""" Test TreeDataPreparationCompleted event class """
total = 23
event = bb.event.TreeDataPreparationCompleted(total)
self.assertEqual(event.msg, "Preparing tree data Completed")
self.assertEqual(event.total, total)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_DepTreeGenerated(self):
""" Test DepTreeGenerated event class """
depgraph = Mock()
event = bb.event.DepTreeGenerated(depgraph)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_TargetsTreeGenerated(self):
""" Test TargetsTreeGenerated event class """
model = Mock()
event = bb.event.TargetsTreeGenerated(model)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ReachableStamps(self):
""" Test ReachableStamps event class """
stamps = [Mock(), Mock()]
event = bb.event.ReachableStamps(stamps)
self.assertEqual(event.stamps, stamps)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_FilesMatchingFound(self):
""" Test FilesMatchingFound event class """
pattern = "foo.*bar"
matches = ["foobar"]
event = bb.event.FilesMatchingFound(pattern, matches)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ConfigFilesFound(self):
""" Test ConfigFilesFound event class """
variable = "FOO_BAR"
values = ["foo", "bar"]
event = bb.event.ConfigFilesFound(variable, values)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ConfigFilePathFound(self):
""" Test ConfigFilePathFound event class """
path = "/foo/bar"
event = bb.event.ConfigFilePathFound(path)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_message_classes(self):
""" Test message event classes """
msg = "foobar foo bar"
event = bb.event.MsgBase(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgDebug(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgNote(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgWarn(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgError(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgFatal(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
event = bb.event.MsgPlain(msg)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_LogExecTTY(self):
""" Test LogExecTTY event class """
msg = "foo bar"
prog = "foo.sh"
sleep_delay = 10
retries = 3
event = bb.event.LogExecTTY(msg, prog, sleep_delay, retries)
self.assertEqual(event.msg, msg)
self.assertEqual(event.prog, prog)
self.assertEqual(event.sleep_delay, sleep_delay)
self.assertEqual(event.retries, retries)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def _throw_zero_division_exception(self):
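# deliberately trigger a ZeroDivisionError so test_LogHandler below has an exception to log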
a = 1 / 0
return
def _worker_handler(self, event, d):
self._returned_event = event
return
def test_LogHandler(self):
""" Test LogHandler class """
logger = logging.getLogger("TestEventClasses")
logger.propagate = False
handler = bb.event.LogHandler(logging.INFO)
logger.addHandler(handler)
bb.event.worker_fire = self._worker_handler
try:
self._throw_zero_division_exception()
except ZeroDivisionError as ex:
logger.exception(ex)
event = self._returned_event
try:
pe = pickle.dumps(event)
newevent = pickle.loads(pe)
except:
self.fail('Logged event is not serializable')
self.assertEqual(event.taskpid, EventClassesTest._worker_pid)
def test_MetadataEvent(self):
""" Test MetadataEvent class """
eventtype = "footype"
eventdata = {"foo": "bar"}
event = bb.event.MetadataEvent(eventtype, eventdata)
self.assertEqual(event.type, eventtype)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ProcessStarted(self):
""" Test ProcessStarted class """
processname = "foo"
total = 9783128974
event = bb.event.ProcessStarted(processname, total)
self.assertEqual(event.processname, processname)
self.assertEqual(event.total, total)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ProcessProgress(self):
""" Test ProcessProgress class """
processname = "foo"
progress = 243224
event = bb.event.ProcessProgress(processname, progress)
self.assertEqual(event.processname, processname)
self.assertEqual(event.progress, progress)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_ProcessFinished(self):
""" Test ProcessFinished class """
processname = "foo"
total = 1242342344
event = bb.event.ProcessFinished(processname)
self.assertEqual(event.processname, processname)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_SanityCheck(self):
""" Test SanityCheck class """
event1 = bb.event.SanityCheck()
self.assertEqual(event1.generateevents, True)
self.assertEqual(event1.pid, EventClassesTest._worker_pid)
generateevents = False
event2 = bb.event.SanityCheck(generateevents)
self.assertEqual(event2.generateevents, generateevents)
self.assertEqual(event2.pid, EventClassesTest._worker_pid)
def test_SanityCheckPassed(self):
""" Test SanityCheckPassed class """
event = bb.event.SanityCheckPassed()
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_SanityCheckFailed(self):
""" Test SanityCheckFailed class """
msg = "The sanity test failed."
event1 = bb.event.SanityCheckFailed(msg)
self.assertEqual(event1.pid, EventClassesTest._worker_pid)
network_error = True
event2 = bb.event.SanityCheckFailed(msg, network_error)
self.assertEqual(event2.pid, EventClassesTest._worker_pid)
def test_network_event_classes(self):
""" Test network event classes """
event1 = bb.event.NetworkTest()
generateevents = False
self.assertEqual(event1.pid, EventClassesTest._worker_pid)
event2 = bb.event.NetworkTest(generateevents)
self.assertEqual(event2.pid, EventClassesTest._worker_pid)
event3 = bb.event.NetworkTestPassed()
self.assertEqual(event3.pid, EventClassesTest._worker_pid)
event4 = bb.event.NetworkTestFailed()
self.assertEqual(event4.pid, EventClassesTest._worker_pid)
def test_FindSigInfoResult(self):
""" Test FindSigInfoResult event class """
result = [Mock()]
event = bb.event.FindSigInfoResult(result)
self.assertEqual(event.result, result)
self.assertEqual(event.pid, EventClassesTest._worker_pid)

File diff suppressed because it is too large


@@ -50,7 +50,7 @@ C = "3"
     def parsehelper(self, content, suffix = ".bb"):

         f = tempfile.NamedTemporaryFile(suffix = suffix)
-        f.write(bytes(content, "utf-8"))
+        f.write(content)
         f.flush()
         os.chdir(os.path.dirname(f.name))
         return f
@@ -58,9 +58,9 @@ C = "3"
     def test_parse_simple(self):
         f = self.parsehelper(self.testfile)
         d = bb.parse.handle(f.name, self.d)['']
-        self.assertEqual(d.getVar("A"), "1")
-        self.assertEqual(d.getVar("B"), "2")
-        self.assertEqual(d.getVar("C"), "3")
+        self.assertEqual(d.getVar("A", True), "1")
+        self.assertEqual(d.getVar("B", True), "2")
+        self.assertEqual(d.getVar("C", True), "3")

     def test_parse_incomplete_function(self):
         testfileB = self.testfile.replace("}", "")
@@ -68,44 +68,6 @@ C = "3"
         with self.assertRaises(bb.parse.ParseError):
             d = bb.parse.handle(f.name, self.d)['']

-    unsettest = """
-A = "1"
-B = "2"
-B[flag] = "3"
-unset A
-unset B[flag]
-"""
-
-    def test_parse_unset(self):
-        f = self.parsehelper(self.unsettest)
-        d = bb.parse.handle(f.name, self.d)['']
-        self.assertEqual(d.getVar("A"), None)
-        self.assertEqual(d.getVarFlag("A","flag"), None)
-        self.assertEqual(d.getVar("B"), "2")
-
-    exporttest = """
-A = "a"
-export B = "b"
-export C
-exportD = "d"
-"""
-
-    def test_parse_exports(self):
-        f = self.parsehelper(self.exporttest)
-        d = bb.parse.handle(f.name, self.d)['']
-        self.assertEqual(d.getVar("A"), "a")
-        self.assertIsNone(d.getVarFlag("A", "export"))
-        self.assertEqual(d.getVar("B"), "b")
-        self.assertEqual(d.getVarFlag("B", "export"), 1)
-        self.assertIsNone(d.getVar("C"))
-        self.assertEqual(d.getVarFlag("C", "export"), 1)
-        self.assertIsNone(d.getVar("D"))
-        self.assertIsNone(d.getVarFlag("D", "export"))
-        self.assertEqual(d.getVar("exportD"), "d")
-        self.assertIsNone(d.getVarFlag("exportD", "export"))
-
     overridetest = """
 RRECOMMENDS_${PN} = "a"
 RRECOMMENDS_${PN}_libc = "b"
@@ -116,11 +78,11 @@ PN = "gtk+"
     def test_parse_overrides(self):
         f = self.parsehelper(self.overridetest)
         d = bb.parse.handle(f.name, self.d)['']
-        self.assertEqual(d.getVar("RRECOMMENDS"), "b")
+        self.assertEqual(d.getVar("RRECOMMENDS", True), "b")
         bb.data.expandKeys(d)
-        self.assertEqual(d.getVar("RRECOMMENDS"), "b")
+        self.assertEqual(d.getVar("RRECOMMENDS", True), "b")
         d.setVar("RRECOMMENDS_gtk+", "c")
-        self.assertEqual(d.getVar("RRECOMMENDS"), "c")
+        self.assertEqual(d.getVar("RRECOMMENDS", True), "c")

     overridetest2 = """
 EXTRA_OECONF = ""
@@ -133,7 +95,7 @@ EXTRA_OECONF_append = " c"
         d = bb.parse.handle(f.name, self.d)['']
         d.appendVar("EXTRA_OECONF", " d")
         d.setVar("OVERRIDES", "class-target")
-        self.assertEqual(d.getVar("EXTRA_OECONF"), "b c d")
+        self.assertEqual(d.getVar("EXTRA_OECONF", True), "b c d")

     overridetest3 = """
 DESCRIPTION = "A"
@@ -145,11 +107,11 @@ PN = "bc"
         f = self.parsehelper(self.overridetest3)
         d = bb.parse.handle(f.name, self.d)['']
         bb.data.expandKeys(d)
-        self.assertEqual(d.getVar("DESCRIPTION_bc-dev"), "A B")
+        self.assertEqual(d.getVar("DESCRIPTION_bc-dev", True), "A B")
         d.setVar("DESCRIPTION", "E")
         d.setVar("DESCRIPTION_bc-dev", "C D")
         d.setVar("OVERRIDES", "bc-dev")
-        self.assertEqual(d.getVar("DESCRIPTION"), "C D")
+        self.assertEqual(d.getVar("DESCRIPTION", True), "C D")

     classextend = """
@@ -180,6 +142,6 @@ python () {
         alldata = bb.parse.handle(f.name, self.d)
         d1 = alldata['']
         d2 = alldata[cls.name]
-        self.assertEqual(d1.getVar("VAR_var"), "B")
-        self.assertEqual(d2.getVar("VAR_var"), None)
+        self.assertEqual(d1.getVar("VAR_var", True), "B")
+        self.assertEqual(d2.getVar("VAR_var", True), None)


@@ -1,8 +1,7 @@
# tinfoil: a simple wrapper around cooker for bitbake-based command-line utilities
#
# Copyright (C) 2012-2017 Intel Corporation
# Copyright (C) 2012 Intel Corporation
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2006-2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -18,883 +17,89 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import warnings
import os
import sys
import atexit
import re
from collections import OrderedDict, defaultdict
import bb.cache
import bb.cooker
import bb.providers
import bb.taskdata
import bb.utils
import bb.command
import bb.remotedata
from bb.cooker import state, BBCooker, CookerFeatures
from bb.cookerdata import CookerConfiguration, ConfigParameters
from bb.main import setup_bitbake, BitBakeConfigParameters, BBMainException
import bb.fetch2
# We need this in order to shut down the connection to the bitbake server,
# otherwise the process will never properly exit
_server_connections = []
def _terminate_connections():
for connection in _server_connections:
connection.terminate()
atexit.register(_terminate_connections)
class TinfoilUIException(Exception):
"""Exception raised when the UI returns non-zero from its main function"""
def __init__(self, returncode):
self.returncode = returncode
def __repr__(self):
return 'UI module main returned %d' % self.returncode
class TinfoilCommandFailed(Exception):
"""Exception raised when run_command fails"""
class TinfoilDataStoreConnector:
"""Connector object used to enable access to datastore objects via tinfoil"""
def __init__(self, tinfoil, dsindex):
self.tinfoil = tinfoil
self.dsindex = dsindex
def getVar(self, name):
value = self.tinfoil.run_command('dataStoreConnectorFindVar', self.dsindex, name)
overrides = None
if isinstance(value, dict):
if '_connector_origtype' in value:
value['_content'] = self.tinfoil._reconvert_type(value['_content'], value['_connector_origtype'])
del value['_connector_origtype']
if '_connector_overrides' in value:
overrides = value['_connector_overrides']
del value['_connector_overrides']
return value, overrides
def getKeys(self):
return set(self.tinfoil.run_command('dataStoreConnectorGetKeys', self.dsindex))
def getVarHistory(self, name):
return self.tinfoil.run_command('dataStoreConnectorGetVarHistory', self.dsindex, name)
def expandPythonRef(self, varname, expr, d):
ds = bb.remotedata.RemoteDatastores.transmit_datastore(d)
ret = self.tinfoil.run_command('dataStoreConnectorExpandPythonRef', ds, varname, expr)
return ret
def setVar(self, varname, value):
if self.dsindex is None:
self.tinfoil.run_command('setVariable', varname, value)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def setVarFlag(self, varname, flagname, value):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorSetVarFlag', self.dsindex, varname, flagname, value)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def delVar(self, varname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorDelVar', self.dsindex, varname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def delVarFlag(self, varname, flagname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorDelVar', self.dsindex, varname, flagname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def renameVar(self, name, newname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorRenameVar', self.dsindex, name, newname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
class TinfoilCookerAdapter:
"""
Provide an adapter for existing code that expects to access a cooker object via Tinfoil,
since now Tinfoil is on the client side it no longer has direct access.
"""
class TinfoilCookerCollectionAdapter:
""" cooker.collection adapter """
def __init__(self, tinfoil):
self.tinfoil = tinfoil
def get_file_appends(self, fn):
return self.tinfoil.get_file_appends(fn)
def __getattr__(self, name):
if name == 'overlayed':
return self.tinfoil.get_overlayed_recipes()
elif name == 'bbappends':
return self.tinfoil.run_command('getAllAppends')
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
class TinfoilRecipeCacheAdapter:
""" cooker.recipecache adapter """
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self._cache = {}
def get_pkg_pn_fn(self):
pkg_pn = defaultdict(list, self.tinfoil.run_command('getRecipes') or [])
pkg_fn = {}
for pn, fnlist in pkg_pn.items():
for fn in fnlist:
pkg_fn[fn] = pn
self._cache['pkg_pn'] = pkg_pn
self._cache['pkg_fn'] = pkg_fn
def __getattr__(self, name):
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
elif name == 'pkg_pn':
self.get_pkg_pn_fn()
return self._cache[name]
elif name == 'pkg_fn':
self.get_pkg_pn_fn()
return self._cache[name]
elif name == 'deps':
attrvalue = defaultdict(list, self.tinfoil.run_command('getRecipeDepends') or [])
elif name == 'rundeps':
attrvalue = defaultdict(lambda: defaultdict(list), self.tinfoil.run_command('getRuntimeDepends') or [])
elif name == 'runrecs':
attrvalue = defaultdict(lambda: defaultdict(list), self.tinfoil.run_command('getRuntimeRecommends') or [])
elif name == 'pkg_pepvpr':
attrvalue = self.tinfoil.run_command('getRecipeVersions') or {}
elif name == 'inherits':
attrvalue = self.tinfoil.run_command('getRecipeInherits') or {}
elif name == 'bbfile_priority':
attrvalue = self.tinfoil.run_command('getBbFilePriority') or {}
elif name == 'pkg_dp':
attrvalue = self.tinfoil.run_command('getDefaultPreference') or {}
elif name == 'fn_provides':
attrvalue = self.tinfoil.run_command('getRecipeProvides') or {}
elif name == 'packages':
attrvalue = self.tinfoil.run_command('getRecipePackages') or {}
elif name == 'packages_dynamic':
attrvalue = self.tinfoil.run_command('getRecipePackagesDynamic') or {}
elif name == 'rproviders':
attrvalue = self.tinfoil.run_command('getRProviders') or {}
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
self._cache[name] = attrvalue
return attrvalue
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self.collection = self.TinfoilCookerCollectionAdapter(tinfoil)
self.recipecaches = {}
# FIXME all machines
self.recipecaches[''] = self.TinfoilRecipeCacheAdapter(tinfoil)
self._cache = {}
def __getattr__(self, name):
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
elif name == 'skiplist':
attrvalue = self.tinfoil.get_skipped_recipes()
elif name == 'bbfile_config_priorities':
ret = self.tinfoil.run_command('getLayerPriorities')
bbfile_config_priorities = []
for collection, pattern, regex, pri in ret:
bbfile_config_priorities.append((collection, pattern, re.compile(regex), pri))
attrvalue = bbfile_config_priorities
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
self._cache[name] = attrvalue
return attrvalue
def findBestProvider(self, pn):
return self.tinfoil.find_best_provider(pn)
class TinfoilRecipeInfo:
"""
Provides a convenient representation of the cached information for a single recipe.
Some attributes are set on construction, others are read on-demand (which internally
may result in a remote procedure call to the bitbake server the first time).
Note that only information which is cached is available through this object - if
you need other variable values you will need to parse the recipe using
Tinfoil.parse_recipe().
"""
def __init__(self, recipecache, d, pn, fn, fns):
self._recipecache = recipecache
self._d = d
self.pn = pn
self.fn = fn
self.fns = fns
self.inherit_files = recipecache.inherits[fn]
self.depends = recipecache.deps[fn]
(self.pe, self.pv, self.pr) = recipecache.pkg_pepvpr[fn]
self._cached_packages = None
self._cached_rprovides = None
self._cached_packages_dynamic = None
def __getattr__(self, name):
if name == 'alternates':
return [x for x in self.fns if x != self.fn]
elif name == 'rdepends':
return self._recipecache.rundeps[self.fn]
elif name == 'rrecommends':
return self._recipecache.runrecs[self.fn]
elif name == 'provides':
return self._recipecache.fn_provides[self.fn]
elif name == 'packages':
if self._cached_packages is None:
self._cached_packages = []
for pkg, fns in self._recipecache.packages.items():
if self.fn in fns:
self._cached_packages.append(pkg)
return self._cached_packages
elif name == 'packages_dynamic':
if self._cached_packages_dynamic is None:
self._cached_packages_dynamic = []
for pkg, fns in self._recipecache.packages_dynamic.items():
if self.fn in fns:
self._cached_packages_dynamic.append(pkg)
return self._cached_packages_dynamic
elif name == 'rprovides':
if self._cached_rprovides is None:
self._cached_rprovides = []
for pkg, fns in self._recipecache.rproviders.items():
if self.fn in fns:
self._cached_rprovides.append(pkg)
return self._cached_rprovides
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
def inherits(self, only_recipe=False):
"""
Get the inherited classes for a recipe. Returns the class names only.
Parameters:
only_recipe: True to return only the classes inherited by the recipe
itself, False to return all classes inherited within
the context for the recipe (which includes globally
inherited classes).
"""
if only_recipe:
global_inherit = [x for x in (self._d.getVar('BBINCLUDED') or '').split() if x.endswith('.bbclass')]
else:
global_inherit = []
for clsfile in self.inherit_files:
if only_recipe and clsfile in global_inherit:
continue
clsname = os.path.splitext(os.path.basename(clsfile))[0]
yield clsname
def __str__(self):
return '%s' % self.pn
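A short usage sketch (the recipe name is illustrative; get_recipe_info() is defined further down and assumes recipes have been parsed):

    info = tinfoil.get_recipe_info('zlib')
    print(info.pn, info.pv, info.fn)
    # classes the recipe itself inherits, excluding globally inherited ones
    print(list(info.inherits(only_recipe=True)))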
class Tinfoil:
"""
Tinfoil - an API for scripts and utilities to query
BitBake internals and perform build operations.
"""
def __init__(self, output=sys.stdout, tracking=False):
# Needed to avoid deprecation warnings with python 2.6
warnings.filterwarnings("ignore", category=DeprecationWarning)
def __init__(self, output=sys.stdout, tracking=False, setup_logging=True):
"""
Create a new tinfoil object.
Parameters:
output: specifies where console output should be sent. Defaults
to sys.stdout.
tracking: True to enable variable history tracking, False to
disable it (default). Enabling this has a minor
performance impact so typically it isn't enabled
unless you need to query variable history.
setup_logging: True to setup a logger so that things like
bb.warn() will work immediately and timeout warnings
are visible; False to let BitBake do this itself.
"""
# Set up logging
self.logger = logging.getLogger('BitBake')
self.config_data = None
self.cooker = None
self.tracking = tracking
self.ui_module = None
self.server_connection = None
self.recipes_parsed = False
self.quiet = 0
self.oldhandlers = self.logger.handlers[:]
if setup_logging:
# This is the *client-side* logger, nothing to do with
# logging messages from the server
bb.msg.logger_create('BitBake', output)
self.localhandlers = []
for handler in self.logger.handlers:
if handler not in self.oldhandlers:
self.localhandlers.append(handler)
self._log_hdlr = logging.StreamHandler(output)
bb.msg.addDefaultlogFilter(self._log_hdlr)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
self._log_hdlr.setFormatter(format)
self.logger.addHandler(self._log_hdlr)
def __enter__(self):
return self
self.config = CookerConfiguration()
configparams = TinfoilConfigParameters(parse_only=True)
self.config.setConfigParameters(configparams)
self.config.setServerRegIdleCallback(self.register_idle_function)
features = []
if tracking:
features.append(CookerFeatures.BASEDATASTORE_TRACKING)
self.cooker = BBCooker(self.config, features)
self.config_data = self.cooker.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
def __exit__(self, type, value, traceback):
self.shutdown()
def prepare(self, config_only=False, config_params=None, quiet=0, extra_features=None):
"""
Prepares the underlying BitBake system to be used via tinfoil.
This function must be called prior to calling any of the other
functions in the API.
NOTE: if you call prepare() you must absolutely call shutdown()
before your code terminates. You can use a "with" block to ensure
this happens e.g.
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare()
...
Parameters:
config_only: True to read only the configuration and not load
the cache / parse recipes. This is useful if you just
want to query the value of a variable at the global
level or you want to do anything else that doesn't
involve knowing anything about the recipes in the
current configuration. False loads the cache / parses
recipes.
config_params: optionally specify your own configuration
parameters. If not specified an instance of
TinfoilConfigParameters will be created internally.
quiet: quiet level controlling console output - equivalent
to bitbake's -q/--quiet option. Default of 0 gives
the same output level as normal bitbake execution.
extra_features: extra features to be added to the feature
set requested from the server. See
CookerFeatures._feature_list for possible
features.
"""
self.quiet = quiet
if self.tracking:
extrafeatures = [bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
else:
extrafeatures = []
if extra_features:
extrafeatures += extra_features
if not config_params:
config_params = TinfoilConfigParameters(config_only=config_only, quiet=quiet)
cookerconfig = CookerConfiguration()
cookerconfig.setConfigParameters(config_params)
if not config_only:
# Disable local loggers because the UI module is going to set up its own
for handler in self.localhandlers:
self.logger.handlers.remove(handler)
self.localhandlers = []
self.server_connection, ui_module = setup_bitbake(config_params,
cookerconfig,
extrafeatures)
self.ui_module = ui_module
# Ensure the path to bitbake's bin directory is in PATH so that things like
# bitbake-worker can be run (usually this is the case, but it doesn't have to be)
path = os.getenv('PATH').split(':')
bitbakebinpath = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'bin'))
for entry in path:
if entry.endswith(os.sep):
entry = entry[:-1]
if os.path.abspath(entry) == bitbakebinpath:
break
else:
path.insert(0, bitbakebinpath)
os.environ['PATH'] = ':'.join(path)
if self.server_connection:
_server_connections.append(self.server_connection)
if config_only:
config_params.updateToServer(self.server_connection.connection, os.environ.copy())
self.run_command('parseConfiguration')
else:
self.run_actions(config_params)
self.recipes_parsed = True
self.config_data = bb.data.init()
connector = TinfoilDataStoreConnector(self, None)
self.config_data.setVar('_remote_data', connector)
self.cooker = TinfoilCookerAdapter(self)
self.cooker_data = self.cooker.recipecaches['']
else:
raise Exception('Failed to start bitbake server')
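A minimal end-to-end sketch of the above (the variable name is just an example):

    import bb.tinfoil

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(config_only=True)
        # config_data behaves like a normal datastore; reads are proxied
        # to the bitbake server via the connector set up above
        print(tinfoil.config_data.getVar('MACHINE'))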
def run_actions(self, config_params):
"""
Run the actions specified in config_params through the UI.
"""
ret = self.ui_module.main(self.server_connection.connection, self.server_connection.events, config_params)
if ret:
raise TinfoilUIException(ret)
def register_idle_function(self, function, data):
pass
def parseRecipes(self):
"""
Legacy function - use parse_recipes() instead.
"""
self.parse_recipes()
sys.stderr.write("Parsing recipes..")
self.logger.setLevel(logging.WARNING)
def parse_recipes(self):
"""
Load information on all recipes. Normally you should specify
config_only=False when calling prepare() instead of using this
function; this function is designed for situations where you need
to initialise Tinfoil and use it with config_only=True first and
then conditionally call this function to parse recipes later.
"""
config_params = TinfoilConfigParameters(config_only=False)
self.run_actions(config_params)
self.recipes_parsed = True
def run_command(self, command, *params):
"""
Run a command on the server (as implemented in bb.command).
Note that there are two types of command - synchronous and
asynchronous; in order to receive the results of asynchronous
commands you will need to set an appropriate event mask
using set_event_mask() and listen for the result using
wait_event() - with the correct event mask you'll at least get
bb.command.CommandCompleted and possibly other events before
that depending on the command.
"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
commandline = [command]
if params:
commandline.extend(params)
result = self.server_connection.connection.runCommand(commandline)
if result[1]:
raise TinfoilCommandFailed(result[1])
return result[0]
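For a synchronous command the result comes back directly, for example using the getVariable command from bb.command (variable name illustrative):

    machine = tinfoil.run_command('getVariable', 'MACHINE')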
def set_event_mask(self, eventlist):
"""Set the event mask which will be applied within wait_event()"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
llevel, debug_domains = bb.msg.constructLogOptions()
ret = self.run_command('setEventMask', self.server_connection.connection.getEventHandle(), llevel, debug_domains, eventlist)
if not ret:
raise Exception('setEventMask failed')
def wait_event(self, timeout=0):
"""
Wait for an event from the server for the specified time.
A timeout of 0 means don't wait if there are no events in the queue.
Returns the next event in the queue or None if the timeout was
reached. Note that in order to receive any events you will
first need to set the internal event mask using set_event_mask()
(otherwise whatever event mask the UI set up will be in effect).
"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
return self.server_connection.events.waitEvent(timeout)
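An asynchronous command follows the same pattern build_targets() below uses: set a mask, issue the command, then poll wait_event(). A trimmed-down sketch (target and task are illustrative; bb.command is already imported at the top of this module):

    tinfoil.set_event_mask(['bb.command.CommandCompleted',
                            'bb.command.CommandFailed'])
    tinfoil.run_command('buildTargets', ['quilt-native'], 'fetch')
    while True:
        event = tinfoil.wait_event(0.25)  # None on timeout, keep polling
        if isinstance(event, bb.command.CommandCompleted):
            break
        if isinstance(event, bb.command.CommandFailed):
            raise Exception(str(event))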
def get_overlayed_recipes(self):
"""
Find recipes which are overlayed (i.e. where recipes exist in multiple layers)
"""
return defaultdict(list, self.run_command('getOverlayedRecipes'))
def get_skipped_recipes(self):
"""
Find recipes which were skipped (i.e. SkipRecipe was raised
during parsing).
"""
return OrderedDict(self.run_command('getSkippedRecipes'))
def get_all_providers(self):
return defaultdict(list, self.run_command('allProviders'))
def find_providers(self):
return self.run_command('findProviders')
def find_best_provider(self, pn):
return self.run_command('findBestProvider', pn)
def get_runtime_providers(self, rdep):
return self.run_command('getRuntimeProviders', rdep)
def get_recipe_file(self, pn):
"""
Get the file name for the specified recipe/target. Raises
bb.providers.NoProvider if there is no match or the recipe was
skipped.
"""
best = self.find_best_provider(pn)
if not best or (len(best) > 3 and not best[3]):
skiplist = self.get_skipped_recipes()
taskdata = bb.taskdata.TaskData(None, skiplist=skiplist)
skipreasons = taskdata.get_reasons(pn)
if skipreasons:
raise bb.providers.NoProvider('%s is unavailable:\n %s' % (pn, ' \n'.join(skipreasons)))
else:
raise bb.providers.NoProvider('Unable to find any recipe file matching "%s"' % pn)
return best[3]
def get_file_appends(self, fn):
"""
Find the bbappends for a recipe file
"""
return self.run_command('getFileAppends', fn)
def all_recipes(self, mc='', sort=True):
"""
Enable iterating over all recipes in the current configuration.
Returns an iterator over TinfoilRecipeInfo objects created on demand.
Parameters:
mc: The multiconfig, default of '' uses the main configuration.
sort: True to sort recipes alphabetically (default), False otherwise
"""
recipecache = self.cooker.recipecaches[mc]
if sort:
recipes = sorted(recipecache.pkg_pn.items())
else:
recipes = recipecache.pkg_pn.items()
for pn, fns in recipes:
prov = self.find_best_provider(pn)
recipe = TinfoilRecipeInfo(recipecache,
self.config_data,
pn=pn,
fn=prov[3],
fns=fns)
yield recipe
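For example, printing every recipe with its version (assumes prepare() was called without config_only):

    for recipe in tinfoil.all_recipes():
        print('%s %s %s' % (recipe.pn, recipe.pv, recipe.fn))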
def all_recipe_files(self, mc='', variants=True, preferred_only=False):
"""
Enable iterating over all recipe files in the current configuration.
Returns an iterator over file paths.
Parameters:
mc: The multiconfig, default of '' uses the main configuration.
variants: True to include variants of recipes created through
BBCLASSEXTEND (default) or False to exclude them
preferred_only: True to include only the preferred recipe where
multiple exist providing the same PN, False to list
all recipes
"""
recipecache = self.cooker.recipecaches[mc]
if preferred_only:
files = []
for pn in recipecache.pkg_pn.keys():
prov = self.find_best_provider(pn)
files.append(prov[3])
else:
files = recipecache.pkg_fn.keys()
for fn in sorted(files):
if not variants and fn.startswith('virtual:'):
continue
yield fn
def get_recipe_info(self, pn, mc=''):
"""
Get information on a specific recipe in the current configuration by name (PN).
Returns a TinfoilRecipeInfo object created on demand.
Parameters:
mc: The multiconfig, default of '' uses the main configuration.
"""
recipecache = self.cooker.recipecaches[mc]
prov = self.find_best_provider(pn)
fn = prov[3]
if fn:
actual_pn = recipecache.pkg_fn[fn]
recipe = TinfoilRecipeInfo(recipecache,
self.config_data,
pn=actual_pn,
fn=fn,
fns=recipecache.pkg_pn[actual_pn])
return recipe
else:
return None
def parse_recipe(self, pn):
"""
Parse the specified recipe and return a datastore object
representing the environment for the recipe.
"""
fn = self.get_recipe_file(pn)
return self.parse_recipe_file(fn)
def parse_recipe_file(self, fn, appends=True, appendlist=None, config_data=None):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
Parameters:
fn: recipe file to parse - can be a file path or virtual
specification
appends: True to apply bbappends, False otherwise
appendlist: optional list of bbappend files to apply, if you
want to filter them
config_data: custom config datastore to use. NOTE: if you
specify config_data then you cannot use a virtual
specification for fn.
"""
if self.tracking:
# Enable history tracking just for the parse operation
self.run_command('enableDataTracking')
try:
if appends and appendlist == []:
appends = False
if config_data:
dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
self.logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.recipecache
def prepare(self, config_only = False):
if not self.cooker_data:
if config_only:
self.cooker.parseConfiguration()
self.cooker_data = self.cooker.recipecache
else:
dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
if dscon:
return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
else:
return None
finally:
if self.tracking:
self.run_command('disableDataTracking')
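Typical use of the two parse entry points (recipe and variable names are illustrative):

    d = tinfoil.parse_recipe('zlib')
    print(d.getVar('SRC_URI'))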
def build_file(self, buildfile, task, internal=True):
"""
Runs the specified task for just a single recipe (i.e. no dependencies).
This is equivalent to bitbake -b, except that with the default
internal=True no warning about dependencies will be produced, normal
info messages from the runqueue will be silenced, and BuildInit,
BuildStarted and BuildCompleted events will not be fired.
"""
return self.run_command('buildFile', buildfile, task, internal)
def build_targets(self, targets, task=None, handle_events=True, extra_events=None, event_callback=None):
"""
Builds the specified targets. This is equivalent to a normal invocation
of bitbake. Has built-in event handling which is enabled by default and
can be extended if needed.
Parameters:
targets:
One or more targets to build. Can be a list or a
space-separated string.
task:
The task to run; if None then the value of BB_DEFAULT_TASK
will be used. Default None.
handle_events:
True to handle events in a similar way to normal bitbake
invocation with knotty; False to return immediately (on the
assumption that the caller will handle the events instead).
Default True.
extra_events:
An optional list of events to add to the event mask (if
handle_events=True). If you add events here you also need
to specify a callback function in event_callback that will
handle the additional events. Default None.
event_callback:
An optional function taking a single parameter which
will be called first upon receiving any event (if
handle_events=True) so that the caller can override or
extend the event handling. Default None.
"""
if isinstance(targets, str):
targets = targets.split()
if not task:
task = self.config_data.getVar('BB_DEFAULT_TASK')
if handle_events:
# A reasonable set of default events matching up with those we handle below
eventmask = [
'bb.event.BuildStarted',
'bb.event.BuildCompleted',
'logging.LogRecord',
'bb.event.NoProvider',
'bb.command.CommandCompleted',
'bb.command.CommandFailed',
'bb.build.TaskStarted',
'bb.build.TaskFailed',
'bb.build.TaskSucceeded',
'bb.build.TaskFailedSilent',
'bb.build.TaskProgress',
'bb.runqueue.runQueueTaskStarted',
'bb.runqueue.sceneQueueTaskStarted',
'bb.event.ProcessStarted',
'bb.event.ProcessProgress',
'bb.event.ProcessFinished',
]
if extra_events:
eventmask.extend(extra_events)
ret = self.set_event_mask(eventmask)
includelogs = self.config_data.getVar('BBINCLUDELOGS')
loglines = self.config_data.getVar('BBINCLUDELOGS_LINES')
ret = self.run_command('buildTargets', targets, task)
if handle_events:
result = False
# Borrowed from knotty: somewhat hackily, we use the helper
# as the object to store "shutdown" on
helper = bb.ui.uihelper.BBUIHelper()
# We set up logging optionally in the constructor so now we need to
# grab the handlers to pass to TerminalFilter
console = None
errconsole = None
for handler in self.logger.handlers:
if isinstance(handler, logging.StreamHandler):
if handler.stream == sys.stdout:
console = handler
elif handler.stream == sys.stderr:
errconsole = handler
format_str = "%(levelname)s: %(message)s"
format = bb.msg.BBLogFormatter(format_str)
helper.shutdown = 0
parseprogress = None
termfilter = bb.ui.knotty.TerminalFilter(helper, helper, console, errconsole, format, quiet=self.quiet)
try:
while True:
try:
event = self.wait_event(0.25)
if event:
if event_callback and event_callback(event):
continue
if helper.eventHandler(event):
if isinstance(event, bb.build.TaskFailedSilent):
logger.warning("Logfile for failed setscene task is %s" % event.logfile)
elif isinstance(event, bb.build.TaskFailed):
bb.ui.knotty.print_event_log(event, includelogs, loglines, termfilter)
continue
if isinstance(event, bb.event.ProcessStarted):
if self.quiet > 1:
continue
parseprogress = bb.ui.knotty.new_progress(event.processname, event.total)
parseprogress.start(False)
continue
if isinstance(event, bb.event.ProcessProgress):
if self.quiet > 1:
continue
if parseprogress:
parseprogress.update(event.progress)
else:
bb.warn("Got ProcessProgress event for someting that never started?")
continue
if isinstance(event, bb.event.ProcessFinished):
if self.quiet > 1:
continue
if parseprogress:
parseprogress.finish()
parseprogress = None
continue
if isinstance(event, bb.command.CommandCompleted):
result = True
break
if isinstance(event, bb.command.CommandFailed):
self.logger.error(str(event))
result = False
break
if isinstance(event, logging.LogRecord):
if event.taskpid == 0 or event.levelno > logging.INFO:
self.logger.handle(event)
continue
if isinstance(event, bb.event.NoProvider):
self.logger.error(str(event))
result = False
break
elif helper.shutdown > 1:
break
termfilter.updateFooter()
except KeyboardInterrupt:
termfilter.clearFooter()
if helper.shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")
ret = self.run_command("stateForceShutdown")
if ret and ret[2]:
self.logger.error("Unable to cleanly stop: %s" % ret[2])
elif helper.shutdown == 0:
print("\nKeyboard Interrupt, closing down...\n")
interrupted = True
ret = self.run_command("stateShutdown")
if ret and ret[2]:
self.logger.error("Unable to cleanly shutdown: %s" % ret[2])
helper.shutdown = helper.shutdown + 1
termfilter.clearFooter()
finally:
termfilter.finish()
if helper.failed_tasks:
result = False
return result
else:
return ret
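An example invocation (target and task names are illustrative):

    ok = tinfoil.build_targets('quilt-native', task='build')
    if not ok:
        print('build failed')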
self.parseRecipes()
def shutdown(self):
"""
Shut down tinfoil. Disconnects from the server and gracefully
releases any associated resources. You must call this function if
prepare() has been called, or use a with... block when you create
the tinfoil object which will ensure that it gets called.
"""
if self.server_connection:
self.run_command('clientComplete')
_server_connections.remove(self.server_connection)
bb.event.ui_queue = []
self.server_connection.terminate()
self.server_connection = None
self.cooker.shutdown(force=True)
self.cooker.post_serve()
self.cooker.unlockBitbake()
self.logger.removeHandler(self._log_hdlr)
# Restore logging handlers to how it looked when we started
if self.oldhandlers:
for handler in self.logger.handlers:
if handler not in self.oldhandlers:
self.logger.handlers.remove(handler)
class TinfoilConfigParameters(ConfigParameters):
def _reconvert_type(self, obj, origtypename):
"""
Convert an object back to the right type, in the case
that marshalling has changed it (especially with xmlrpc)
"""
supported_types = {
'set': set,
'DataStoreConnectionHandle': bb.command.DataStoreConnectionHandle,
}
origtype = supported_types.get(origtypename, None)
if origtype is None:
raise Exception('Unsupported type "%s"' % origtypename)
if type(obj) == origtype:
newobj = obj
elif isinstance(obj, dict):
# New style class
newobj = origtype()
for k,v in obj.items():
setattr(newobj, k, v)
else:
# Assume we can coerce the type
newobj = origtype(obj)
if isinstance(newobj, bb.command.DataStoreConnectionHandle):
connector = TinfoilDataStoreConnector(self, newobj.dsindex)
newobj = bb.data.init()
newobj.setVar('_remote_data', connector)
return newobj
class TinfoilConfigParameters(BitBakeConfigParameters):
def __init__(self, config_only, **options):
def __init__(self, **options):
self.initial_options = options
# Apply some sane defaults
if not 'parse_only' in options:
self.initial_options['parse_only'] = not config_only
#if not 'status_only' in options:
# self.initial_options['status_only'] = config_only
if not 'ui' in options:
self.initial_options['ui'] = 'knotty'
if not 'argv' in options:
self.initial_options['argv'] = []
super(TinfoilConfigParameters, self).__init__()
def parseCommandLine(self, argv=None):
# We don't want any parameters parsed from the command line
opts = super(TinfoilConfigParameters, self).parseCommandLine([])
for key, val in self.initial_options.items():
setattr(opts[0], key, val)
return opts
def parseCommandLine(self, argv=sys.argv):
class DummyOptions:
def __init__(self, initial_options):
for key, val in initial_options.items():
setattr(self, key, val)
return DummyOptions(self.initial_options), None

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
#
# Gtk+ UI pieces for BitBake
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -0,0 +1,44 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsDialog(gtk.Dialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, title="", parent=None, flags=0, buttons=None):
super(CrumbsDialog, self).__init__(title, parent, flags, buttons)
self.set_property("has-separator", False) # note: deprecated in 2.22
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)


@@ -0,0 +1,70 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glib
import gtk
from bb.ui.crumbs.hobwidget import HobIconChecker
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsMessageDialog(gtk.MessageDialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, parent = None, label="", dialog_type = gtk.MESSAGE_QUESTION, msg=""):
super(CrumbsMessageDialog, self).__init__(None,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
dialog_type,
gtk.BUTTONS_NONE,
None)
self.set_skip_taskbar_hint(False)
self.set_markup(label)
if 0 <= len(msg) < 300:
self.format_secondary_markup(msg)
else:
vbox = self.get_message_area()
vbox.set_border_width(1)
vbox.set_property("spacing", 12)
self.textWindow = gtk.ScrolledWindow()
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.msgView = gtk.TextView()
self.msgView.set_editable(False)
self.msgView.set_wrap_mode(gtk.WRAP_WORD)
self.msgView.set_cursor_visible(False)
self.msgView.set_size_request(300, 300)
self.buf = gtk.TextBuffer()
self.buf.set_text(msg)
self.msgView.set_buffer(self.buf)
self.textWindow.add(self.msgView)
self.msgView.show()
vbox.add(self.textWindow)
self.textWindow.show()
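A corresponding usage sketch; note that the length check above decides whether msg lands in the secondary markup or in the scrolled text view (message text is illustrative):

    dlg = CrumbsMessageDialog(None, "<b>Build failed</b>", gtk.MESSAGE_ERROR,
                              "See the build log for details.")
    dlg.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_CLOSE)
    dlg.run()
    dlg.destroy()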

Some files were not shown because too many files have changed in this diff