Compare commits


428 Commits

Scott Rifenbark
58863ad092 bitbake: bitbake-user-manual: Fixed porno hack for hello world example
Someone hacked the http://hambedded site, or it was moved, and some
links to that site in the BB manual had been hijacked to point to
an entry portal for a pornography site.  Replaced the links with an
archived version that restores their integrity.

(Bitbake rev: daa0aa05a04d8d20473a05b5b5878610e40ef820)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-01-17 22:33:30 +00:00
Leonardo Sandoval
fb8bf6a75e init-install-efi.sh: Avoid /mnt/mtab creation if already present
The base-files recipe installs /mnt/mtab (a symlink to /proc/mounts),
so if an image includes the latter, there is no need to create it again
inside the install-efi.sh script; otherwise an error may occur, as
indicated on the bug's site.

[YOCTO #7971]

(From OE-Core rev: 1679c3d7bfa1cff4e126e2ed3dff50bdd7c2eeab)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-11 23:10:10 +01:00
Armin Kuster
c282df8993 glibc: CVE-2015-8776
It was found that out-of-range time values passed to the strftime function
may cause it to crash, leading to a denial of service or, potentially,
information disclosure.

(From OE-Core rev: b9bc001ee834e4f8f756a2eaf2671aac3324b0ee)

(From OE-Core rev: c50e30cb078ca0ad6f76241f0b0a5557cc17e3c0)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-21 15:48:47 +00:00
Armin Kuster
204ad23574 glibc: CVE-2015-9761
A stack overflow vulnerability was found in nan* functions that could cause
applications which process long strings with the nan function to crash or,
potentially, execute arbitrary code.

(From OE-Core rev: fd3da8178c8c06b549dbc19ecec40e98ab934d49)

(From OE-Core rev: 1916b4c34ee9d752c12b8311cb9fd41e09b82900)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-21 15:48:47 +00:00
Armin Kuster
14a42e2719 glibc: CVE-2015-8779
A stack overflow vulnerability in the catopen function was found, causing
applications which pass long strings to the catopen function to crash or,
potentially, execute arbitrary code.

(From OE-Core rev: af20e323932caba8883c91dac610e1ba2b3d4ab5)

(From OE-Core rev: 01e9f306e0af4ea2d9fe611c1592b0f19d83f487)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-21 15:48:47 +00:00
Armin Kuster
dae5ee4e5e glibc: CVE-2015-8777
The process_envvars function in elf/rtld.c in the GNU C Library (aka glibc or
libc6) before 2.23 allows local users to bypass a pointer-guarding protection
mechanism via a zero value of the LD_POINTER_GUARD environment variable.

(From OE-Core rev: 22570ba08d7c6157aec58764c73b1134405b0252)

(From OE-Core rev: bb6ce1334bfb3711428b4b82bca4c0d5339ee2f8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-21 15:48:47 +00:00
Koen Kooi
bebaaf1d21 glibc 2.20: Security fix CVE-2015-7547
CVE-2015-7547: getaddrinfo() stack-based buffer overflow

(From OE-Core rev: b30a7375f09158575d63367600190a5e3a00b9fc)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 10:38:50 +00:00
Sona Sarmadi
aefcb6b115 bind: CVE-2015-8000
Fixes a denial of service in BIND.

An error in the parsing of incoming responses allows some
records with an incorrect class to be accepted by BIND
instead of being rejected as malformed. This can trigger
a REQUIRE assertion failure when those records are subsequently
cached.

[YOCTO #8838]

References:
http://www.openwall.com/lists/oss-security/2015/12/15/14
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8000
https://bugzilla.redhat.com/attachment.cgi?id=1105581

(From OE-Core rev: c9c42b0ec2c7b9b3e613f68db06230ebc6e2711c)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:15 +00:00
Belal, Awais
79e4cc8954 grub2: Fix CVE-2015-8370
http://git.savannah.gnu.org/cgit/grub.git/commit/?id=451d80e52d851432e109771bb8febafca7a5f1f2

(From OE-Core rev: 76ef966b1f47663f570e87aeb21bc98147b0eca2)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:15 +00:00
Armin Kuster
faf6ada4f2 glibc: Fixes a heap buffer overflow in glibc wscanf.
References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1472
https://sourceware.org/ml/libc-alpha/2015-02/msg00119.html
http://openwall.com/lists/oss-security/2015/02/04/1

Reference to upstream fix:
https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commit;
h=5bd80bfe9ca0d955bfbbc002781bc7b01b6bcb06

(From OE-Core rev: 5aa90eef9b503ba0ffb138e146add6f430dea917)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Tudor Florea <tudor.florea@enea.com>

Hand applied.

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:15 +00:00
Sona Sarmadi
a779191033 libxml2: CVE-2015-8241
Upstream bug (contains reproducer):
https://bugzilla.gnome.org/show_bug.cgi?id=756263

Upstream patch:
https://git.gnome.org/browse/libxml2/commit/?id=
ab2b9a93ff19cedde7befbf2fcc48c6e352b6cbe

(From OE-Core rev: 84c6a67baaafee565ac4fad229bd8d07a21da09c)

Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Sona Sarmadi
1930286e3f openssl: CVE-2015-3194, CVE-2015-3195
Fixes following vulnerabilities:
Certificate verify crash with missing PSS parameter (CVE-2015-3194)
X509_ATTRIBUTE memory leak (CVE-2015-3195)

References:
https://openssl.org/news/secadv/20151203.txt
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3194
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3195

Upstream patches:
CVE-2015-3194:
https://git.openssl.org/?p=openssl.git;a=commit;h=
d8541d7e9e63bf5f343af24644046c8d96498c17

CVE-2015-3195:
https://git.openssl.org/?p=openssl.git;a=commit;h=
b29ffa392e839d05171206523e84909146f7a77c

(From OE-Core rev: 09c3a0f01572a6a65e9f87ce16817ee7de3296f1)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Sona Sarmadi
d4db68ae6b libxml2: CVE-2015-8035
Fixes a DoS when parsing a specially crafted XML document
if XZ support is enabled.

References:
https://bugzilla.gnome.org/show_bug.cgi?id=757466

Upstream correction:
https://git.gnome.org/browse/libxml2/commit/?id=
f0709e3ca8f8947f2d91ed34e92e38a4c23eae63

(From OE-Core rev: e40cae30575a227bb0274869f720dffd816d629a)

Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Tudor Florea
3beebd9447 unzip: CVE-2015-7696, CVE-2015-7697
CVE-2015-7696: Fixes a heap overflow triggered by unzipping a password-protected file
CVE-2015-7697: Fixes a denial of service with a file that never finishes unzipping

References:
http://www.openwall.com/lists/oss-security/2015/10/11/5
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7696
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7697

(From OE-Core rev: 9c841157f8ecd3221702c4675a4145f586617780)

Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Sona Sarmadi
aa10f103e1 libxml2: CVE-2015-7942
Fixes heap-based buffer overflow in xmlParseConditionalSections().

Upstream patch:
https://git.gnome.org/browse/libxml2/commit/
?id=9b8512337d14c8ddf662fcb98b0135f225a1c489

Upstream bug:
https://bugzilla.gnome.org/show_bug.cgi?id=756456

(From OE-Core rev: a2980f004519a4baeb4c88ad924e15195fe75e32)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Martin Jansa
d54de3ebc0 linux-dtb.inc: drop unused DTB_NAME variable from do_install
* this is causing do_install to depend on KERNEL_IMAGE_BASE_NAME, which
  in some cases contains something like a BUILD_NUMBER from CI; that
  caused do_install to be re-executed every single time, which is very
  sad to be caused by an unused variable.
* jethro and newer don't need this change, because it's also fixed in
  commit 86b3f29f93e3f87903668ea317c6bd97be4cdf62
  Author: Marek Vasut <marex@denx.de>
  Date:   Thu May 14 14:31:11 2015 +0200
  Subject: kernel: Build DTBs early

(From OE-Core rev: 7bbed4ecd5e919eb274aeb9d6cdaba2c85cccc71)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Tudor Florea
fc3d4ce07d glibc: use patch for CVE-2015-1781
The patch added to the repo wasn't actually applied due to an
erroneous way of specifying the sources.

(From OE-Core rev: 2cdc3dd4cc4426aa081b6cb99b67f1143cc64f81)

Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Martin Jansa
217d56ec31 texinfo: don't create dependency on INHERIT variable
* we don't want the do_package signature depending on the INHERIT variable
* e.g. just adding the own-mirrors causes texinfo to rebuild:
  # bitbake-diffsigs BUILD/sstate-diff/*/*/texinfo/*do_package.sig*
  basehash changed from 015df2fd8e396cc1e15622dbac843301 to 9f1d06c4f238c70a99ccb6d8da348b6a
  Variable INHERIT value changed from
  ' rm_work blacklist blacklist report-error ${PACKAGE_CLASSES} ${USER_CLASSES} ${INHERIT_DISTRO} ${INHERIT_BLACKLIST} sanity'
  to
  ' rm_work own-mirrors blacklist blacklist report-error ${PACKAGE_CLASSES} ${USER_CLASSES} ${INHERIT_DISTRO} ${INHERIT_BLACKLIST} sanity'

(From OE-Core rev: 2f61930f55390bd2dfeb52a1ccfbc1cbe560c3ad)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:14 +00:00
Mike Crowe
6fd01ed845 allarch: Force TARGET_*FLAGS variable values
TARGET_CPPFLAGS, TARGET_CFLAGS, TARGET_CXXFLAGS and TARGET_LDFLAGS may
differ between MACHINEs. Since they are exported they affect task hashes
even if unused which leads to multiple variants of allarch packages
existing in sstate and bouncing in the sysroot when switching between
MACHINEs.

allarch packages shouldn't be using these variables anyway, so let's
ensure they have a fixed value in order to avoid this problem.

(Compare with 05a70ac30b37cab0952f1b9df501993a9dec70da and
14f4d016fef9d660da1e7e91aec4a0e807de59ab.)

(From OE-Core rev: b5a9d4ab564c2a6645922eed0203acb88ec5dd33)

Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
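
For illustration, a minimal sketch of the approach described above, assuming
the fix pins the exported flags to empty values (the real class may choose
different fixed values):

  # force exported toolchain flags to fixed values so allarch task
  # hashes stop varying with MACHINE
  TARGET_CPPFLAGS = ""
  TARGET_CFLAGS = ""
  TARGET_CXXFLAGS = ""
  TARGET_LDFLAGS = ""
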
Richard Purdie
767142c1ba layer.conf: Add missing dependency for allarch package initramfs-framework
Similarly to the other previous changes, add a missing allarch package dependency
for initramfs-framework on udev.

(From OE-Core rev: 685cc8a2922d51f7b1a255f11c72233ae572e2b2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Richard Purdie
eca2b438bc layer.conf: Add several allarch dependency exclusions
These are dependencies that our allarch packages have in OE-Core that cause
those allarch packages to rebuild every time MACHINE changes.

With these changes, OE-Core allarch packages all have common sstate
signatures and no longer rebuild.

(From OE-Core rev: 63bff90fa4fb4a95e8c79f9f8e5dd90ae1dfc69d)

(From OE-Core rev: 0d07fd7496c1f86538341eac43753f031583e2c4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Chen Qi
2d569edae2 image.bbclass: don't let do_rootfs depend on BUILDNAME
BUILDNAME is set by the cooker as a string of the current time. Letting the
do_rootfs task depend on this variable gets us no benefit. Besides, it
causes trouble when executing `bitbake -S none core-image-minimal':
with the current code, this command gives an error complaining about the
different basehash of the do_rootfs task.

(From OE-Core rev: e1763aae5961a06a05ee8834ab20cf752bddf793)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
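
As a hedged sketch of the general mechanism (not necessarily the exact change
made here), BitBake can exclude a variable from a task's signature with the
vardepsexclude varflag:

  # keep the timestamp-derived BUILDNAME out of do_rootfs's signature
  do_rootfs[vardepsexclude] += "BUILDNAME"
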
Martin Jansa
2ad71d0ae8 fontcache: allow to pass extra parameters and environment to fc-cache
* this can be useful for passing extra parameters; pass
  -v by default to see what's going on in do_rootfs
* we need to use this for an extra parameter we implemented
  in fontconfig:
  --ignore-mtime: always use the cache file regardless of font directory mtime
  because the checksum of the font cache generated in do_rootfs
  doesn't match the /usr/share/fonts directory as seen on the
  target device, causing fontconfig to re-create the cache
  when fontconfig is used for the first time or, worse, to create a
  new cache in every user's home directory when the /usr
  filesystem is read-only and the cache cannot be updated.

  Running FC_DEBUG=16 fc-cache -v on such a device shows:
  FcCacheTimeValid dir "/usr/share/fonts" cache checksum 1441207803 dir checksum 1441206149
* my guess is that the checksum is different because of pseudo
  (which is unloaded when running qemuwrapper) or because of some
  influence of running the rootfs under qemu.

(From OE-Core rev: f2b86a69d88d382f16bbec070adc8199932b2c02)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
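
An illustrative conf fragment for the intent described above; the variable
name FONTCONFIG_CACHE_PARAMS is hypothetical and may not match the one the
commit actually introduces:

  # default to -v so log.do_rootfs shows what fc-cache is doing
  FONTCONFIG_CACHE_PARAMS ?= "-v"
  # opt in to the fontconfig extension mentioned in the message
  FONTCONFIG_CACHE_PARAMS_append = " --ignore-mtime"
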
Armin Kuster
73a04a266c openssh: CVE-2015-6563 CVE-2015-6564 CVE-2015-6565
Three security fixes.

CVE-2015-6563 (Low): openssh: Privilege separation weakness related to PAM support
CVE-2015-6564 (Medium): openssh: Use-after-free bug related to PAM support
CVE-2015-6565 (High): openssh: Incorrectly set TTYs to be world-writable

(From OE-Core rev: 259df232b513367a0a18b17e3e377260a770288f)

(From OE-Core rev: ddfe191355a042e6995f7b4b725b108c5bb4d36e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>

Conflicts:
	meta/recipes-connectivity/openssh/openssh_6.6p1.bb
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Sergiy Kibrik
b3269fc2e6 rsync: backport libattr checking patch
Add check_libattr.patch to version 3.1.0 recipe, which checks
and includes libattr to linker, otherwise rsync may fail to build
with linker error below (as -lattr option gets omitted):

[..]
lib/sysxattrs.o: undefined reference to symbol 'llistxattr@@ATTR_1.0'
[..]/lib/libattr.so.1: error adding symbols: DSO missing from command line

(From OE-Core rev: 576f63c50badd54b47cdda42a6466bb18984958d)

Signed-off-by: Sergiy Kibrik <sakib@meta.ua>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Sona Sarmadi
0facda51ce grep2.19: CVE-2015-1345
Fixes a heap-based buffer overflow flaw in grep.
Affected versions: grep 2.19 through 2.21.

Removed THANKS.in changes from upstream patch since this
file does not exist in version 2.19.
Replaced tab with spaces in SRC_URI as well.

Upstream fix:
http://git.sv.gnu.org/cgit/grep.git/commit/?id=
83a95bd8c8561875b948cadd417c653dbe7ef2e2

(From OE-Core rev: fb3e73fb2536b718dfce0e7b126f75464b9874aa)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Sona Sarmadi
8cf47f82b9 libtasn1: CVE-2015-3622
_asn1_extract_der_octet: prevent past of boundary access

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3622
http://git.savannah.gnu.org/gitweb/?p=libtasn1.git;a=patch;
h=f979435823a02f842c41d49cd41cc81f25b5d677

(From OE-Core rev: 61bee3f813127c91d75a2af5197bdc874483a1fd)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:03:13 +00:00
Richard Purdie
8ef55cc0da bitbake: cooker: Ensure bbappend files are processed in a deterministic order
self.appendlist is a dict and as such unordered. This can lead to cases
where appends with different names (e.g. x_%.bbappend vs. x_123.bbappend)
can be reordered in application which in turn reorders the variables
that those bbappend files might touch. Reorderd variables changes the sstate
cache signatures causing real world issues.

To avoid this, use a list for the append files instead.

This patch is conservative and just adds a new data structure alongside
the existing one and uses it to resolve the core issue. Later patches
(post release) can handle some of the wider but less problematic ones
(e.g. issues in bitbake-layers flatten).

[YOCTO #7511]

(Bitbake rev: 370a19bf956a2fba5bf4db3d72806e17d7f9e000)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-17 17:52:13 +00:00
Scott Rifenbark
6d34267e0a documentation: Changed some 'intro' tags to resolve multiple mega-manual warnings.
(From yocto-docs rev: 411beb911b826d19fe3a6755c7a432ca1f17352f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-11-18 16:44:05 +00:00
Scott Rifenbark
9d6d902326 poky.ent, mega-manual.sed: Updated to support 1.7.3 release
(From yocto-docs rev: a8294f1fb2e1d5d990a678492dd87d9d31dcf0ee)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-11-18 16:44:05 +00:00
Scott Rifenbark
0d8ed50877 documentation: Updated manual revision tables for 1.7.3 release
(From yocto-docs rev: 4de7a8b829cd45356d64885202850b3499b0da10)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-11-18 16:44:05 +00:00
Richard Purdie
b38454c2e3 build-appliance-image: Update to dizzy head revision
(From OE-Core rev: 7bb182bdd130266100fc541fd09b82d09c51cd80)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-29 14:56:15 +01:00
Richard Purdie
19f07a31a6 poky.conf: Bump version for 1.7.3 dizzy release
(From meta-yocto rev: 661f1023c499c490255ca5e97b76d54e51a8f59e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-29 14:55:27 +01:00
Ross Burton
03666c8a74 sstate: run recipe-provided hooks outside of ${B}
To avoid races between the sstate tasks/hooks using ${B} as the cwd, and other
tasks such as cmake_do_configure which deletes and re-creates ${B}, ensure that
all sstate hooks are run in the right directory, and run the prefunc/postfunc in WORKDIR.

(From OE-Core rev: dc8546241a66c6eb076dc67fd165b5216b822ced)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-29 14:42:29 +01:00
Armin Kuster
85f6cf736b bind: CVE-2015-1349 CVE-2015-4620 CVE-2015-5722
Three security fixes.

(From OE-Core rev: d3af844b05e566c2188fc3145e66a9826fed0ec8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:53:16 +01:00
Sona Sarmadi
a01280b7ab icu: CVE-2014-8146-CVE-2014-8147
CVE-2014-8146 icu: heap overflow via incorrect isolateCount
CVE-2014-8147 icu: integer truncation in the resolveImplicitLevels function

References:
[1] https://github.com/pedrib/PoC/raw/master/generic/i-c-u-fail.7z
[2] https://www.kb.cert.org/vuls/id/602540
[3] http://bugs.icu-project.org/trac/changeset/37080
[4] http://bugs.icu-project.org/trac/changeset/37162

(From OE-Core rev: 1bc6391f65dec41ff0360b625b7a85a161e43955)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:53:16 +01:00
Saul Wold
800a3dc9b0 oprofileui: Use inherit gettext
oprofileui uses gettext during the configuration task, so it should inherit
gettext. This issue appears when an older version of gettext is used due to
pinning to the older non-GPLv3 version.

[YOCTO #7795]

(From OE-Core rev: 9a747554ba985970009a065f3403b94565e698e3)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:53:15 +01:00
Sona Sarmadi
bdfee8758e gnutls: CVE-2015-3308
(From OE-Core rev: 75b25e7d463ed1af0fd9b3dd56e407e6e72b0f6a)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:53:15 +01:00
Martin Jansa
915498e230 rootfs.py: show intercept script output in log.do_rootfs
* without this the output wasn't shown anywhere, even though the bb.warn
  says:
  "See log for details!"

(From OE-Core rev: a3c322b42c7a14584a80e04519c34689ec813210)

(From OE-Core rev: b708151b798013119cbc651cd11a534c0cb816af)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:51:34 +01:00
Martin Jansa
8897773fe4 postinst_intercept: allow to pass variables with spaces
* trying to pass foo="a b" through postinst_intercept ends
  up with the actual script header containing:
  b
  foo=a
  which fails because the "b" command doesn't exist.

(From OE-Core rev: c66d7d85b7225be8c838449324d506565dd0081d)

(From OE-Core rev: 05af103b9b9141319644cde452afbe73e4c2d226)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:51:34 +01:00
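
A generic shell sketch of the underlying word-splitting pitfall (not the
actual intercept code):

  set -- foo="a b"
  for v in $@;   do echo "$v"; done   # prints "foo=a" and "b" separately
  for v in "$@"; do echo "$v"; done   # prints "foo=a b" intact
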
Martin Jansa
2e6494e55a rootfs.py: Allow to override postinst-intercepts location
* useful when we need to overlay/extend intercept scripts from oe-core

(From OE-Core rev: 7d08d2d5c0ae686e3bb8732ea82f30fd189b1cd8)

(From OE-Core rev: 2374910466d82c817d74e9098a1636b21ff779af)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:51:33 +01:00
Beth Flanagan
55fbde1fde base.bbclass: Note when including pn with INCOMPATIBLE_LICENSES
We need to be able to tell people when we WHITELIST a recipe
that contains an incompatible license.

Example: If we set WHITELIST_GPL-3.0 ?= "foo", foo will end
up in an image even if GPL-3.0 is incompatible. This is the
correct behaviour, but there is nothing telling people that it
is even happening.

(From OE-Core rev: c9da529943b2f563b7b0aeb43576c13dd3b6f932)

(From OE-Core rev: c468724d2932708dffc766e182a69665de6226f6)

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:51:33 +01:00
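
For example, an illustrative local.conf fragment using the variables named
in the message:

  INCOMPATIBLE_LICENSE = "GPL-3.0"
  # despite the exclusion above, foo is still allowed onto images:
  WHITELIST_GPL-3.0 ?= "foo"
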
Robert Yang
3a2725e5d9 autotools.bbclass: mkdir ${B} -> mkdir -p ${B}
${B} is the default cwd of tasks, so there might be race issues such as:
| mkdir: cannot create directory `${B}': File exists
[snip]
NOTE: recipe perf-1.0-r9: task do_configure: Failed

(From OE-Core rev: 3390dde6addaafad84c635eb37d2eae1ac22fcb7)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:11:20 +01:00
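
The shell-level difference behind this fix and the perf one below, in a
minimal illustration:

  mkdir ${B}      # fails with "File exists" if another task created ${B} first
  mkdir -p ${B}   # succeeds whether or not ${B} already exists
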
Robert Yang
adcc476412 perf: mkdir ${B} -> mkdir -p ${B}
${B} is the default cwd of tasks, so there might be race issues such as:
| mkdir: cannot create directory `/path/to/work/qemux86-poky-linux/perf/1.0-r9/perf-1.0/': File exists
[snip]
NOTE: recipe perf-1.0-r9: task do_configure: Failed

(From OE-Core rev: 197d9fb922cc234294e8ca090bddfcd023fc82ce)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-19 11:11:20 +01:00
Richard Purdie
ab4cc02bf8 bitbake: prserv/serv: Improve exit handling
Currently, I'm not sure how the prserver managed to shut down cleanly. These
issues may explain some of the hangs people have reported.

This change:

* Ensures the connection acceptance thread monitors self.quit
* We wait for the thread to exit before exiting
* We sync the database when the thread exits
* We do what the comment mentions, timeout after 30s and sync the database
  if needed. Previously, there was no timeout (the 0.5 applies to sockets,
  not the Queue object)

(Bitbake rev: bd9d827ae6ef02ec9a0577fb2fd19b830ccb4416)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0926492295d485813d8a4f6b77c7b152e4c5b4c4)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-02 23:41:00 +01:00
Richard Purdie
01c1167336 bitbake: bitbake: cooker: properly fix bitbake.lock handling
If the PR server or indeed any other child process takes some time to
exit (which it sometimes does when saving its database), it can end up
holding bitbake.lock after the UI exits, which led to errors if you ran
bitbake commands successively - we saw this when running the PR server
oe-selftest tests in OE-Core. The recent attempt to fix this wasn't
quite right and ended up breaking memory resident bitbake. This time we
close the lock file when cooker shuts down (inside the UI process)
instead of unlocking it, and this is done in the cooker code rather than
the actual UI code so it doesn't matter which UI is in use. Additionally
we report that we're waiting for the lock to be released, using lsof or
fuser if available to list the processes with the lock open.

The 'magic' in the locking is due to all spawned subprocesses of bitbake
holding an open file descriptor to the bitbake.lock. It is automatically
unlocked when all those fds close the file (as all the processes terminate).
We close the UI copy of the lock explicitly, then close the server process
copy; any remaining open copy is therefore some process still exiting.

(The reproducer for the problem is to set PRSERV_HOST = "localhost:0"
and add a call to time.sleep(20) after self.server_close() in
lib/prserv/serv.py, then run "bitbake -p; bitbake -p" ).

Cleanup work done by Paul Eggleton <paul.eggleton@linux.intel.com>.

This reverts bitbake commit 69ecd15aece54753154950c55d7af42f85ad8606 and
e97a9f1528d77503b5c93e48e3de9933fbb9f3cd.

(Bitbake rev: a29780bd43f74b7326fe788dbd65177b86806fcf)

(Bitbake rev: 830b8f31459ca484bdaf2caa8ff4b7cbf21c77ac)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Conflicts:
	bitbake/lib/bb/cooker.py
	bitbake/lib/bb/main.py
	bitbake/lib/bb/tinfoil.py
	bitbake/lib/bb/ui/knotty.py
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:33 +01:00
Richard Purdie
c1803b774a bitbake: runqueue: Add message to explain the problem if diffsigs multiple tasks don't exist
(Bitbake rev: 3bfc0105ae993a3304face1fc0af75e012673567)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:32 +01:00
Yi Zhao
d526b3f9ac oeqa/selftest: fix test_incremental_image_generation for changes in log output
The test_incremental_image_generation case failed because the log output
changed:

FAIL: test_incremental_image_generation (oeqa.selftest.buildoptions.ImageOptionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
  "/buildarea3/yzhao1/poky-build/meta/lib/oeqa/utils/decorators.py", line 90, in wrapped_f
    return func(*args)
  File
  "/buildarea3/yzhao1/poky-build/meta/lib/oeqa/selftest/buildoptions.py", line 25, in test_incremental_image_generation
    self.assertEqual(0, res.status, msg="No match for openssh-sshd in log.do_rootfs")
AssertionError: 0 != 1 : No match for openssh-sshd in log.do_rootfs
----------------------------------------------------------------------

Use re.search instead of grep.

(From OE-Core rev: 1872a9430cec0c61f1ec349df198160addd430de)

(From OE-Core rev: afecc84cdd491789e62fb191a4f03de61e408629)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:32 +01:00
Alejandro Hernandez
e74c4a5ff4 qemurunner: Improves checking for server and target IPs in qemu's parameters
Fixes the OS hanging indefinitely waiting for the qemu process to release bitbake.lock

(From OE-Core rev: d168bf34c553dbe5de7511e158cd83869d7a88bc)

(From OE-Core rev: 99ac0971aecb1b6bc113da28b79d169095e6b671)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:32 +01:00
Paul Eggleton
1a99652a88 oeqa/utils/qemurunner: fix logging
OE-Core commit 519e381278d40bdac79add340e4c0460a9f97e17 unfortunately
broke logging in two different ways:

1) it prevented logging to the task log from working within bitbake
   -c testimage. This is due to the logger object being set up too early
   which interferes with BitBake's own logging. If we prefix the name
   with "BitBake." everything works (and we don't need to set the
   logging level).

2) Additionally because it called the log functions on the logging
   module and not the logger object it set up, this caused the
   oe-selftest logging to start printing everything from that point
   forward.

Fix these two issues and return us to the desired behaviour for
do_testimage.

(From OE-Core rev: 429b1971be06d5146bb1c14f4697966cddab3b33)

(From OE-Core rev: 144c6a2d711f7cf4dafc22999ed8cf4cdb329dfc)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:31 +01:00
Ross Burton
20db29fb4d oeqa/QemuRunner: don't use bb for logging
Instead of using bb.note() etc for logging use logging.Logger directly, allowing
the use of QemuRunner outside of bitbake.

Also clean up the logging/errors by moving create_socket() out of
__init__()/restart() and into start().

(From OE-Core rev: 519e381278d40bdac79add340e4c0460a9f97e17)

(From OE-Core rev: c3c87fa26fec8c6e620ad2f1ce95b989f8c108ed)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:31 +01:00
Sona Sarmadi
f7b041121e qemu-slirp: CVE-2014-3640
Fixes NULL pointer deref in sosendto().

Reference:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3640

Upstream patch:
http://git.qemu.org/?p=qemu.git;a=commit;
h=9a72433843d912a45046959b1953861211d1838d

(From OE-Core rev: f63a4f706269b4cd82c56d92f37c881de824d8bc)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:30 +01:00
Martin Jansa
7a263b2e60 license.bbclass: fix unexpected operator for LICENSE values with space
* add quotes around pkged_lic so that it works correctly with spaces
* fixes following error:
  run.license_create_manifest.50601: 193: [: GPLv2: unexpected operator

(From OE-Core rev: 2bb8b2abb689d91b7b7e28e6bd528747bde94dd2)

(From OE-Core rev: 4c31f726cf1ea2e01b1fbf1c23e96a110fbb9623)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:30 +01:00
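
A generic POSIX-shell sketch of the quoting bug (hypothetical value,
mirroring pkged_lic):

  lic="GPLv2 MIT"
  [ $lic != "CLOSED" ]    # expands to [ GPLv2 MIT != CLOSED ]: "unexpected operator"
  [ "$lic" != "CLOSED" ]  # one word, so the comparison works
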
Aníbal Limón
117d9b2f45 license_class: fix license.manifest shows LICENSE field differently to recipe
Drop the removal of [|&()*] operators in pkged_lic, because this removal is
only needed when validating whether the license is collected.

[YOCTO #6757]

(From OE-Core rev: 57e5f74382d51f2a8df00e18b6008e3d2b44ad1a)

(From OE-Core rev: a5fe29ff72dc2ce1667caa2ab1fdfbf2c1a4413b)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:30 +01:00
Martin Jansa
a4162fa9fa connman-conf: fix SRC_URI_append
* add a leading space so that it works even with some .bbappend adding
  additional files to SRC_URI without a trailing space

(From OE-Core rev: 0f282f1d4946ac6e81959c66172c115405632a26)

(From OE-Core rev: 55b183aa476754b050779d36dfbb03eb936443ad)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:30 +01:00
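
Illustration of the convention (generic file name); BitBake's _append does
plain string concatenation, so the separating space must be supplied
explicitly:

  SRC_URI_append = "file://extra.conf"    # wrong: glues onto the previous entry
  SRC_URI_append = " file://extra.conf"   # correct: leading space separates entries
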
Sona Sarmadi
5a3899981c qemu-vnc: CVE-2014-7815
Fixes an uninitialized data structure use flaw in qemu-vnc
which allows remote attackers to cause a denial of service
(crash).

Upstream patch:
http://git.qemu.org/?p=qemu.git;a=commit;
h=b2f1d90530301d7915dddc8a750063757675b21a

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7815
http://www.securityfocus.com/bid/70998

(From OE-Core rev: 31e3d1bab6612d8116086f9ada048a0c094fb2c8)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:29 +01:00
Sona Sarmadi
db031c40bb qemu: CVE-2014-7840
Fixes insufficient parameter validation during ram load

Reference
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7840

Upstream commit:
http://git.qemu.org/?p=qemu.git;a=commit;
h=0be839a2701369f669532ea5884c15bead1c6e08

(From OE-Core rev: 0bd4b0c7ede8a52559e4bf05085a3f0d46a0a280)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:29 +01:00
Sona Sarmadi
b64eae5767 bind9.9.5: CVE-2015-5477
Fixed a flaw in the way BIND handled requests for TKEY
DNS resource records.

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-5477
https://kb.isc.org/article/AA-01272

(From OE-Core rev: 18a01db3f2430095a4e6966aed5afd738dbc112e)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:29 +01:00
Richard Purdie
0e6473ad75 sstate: Use SSTATE_DIR for FILESPATH
FILESPATH was only being overridden in one fetch location, it should be
equally handled in both.

Also use SSTATE_DIR as FILESPATH so that mirror urls which do remapping
can search the local SSTATE_DIR for other paths.

Also ensure that MIRRORS is removed in both locations, previously
it was only unset in one but both codepaths should be consistent.

(From OE-Core rev: d66a45c52200f73e67ebb3e6e447907bb3334319)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:37:28 +01:00
Ross Burton
27fc73496c gnome: move introspection options to gnomebase
The gnome class is really a convenience class to include other classes, so move
the introspection arguments into gnomebase.bbclass.

(From OE-Core rev: d0bf0e5fd9c2cb18437ccca14b2f41d410aa832a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:19:56 +01:00
Martin Jansa
012e1a4431 tzdata, tzcode-native: drop older versions 2014h, 2015b
* unlike in master, the older versions weren't dropped when upgrading to 2015d

(From OE-Core rev: 1341554e582407e85697f05e3fcc82fcf29c9d56)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-09-01 21:19:56 +01:00
Richard Purdie
137f52ac3a grub-efi: Add backslash lost from previous commit
(From OE-Core rev: 4621675632518caae3a8c2098ee36896b9372551)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-08-20 21:42:31 +01:00
Saul Wold
ebe3096910 grub-efi: Use the backport patch from grub
This fixes the build error seen on newer distros that use gcc5, such as Fedora22.

(From OE-Core rev: ac135bd462dc4e674260fdb97c9e2e79c2e96460)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-08-20 17:49:21 +01:00
Aníbal Limón
f1c45d15c2 license_class: Fix choose_lic_set into incompatible license
Use canonical_license when evaluating the license expression,
since INCOMPATIBLE_LICENSE is already canonicalized.

[YOCTO #8080]

(From OE-Core rev: 8687b8bb8233e7f867539d69463671aa9c0806e9)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-08-20 11:26:53 +01:00
Richard Purdie
fd35017edf dpkg: Fix tarfix.patch
Accidentally forgot to merge the backport changes into the commit. Fix
so the patch applies correctly.

(From OE-Core rev: 5f50f90ed824ea6a8d1d1b41a5345f51a15c443f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-27 14:21:45 +01:00
Richard Purdie
e07aa344ee dpkg: Fix for Fedora22 and new versions of tar
They managed to 'break' tar. Again. Sorry, they fixed a regression
which broke dpkg-deb.

The addition of:
http://git.savannah.gnu.org/cgit/tar.git/commit/?id=163e96a0e619a900eab6de827c7c5749ecc9d3f2
("Bugfix: entries read from the -T file did not get proper matching_flag.")
means that the no-recursion option gets lost. This leads to many files getting included
multiple times, along with files which shouldn't be there.

The commit message is horrendous. The patch actually makes the option positional
(as documented since 2003) and therefore doesn't affect the input from the -T option.

Moving the --no-recursion option to earlier in the command avoids the bug.

The bug was not present in tar 1.28; however, it has been backported in at least
Fedora 22 and is heading into Fedora 21.

Redhat reports of issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1230762 [tar]
https://bugzilla.redhat.com/show_bug.cgi?id=1241508 [dpkg]

Discussion of bug in upstream tar:
http://www.mail-archive.com/bug-tar@gnu.org/msg04799.html

[YOCTO #7988]

(From OE-Core rev: 6be698b7270f73f40d38713ecf13f12aec0ced61)

(From OE-Core rev: 386898afde40971653af646d55e64aef65807e3b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:
	meta/recipes-devtools/dpkg/dpkg_1.17.25.bb
2015-07-27 12:25:45 +01:00
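
A hedged sketch of the positional behaviour described above (illustrative
file names):

  # with the patched tar, --no-recursion only applies to -T entries
  # when it appears before -T on the command line:
  tar -cf data.tar --no-recursion -T filelist.txt   # entries are not recursed
  tar -cf data.tar -T filelist.txt --no-recursion   # too late: entries already marked recursive
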
Richard Purdie
112839bebe oeqa/bbtests: Fix to ensure DL_DIR is set
write_config overwrites the config rather than appending to it, so
ensure we write both variables in one go.

(From OE-Core rev: c94ba6160d5965d4d2071154b43112eb87f4c898)

(From OE-Core rev: c58814c910d813a761b5c0e3ba63d6fddef86cc9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:44 +01:00
Richard Purdie
f48d1a75e1 oeqa/bbtests: Fix race over DL_DIR and SSTATE_DIR
Running "-c cleanall" on shared DL_DIR and SSTATE_DIR is antisocial.
It leads to hard to debug races where we wonder why files disappear
and reappear from those directories.

Fix this by using a specific set of directories for these tests. This
avoids a long standing bug on the autobuilder where aspell and man
sources would disappear.

[YOCTO #6276]

(From OE-Core rev: 6b089c4a79dc3aae00c8a6e7ab0f6ba4b4b5f138)

(From OE-Core rev: f1447c256e027553442cf507e217323f7868000c)

(From OE-Core rev: e4434982e0d2c086ee946d3742c257daf31e8bfd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:44 +01:00
Richard Purdie
4d41954e94 subversion: Fix subversion-native on Fedora22
Similarly to:
http://git.yoctoproject.org/cgit.cgi/poky/commit/?id=9b19d6548a345009a6de79a6820c07a72054d961

we also need to fix the subversion-native case with gcc5 by applying
the same fix to BUILD_CPPFLAGS.

(From OE-Core rev: a5e7a1e597e7bbe3bbc547f43a89d00a8a9a9924)

(From OE-Core rev: 7d445547df528aa9e5bfb85568a7270e27f633ef)

(From OE-Core rev: 7e57945be22c1d141c6a9be6f73f585cd07938a6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:43 +01:00
Khem Raj
53b0be3761 subversion: Add -P to CPPFLAGS
see https://gcc.gnu.org/gcc-5/porting_to.html

we need to stop the preprocessor from generating the #line directives
or we run into issues like

| checking for apr_int64_t Python/C API format string...
| configure: error: failed to recognize APR_INT64_T_FMT on this platform
| Configure failed. The contents of all config.log files follows to aid
debugging
| ERROR: oe_runconf failed

Rightly, subversion should be fixed, but let's leave that to the
subversion folks.

Change-Id: I02a89798ff949f79967ab0a73adcddaa4218662d
(From OE-Core rev: 7793b1c425077ed6ed11a9bc2a8b1b96612b1c96)

(From OE-Core rev: 4954cd6abad556d75beec860e82750bb1090a109)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:43 +01:00
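
A sketch of the recipe-level workaround suggested by the commit title (the
exact form in the real fix may differ; the native case uses BUILD_CPPFLAGS,
as the subversion-native entry above notes):

  # stop gcc 5's preprocessor emitting #line markers that confuse
  # subversion's configure checks
  CPPFLAGS_append = " -P"
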
Richard Purdie
ca052426a6 cross-localedef-native: Use older C standards for older code
This older code needs specific compiler options to allow it to work
with gcc 5. These options are used in the 2.21 recipe in master/fido
so this simply backports them.

(From OE-Core rev: 447dba2a6a077c83083556ab79ab265d4b8a048f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:43 +01:00
Khem Raj
540b92736c grub: Backport const qualifier fix for gcc-5
gcc-5 is stricter and complains about const to non-const
conversions; we backport the patch from upstream into 2.00.

Change-Id: I17db365fdd253daaa1ab726e2a70ecad0ac7b2ae
(From OE-Core rev: ea3d48471db19a2432e4afd86df8caad51ee5166)

(From OE-Core rev: f396bcfdc4f05d0a047903262edc5b52f3c85b6e)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:
	meta/recipes-bsp/grub/grub2.inc
2015-07-26 09:14:43 +01:00
George McCollister
a93005e6d0 binutils: fix native builds when host has gcc5
Cherry pick upstream commit to fix -Werror=logical-not-parentheses error
when building with native gcc5.

(From OE-Core rev: b3bd0dba3139a3e79bfcebe137248c7bdcadf04d)

(From OE-Core rev: c8bc2d7913e11278990d1fe82066e26f7fc1c11b)

Signed-off-by: George McCollister <george.mccollister@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:43 +01:00
Martin Stolpe
cfc5952b11 ncurses: fix native builds when host has gcc5
Starting from version 5.0, GCC's preprocessor adds newlines which are not
handled properly by the ncurses build system.

See also: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7870

(From OE-Core rev: 3a5435b371c84ec28b6936b8c8fa6541a592d061)

(From OE-Core rev: 8492e143af25bf64d07fc117e7f1607aadf89f09)

Signed-off-by: Martin Stolpe <martin.stolpe@gmail.com>
Signed-off-by: Joshua Lock <joshua.lock@collabora.co.uk>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-26 09:14:43 +01:00
Yue Tao
1b492dfcdd libxml2: Security Advisory - libxml2 - CVE-2015-1819
For CVE-2015-1819: enforce the reader to run in constant memory.

(From OE-Core rev: 9e67d8ae592a37d7c92d6566466b09c83e9ec6a7)

(From OE-Core rev: de6e4114d5285ea0d2a53d19c93ce96430cc9e30)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-core/libxml/libxml2.inc
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Leonardo Sandoval
bf3ee430a4 rpm: Fix CVE-2013-6435
Backport to fix CVE-2013-6435. The description is on [1] and the original
patch was taken from [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2013-6435
[2] https://bugzilla.redhat.com/attachment.cgi?id=956207

[YOCTO #7181]

(From OE-Core rev: 6bf846ed5ccd1a4d01b36630708b2b9aa9e69ed5)

(From OE-Core rev: 74d4895c4d30a45af5856228a00810bd14e5e071)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Leonardo Sandoval
abd315bc05 rpm: Fix CVE-2014-8118
Backport patch to fix CVE-2014-8118. Description is on [1] and
original patch taken from [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1168715
[2] https://bugzilla.redhat.com/attachment.cgi?id=962159

[YOCTO #7181]

(From OE-Core rev: 0a1f924157cb75d0f67cf534762c89dc8656d352)

(From OE-Core rev: f61750cfc3dd14a72b1ade4274b1a577136111fe)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Roy Li
1e6d987374 unzip: drop 12-cve-2014-9636-test-compr-eb.patch
12-cve-2014-9636-test-compr-eb.patch is the same as unzip-6.0_overflow3.diff;
both fix CVE-2014-9636.

(From OE-Core rev: 9cf42db4e545cd260faf45931d3b3c63ab3b3aab)

(From OE-Core rev: 7567dbc552819906a876b729e2a599ec412139a3)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Ng Wei Tee
38a334ad84 linux-firmware: Package Marvell pci8897 and usb8897 firmware
(From OE-Core rev: 86106da1068ec802ec9e1dd7bcdd9baf78182cb7)

Signed-off-by: Ng Shui Lei <shui.lei.ng@intel.com>
Signed-off-by: Ng Wei Tee <wei.tee.ng@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Jussi Kukkonen
00fce45b55 dbus: CVE-2015-0245: prevent forged ActivationFailure
Fix CVE-2015-0245 by preventing non-root and non-systemd processes
from fooling the dbus daemon into thinking systemd service activation
failed.

(From OE-Core rev: a8aa06b2405dec31a306fdf47bd1fdf740fde7bd)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Roy Li
fcd25c6d2e unzip: fix four CVE defects
Port four patches from unzip_6.0-8+deb7u2.debian.tar.gz to fix:
     cve-2014-8139
     cve-2014-8140
     cve-2014-8141
     cve-2014-9636

(From OE-Core rev: 429ab46f975c05f65120beddf50099c7cb0b2f86)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:34 +01:00
Roy Li
9f363a9c8a unzip: Security Advisory -CVE-2014-9636 and CVE-2015-1315
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9636

unzip 6.0 allows remote attackers to cause a denial of service
(out-of-bounds read or write and crash) via an extra field with
an uncompressed size smaller than the compressed field size in a
zip archive that advertises STORED method compression.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-1315

Buffer overflow in the charset_to_intern function in unix/unix.c in
Info-Zip UnZip 6.10b allows remote attackers to execute arbitrary code
via a crafted string, as demonstrated by converting a string from CP866
to UTF-8.

(From OE-Core rev: f86a178fd7036541a45bf31a46bddf634c133802)

(From OE-Core rev: 7c667c6aa0302649c125b0325a2e6f641810cb09)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Martin Jansa
19bce8f5c6 test-dependencies.sh: strip only .bb suffix
* we were stripping too much when stripping the recipe name from a line like this:
  ERROR: Task 12016 (/some/patch/something.dot.bar.bb, do_fetch) failed with exit code '1'
  where the recipe name contains dots and doesn't end with _<version>.bb

(From OE-Core rev: f4953004ec26c97fb696854f8e31d36b8bbeb8bf)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
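
A minimal shell sketch of suffix-anchored stripping:

  recipe="/some/path/something.dot.bar.bb"
  echo "${recipe%.bb}"                # strips only the trailing .bb
  echo "$recipe" | sed 's/\.bb$//'    # the same, anchored with sed
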
Andre McCurdy
1d909fb8da mesa: update --with-llvm-shared-libs configure option
As per the Mesa 10.2 release notes, "--with-llvm-shared-libs"
has been renamed to "--enable-llvm-shared-libs".

  http://www.mesa3d.org/relnotes/10.2.html

(From OE-Core rev: b534c13bb13c1ab2739daaf32b59d917e93106fd)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Martin Jansa
d19d976bf5 e2fsprogs: install populate-extfs.sh
* install populate-extfs.sh from contrib; be aware that in order
  to use it you need to set the DEBUGFS shell variable, otherwise it will
  try to use debugfs from a relative path which is almost always
  incorrect:
    CONTRIB_DIR=$(dirname $(readlink -f $0))
    DEBUGFS="$CONTRIB_DIR/../debugfs/debugfs"

(From OE-Core rev: 1a3a7a1ba8c271acd13cb1d740ef83ee02829e33)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
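
An illustrative invocation (paths are examples; the script's usage is assumed
to be <source dir> <device/image>):

  # point the script at an installed debugfs instead of the relative
  # in-tree default shown above
  DEBUGFS=/sbin/debugfs ./populate-extfs.sh ./rootfs ext4.img
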
Kai Kang
ea2e7dbcd7 gpgme: fix CVE-2014-3564
Backport patch to fix CVE-2014-3564.

http://git.gnupg.org/cgi-bin/gitweb.cgi?p=gpgme.git;a=commit;h=2cbd76f

(From OE-Core rev: 421e21b08a6a32db88aaf46033ca503a99e49b74)

(From OE-Core rev: 7643fe96bbce57995580162b5339674cc4a9c81f)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-support/gpgme/gpgme_1.4.3.bb
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Haris Okanovic
215c4d948d glibc: CVE-2015-1781: resolv/nss_dns/dns-host.c buffer overflow
Backport Arjun Shankar's patch for CVE-2015-1781:

A buffer overflow flaw was found in the way glibc's gethostbyname_r() and
other related functions computed the size of a buffer when passed a
misaligned buffer as input. An attacker able to make an application call
any of these functions with a misaligned buffer could use this flaw to
crash the application or, potentially, execute arbitrary code with the
permissions of the user running the application.

https://sourceware.org/bugzilla/show_bug.cgi?id=18287

(From OE-Core rev: c0f0b6e6ef1edc0a9f9e1ceffb1cdbbef2e409c6)

(From OE-Core rev: 96ff830b79c64d8f35c311b66906b492cbeeeb55)

Signed-off-by: Haris Okanovic <haris.okanovic@ni.com>
Reviewed-by: Ben Shelton <ben.shelton@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Kai Kang
9ae261263a qemu: fix CVE-2015-3456
Backport a patch to fix qemu CVE issue CVE-2015-3456.

Refs:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-3456
http://git.qemu.org/?p=qemu.git;a=commit;h=e907746266721f305d67bc0718795fedee2e824c

(From OE-Core rev: 1d9e6ef173bea8181fabc6abf0dbb53990b15fd8)

(From OE-Core rev: e4c1374330679f84436796a3f6c50b486465a7ed)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-devtools/qemu/qemu_2.1.0.bb
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Roy Li
22690105da ppp: Security Advisory - CVE-2015-3310
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-3310

Buffer overflow in the rc_mksid function in plugins/radius/util.c in
Paul's PPP Package (ppp) 2.4.6 and earlier, when the PID for pppd is
greater than 65535, allows remote attackers to cause a denial of
service (crash) via a start accounting message to the RADIUS server.

oe-core is using ppp 2.4.7, and this CVE says ppp 2.4.7 was not
affected, but I found the buggy code is the same between 2.4.6 and
2.4.7, so 2.4.7 should have this issue.

(From OE-Core rev: 5b549c6d73e91fdbd0b618a752d618deb1449ef9)

(From OE-Core rev: d2f15f2ec2d9e8ecdb9aa69a413663f3615d7e0c)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:33 +01:00
Jonathan Liu
3054c73445 qt4: add patch for BMP denial-of-service vulnerability
This backport does not include the aarch64 patches.

For further details, see:
https://bugreports.qt.io/browse/QTBUG-44547

(From OE-Core rev: 840fccf8ec7691f03deeb167487cde941ebea8bf)

(From OE-Core rev: c050f01d56c1eaf747ebb471b0b726b9cb3794d8)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-qt/qt4/qt4-4.8.6.inc
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:32 +01:00
Yue Tao
c5a583e8bd libsndfile: Security Advisory - libsndfile - CVE-2014-9496
Backport two commits from libsndfile upstream to fix a segfault and
two potential buffer overflows.

(From OE-Core rev: e2fdc340c109bd64b1520443b27bd42a0faef0e0)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:32 +01:00
Robert Yang
7113efd02d license.bbclass: set dirs for do_populate_lic_setscene
Fixed:
ERROR: Build of do_populate_lic failed
ERROR: Traceback (most recent call last):
  File "bitbake/lib/bb/build.py", line 497, in exec_task
    return _exec_task(fn, task, d, quieterr)
  File "bitbake/lib/bb/build.py", line 437, in _exec_task
    exec_func(func, localdata)
  File "bitbake/lib/bb/build.py", line 212, in exec_func
    exec_func_python(func, d, runfile, cwd=adir)
  File "/home/nxadm/nx/ala-blade44.1/builds-2015-03-09-163005/qemuppc_world_oe_bp/bitbake/lib/bb/build.py", line 237, in exec_func_python
    os.chdir(cwd)
OSError: [Errno 2] No such file or directory: 'bitbake_build/tmp/work/ppc7400-wrs-linux/taglib/1.9.1-r0/build'

When running setscene, the cwd is $B, which may be removed by
autotools.bbclass or cmake.bbclass during a rebuild.

(From OE-Core rev: 29872741d1d118e32cc04469535fed1b892b92e6)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster@smtp.gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:31 +01:00
Robert Yang
b469799103 perf: add LIBNUMA_DEFINES
Fixed:
WARNING: QA Issue: perf rdepends on numactl, but it isn't a build dependency? [build-deps]

The numactl is in meta-oe.

(From OE-Core rev: bf7bbcf1f28f83b08b9067b13352af477bf48b37)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Armin Kuster <akuster@smtp.gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:31 +01:00
Martin Jansa
0891b8789d squashfs-tools: build and install unsquashfs as well
* it's useful for debugging corrupt squashfs images from mksquashfs

(From OE-Core rev: 2811ea0d0f9cc4e9a1d4eed71bbc2d0c77043a40)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Armin Kuster <akuster@smtp.gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:31 +01:00
Armin Kuster
b8b7df8304 curl: add a few missing security fixes
CVE-2014-3707
CVE-2014-8150
CVE-2015-3153

not affected by:  CVE-2014-8151

(From OE-Core rev: cfcda9db45350d03158569c8c01e448cb426de5a)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:31 +01:00
Maxin B. John
0c1c0877e8 curl: several security fixes
Fixes below listed bugs:
1. CVE-2015-3143
2. CVE-2015-3144
3. CVE-2015-3145

Dropped: 4. CVE-2015-3148
SPNEGO was introduced in 7.39, so this version is not affected.

(From OE-Core rev: e525ef63ed2b4f3a250caf0748637b7f16b34d90)

Signed-off-by: Maxin B. John <maxin.john@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:31 +01:00
Armin Kuster
c930052636 tzdata: update to 2015d
Changes affecting future time stamps

Egypt will not observe DST in 2015 and will consider canceling it
permanently.  For now, assume no DST indefinitely.
(Thanks to Ahmed Nazmy and Tim Parenti.)

Changes affecting past time stamps
America/Whitehorse switched from UTC-9 to UTC-8 on 1967-05-28, not
1966-07-01.  Also, Yukon's time zone history is documented better.
(Thanks to Brian Inglis and Dennis Ferguson.)

Change affecting past and future time zone abbreviations
The abbreviations for Hawaii-Aleutian standard and daylight times
have been changed from HAST/HADT to HST/HDT, as per US Government
Printing Office style.  This affects only America/Adak since 1983,
as America/Honolulu was already using the new style.

(From OE-Core rev: b9f366ab4e0a9cad69b631f402b9afa02d40f667)

(From OE-Core rev: ff1547cccd840068500193d4aec772988a1f2023)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:30 +01:00
Armin Kuster
6d307e9b0c tzcode: update to 2015d
Changes affecting code

    zic has some minor performance improvements.

(From OE-Core rev: 3ab7e247b0662a1791169f16424abec426885f80)

(From OE-Core rev: 0c90fd63e8f4cd7179e836c3f20981913d19be75)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:30 +01:00
Cristian Iorga
d0315a6cdf neard: fix the install path in init scripts
The neard make scripts will place the daemon executable
in /usr/lib/neard/nfc/neard. Change the path accordingly
in init scripts.

Fixes [YOCTO #7390].

(From OE-Core rev: bd277f3a46e7fc764cc55c5354d2136fcfddc3c1)

(From OE-Core rev: d86fd6190b9ffd5012f229f319520615176c27ee)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-20 20:54:30 +01:00
Tudor Florea
5f0d25152b openssl: upgrade to 1.0.1p
This upgrade fixes CVE-2015-1793
Removed openssl-fix-link.patch. The linking issue has been fixed in openssl.

(From OE-Core rev: 208d1d72b0d248b12f800e566cb011aec9a1a084)

Signed-off-by: Tudor Florea <tudor.florea@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-07-15 15:25:43 +01:00
Ed Bartosh
9c4ff467f6 split_and_strip_files: regroup hardlinks to make build deterministic
Reverted 7c0fd561bad0250a00cef63e3d787573112a59cf

Created a separate group of hardlinks for the files inside
the same package. This should prevent stripped files from being
populated outside of package directories.

This turns out not to be straightforward and has overlap with the
other hardlink handling code in this area. The code is condensed
into a more concise and documented form.

[Original patch from Ed with tweaks from RP]

[YOCTO #7586]
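
A minimal sketch of the regrouping idea, not the actual patch
(pkgdir_of is a hypothetical callable mapping a file to its package
directory):

    import os
    from collections import defaultdict

    def group_hardlinks(files, pkgdir_of):
        # Key hardlinks by device/inode *and* package directory, so
        # links are only regrouped within the package that owns them.
        groups = defaultdict(list)
        for f in files:
            st = os.lstat(f)
            if st.st_nlink > 1:
                groups[(st.st_dev, st.st_ino, pkgdir_of(f))].append(f)
        return groups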

(From OE-Core master rev: 82d00f7254b7d3bb6a167d675d798134884d1b19)

(From OE-Core rev: 96270e79a70960289856cf424c9e4c1894acb18c)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-05-15 18:13:40 +01:00
Fabrice Coulon
6adbd2deb9 meta/lib/oe/package.py: fix files ownership in packages
This fixes the problem with the ownership of files in packages.
The do_install task was producing correct and expected output, but when
the files were put into, e.g., an rpm package, the ownership could
differ from that in the do_install task.

[YOCTO #7428]

(From OE-Core master rev: 1a50cc5aeafff0d8ee6c4a41dd2770ecd31455f0)

(From OE-Core rev: ad1a50a549377a0a74c51e20e53f146011e6c269)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Fabrice Coulon <fabrice.coulon@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-05-15 18:13:40 +01:00
Reinette Chatre
9fd145d27e init-install-efi.sh: fix gummiboot entry installation
After selecting the "install" gummiboot option of a Live image we are
seeing a boot failure resulting from the gummiboot entries not being
installed correctly. This is a problem in the init-install-efi.sh
script, which incorrectly installs the gummiboot entries into the root
filesystem rather than the boot partition. We fix it by installing the
entries in the boot partition.

(From OE-Core rev: c9b06c79ed8a082d1b385e9f61721aeeda9bf1af)

(From OE-Core rev: 4a44c9287d80dec0973b31d30d3d6250ce4b4df4)

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Acked-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-05-01 12:35:40 +01:00
Saul Wold
29812e6173 busybox: unbreak tar of uncompressed files
A patch was added to fix compressed tar files but broke uncompressed
tar files. This fix is from the busybox mailing list:

http://lists.busybox.net/pipermail/busybox/2014-January/080389.html

[YOCTO #7645]

(From OE-Core rev: 2e67a2d35ffcaa0d35363b05209060aff7026c9a)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-29 17:53:35 +01:00
Martin Jansa
80bc382c62 fontcache: allow passing a different fontconfig cache dir
(From OE-Core rev: fc732ee788a254ec388cff8fe5619348014255d3)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-27 15:02:38 +01:00
Jonathan Liu
0dc2a530df postinst-intercepts/update_font_cache: fix ownership of fontconfig cache
The file ownership of the cache files in /var/cache/fontconfig needs to
be set to root:root otherwise it inherits the user and group id of the
build user.

[YOCTO #7411]

(From OE-Core rev: 0ecccc7e75f2833c4f2599ce46b6fb9a0bc06e22)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-27 15:02:38 +01:00
Martin Jansa
6579836d82 pulseaudio: use stricter PACKAGES_DYNAMIC
* I don't see any usage for libpulse-* packages
* adding '-' resolves the issue where a separate recipe for
  pulseaudio-modules-droid isn't built to satisfy an RDEPENDS
  with the same name, because the generic pulseaudio recipe seems
  to RPROVIDE it through PACKAGES_DYNAMIC

(From OE-Core rev: 88dfdf7f87f5ea9f5b6200896fc7e7f5374929df)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-27 15:02:38 +01:00
Paul Eggleton
3037db60f7 bitbake: lib/bb/utils: add safeguard against recursively deleting things we shouldn't
Add some very basic safeguard against recursively deleting paths such
as / and /home in the event of bugs or user mistakes.

Addresses [YOCTO #7620].
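
A minimal sketch of such a safeguard, not the actual bb.utils code:

    import os

    def safe_remove(path, recurse=False):
        if recurse:
            abspath = os.path.abspath(path)
            # refuse to recursively delete / or any top-level path
            # such as /home
            if abspath == "/" or os.path.dirname(abspath) == "/":
                raise Exception("refusing to delete %s" % abspath)
        # normal removal would proceed here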

(Bitbake master rev: 56cddeb9e1e4d249f84ccd6ef65db245636e38ea)

(Bitbake rev: fbf1c39641f78d553961974a2bb96256eb9496e7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-24 11:33:13 +01:00
Anders Darander
46f73593c0 bitbake: fetch/git: Remove a possible trailing '/' in subpath
If the subpath parameter to the git fetcher ends with a trailing '/',
bb.utils.prunedir() will be called on '/'...

Fixes [YOCTO #7620].
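
The fix amounts to normalising the parameter before any paths are
derived from it; a sketch:

    def normalize_subpath(subpath):
        # a trailing '/' would make the pruned path degenerate to '/'
        return subpath.rstrip("/")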

(Bitbake master rev: 380a3fb372c8b0a53dd7528562e6e7a222dc76ef)

(Bitbake rev: faffa1c4a4d8353b21a0d359076153da0dc31a05)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-24 11:33:13 +01:00
Scott Rifenbark
f460fd853b ref-manual: Updates to the TCLIBC variable description
An old note still existed in this entry that stated we don't support
glibc.  This is not true.  I deleted the note.

Reported-by: Paul Eggleton <paul.eggleton@intel.com>
(From yocto-docs rev: 3a8f5210dfa401bf2d2c9df86dd744c6b39671d7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-24 11:07:54 +01:00
Martin Jansa
d098f7ed05 valgrind: enable building on 4.x kernel
(From OE-Core rev: 7351c03e3bd674fcad4cb805bba3f34ef20d7003)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-20 15:31:42 +01:00
Richard Purdie
192a9e1031 build-appliance-image: Update to dizzy head revision
(From OE-Core rev: 907ef15bb8bf6bd4fb9edb529240ed9982626401)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-18 08:57:38 +01:00
Saul Wold
c4ebd5d28b dpkg: Fix patch to adjust for older code
The older version of dpkg uses subproc_wait_check() instead of the newer subproc_reap()

(From OE-Core rev: 3e5632a02ee8f07705d5c34a57f36c6932a2e6cb)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-18 08:57:09 +01:00
Richard Purdie
15892013ce build-appliance-image: Update to dizzy head revision
(From OE-Core rev: 723e5486e89c6ebe4533ad05ebe5346744c452b1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:44:19 +01:00
Richard Purdie
489df6edb8 poky: Update to 1.7.2 release version
(From meta-yocto rev: f1b296085c8e511861de951db594884bc7ab42c8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:43:24 +01:00
Richard Purdie
3251b84c20 gcc-target: Don't install target gcc libdir files
Installing /usr/lib/gcc/* means we'd have two copies, one from gcc-cross
and one from here. These can confuse gcc-cross where includes use #include_next
and builds track file dependencies (e.g. perl and its makedepends code).
For determinism, we never install these to the sysroot and rely on the
copy from gcc-cross.

[YOCTO #7287]

(From OE-Core rev: 15b3324b769dc92e1b0d4b9da9fbfccbc8dde9dd)

(From OE-Core rev: e80025efbfc8e8df01950045975d103b6d7f87b4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:32 +01:00
Bryan Evenson
1f994e8171 initscripts: Remove /etc/volatile.cache on upgrade
/etc/volatile.cache is a cached copy of a script (which is
generated by /etc/init.d/populate-volatile.sh) that generates
the volatile filesystem directories.  Since volatile.cache is
a generated file, it is not necessarily changed if
populate-volatile.sh is updated.  As a result, the stale script
can add/remove the wrong directories on the next system boot.

If initscripts is being upgraded, make sure volatile.cache gets
deleted.

(From OE-Core rev: 3bdc098028732a4b22b1e65e5566b4cbe105fd41)

Signed-off-by: Bryan Evenson <bevenson@melinkcorp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:32 +01:00
Bryan Evenson
433ec67686 base-files: Check for /run and /var/lock softlinks on upgrade
Commit ea647cd9ee moved the locations
of /run and /var/lock to match the FHS 3 draft specifications.
However, the install doesn't remove the existing directories.
As a result, upgrading a system may result in /run as a softlink
to /var/run and /var/run as a softlink to /run, creating a circular
link.

During pre-install, check for the existence of the old softlinks and
remove them so the new directories can be installed.

(From OE-Core rev: edeeee8432dc749b02e5e6eca0503229e394ebd3)

Signed-off-by: Bryan Evenson <bevenson@melinkcorp.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:32 +01:00
Richard Purdie
f77133783e dpkg-native: Avoid 'file changed' errors from tar
Hardlink counts during do_package_write_deb can change, causing dpkg-deb
failures. We don't care about this error case, so avoid it by checking
the tar exit code.

[YOCTO #7529]

(From OE-Core rev: 8ee36a5f2f9367550d28bf271afc53bca6ff3d5f)

(From OE-Core rev: bcb124931af57dc2f9d8fe9cbbabd5f8ee58e414)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:32 +01:00
Aníbal Limón
29855df01e files/toolchain-shar-template.sh: fix replace target_sdk_dir twice in environment setup file
When specifying a target sdk dir that contains the default install dir
as a subdir,

	target_sdk_dir=/opt/poky/$version/
	custom_target_sdk_dir=/opt/poky/$version/some

the target_sdk_dir variable in the environment-setup file is replaced
twice, causing it to point to the wrong PATH.

To fix this, filter the environment-setup file out of the second
replacement.

[YOCTO #7032]
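
A small illustration of the hazard, using hypothetical paths:

    line = "export PATH=/opt/poky/2.0/sysroots/x86_64-pokysdk-linux/usr/bin"
    # first pass: rewrite the default install dir to the custom one
    line = line.replace("/opt/poky/2.0", "/opt/poky/2.0/some")
    # an unfiltered second pass matches the same prefix again
    line = line.replace("/opt/poky/2.0", "/opt/poky/2.0/some")
    print(line)  # now /opt/poky/2.0/some/some/... -- the wrong PATH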

(From OE-Core rev: 02ecaa69abe97fe2f01cd609e0e59933c0f9ddbf)

(From OE-Core rev: 9f2825cf35d04ec99d29e0e4266410a8843dd80d)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Daniel Dragomir
7bd5bf8947 gcc-runtime: Remove libgfortran data from recipe
Remove libgfortran packages from the PACKAGES list, as libgfortran
has had a separate recipe since commit

5bde5d9b39
gcc: Allow fortran to build successfully in 4.8

Otherwise, when fortran support is enabled in the compiler, both the
libgfortran and gcc-runtime recipes will create the same files and will
try to install them. This will cause errors:

ERROR: The recipe libgfortran is trying to install files into a shared
area when those files already exist. Those files and their manifest
location are: ...
Please verify which recipe should provide the above files.

(From OE-Core rev: 872342fa3d08edede4a0105ac3ddb0f2ae3224b4)

(From OE-Core rev: de2aa7a56790581406f219339c9022638cd47494)

Signed-off-by: Daniel Dragomir <daniel.dragomir@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Jonathan Liu
d03e94ef47 fontcache.bbclass: prepend to PACKAGEFUNCS instead of appending
Appending to PACKAGEFUNCS results in the font packages missing the
postinst/postrm scripts and the fontconfig cache not being generated
in /var/cache/fontconfig when creating images or installing font
packages. This is because the package data has already been emitted
by emit_pkgdata in PACKAGEFUNCS. Prepend to PACKAGEFUNCS to ensure
add_fontcache_postinsts is executed before emit_pkgdata.

[YOCTO #7410]

(From OE-Core rev: 7c6d8054bb87e56180920d790efc25d42e25ab8c)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Jonathan Liu
bf6f9f44ad libunwind: backport patch to link against libgcc_s instead of libgcc
(From OE-Core rev: 986b46517ed9cd0821821371faab68e92c2d6dab)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Richard Purdie
54e3c92279 autotools: Avoid find race for S = "${WORKDIR}"
For recipes with PACKAGES_remove = "${PN}", the find which removes .la files
can race against deletion of other directories in WORKDIR e.g.:

find: '/home/autobuilder/yocto-autobuilder/yocto-worker/nightly-oe-selftest/build/build/tmp/work/qemux86_64-poky-linux/init-ifupdown/1.0-r7/sstate-build-populate_lic': No such file or directory
| WARNING: /home/autobuilder/yocto-autobuilder/yocto-worker/nightly-oe-selftest/build/build/tmp/work/qemux86_64-poky-linux/init-ifupdown/1.0-r7/temp/run.do_configure.6558:1 exit 1 from
|   find /home/autobuilder/yocto-autobuilder/yocto-worker/nightly-oe-selftest/build/build/tmp/work/qemux86_64-poky-linux/init-ifupdown/1.0-r7 -name \*.la -delete

The simplest fix is to add the find option which ignores these kinds of races.

[YOCTO #7522]

(From OE-Core rev: dd8099ca3092fbd5c685e5ef1b1c5a8185a6893d)

(From OE-Core rev: 1334c1f78b0020855a2579cfc1f4ab077151e917)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Robert Yang
c6b0ce743f cpio: fix CVE-2015-1197
Additional directory traversal vulnerability via symlinks
cpio CVE-2015-1197

Initial report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774669
Upstream report:
https://lists.gnu.org/archive/html/bug-cpio/2015-01/msg00000.html

And fix the indent in SRC_URI.

[YOCTO #7182]

(From OE-Core rev: af18ce070bd1c73f3619d6370928fe7e2e06ff5e)

(From OE-Core rev: 68aaca0ff60a9cc770583d3dd89b0c4281b88675)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:31 +01:00
Robert Yang
6923ef6f94 patch: fix CVE-2015-1196
A directory traversal flaw was reported in patch:

References:
http://www.openwall.com/lists/oss-security/2015/01/18/6
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775227
https://bugzilla.redhat.com/show_bug.cgi?id=1182154

[YOCTO #7182]

(From OE-Core rev: 4c389880dc9c6221344f7aed221fe8356e8c2056)

(From OE-Core rev: e2032c5788f7a77aa0e4e8545b550551c23a25fb)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Sona Sarmadi
9bbe7473a9 e2fsprogs: CVE-2015-0247
Fixes a heap buffer overflow in lib/ext2fs/openfs.c which allows
a trivial arbitrary memory write under certain conditions.

References
http://git.kernel.org/cgit/fs/ext2/e2fsprogs.git/commit/?id=f66e6ce4
http://www.ocert.org/advisories/ocert-2015-002.html

(From OE-Core rev: 572437720b6698a3a10627fcd9654ef10f827836)

(From OE-Core rev: 67ac6070b1b11a3459ed8fd7e145eb476e493dc6)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Richard Purdie
9c2e4e50a8 e2fsprogs: Add a patch to speedup mkfs
See the patch description, this adds a tweak to an algorithm to improve
core-image-sato-sdk mkfs time from over 8 minutes to about 35s.

Needs discussion upstream but seems reasonable for our uses of it.

(From OE-Core rev: 468fa9a7fac86bb0fcd3cbd18dc1492b57ca25f3)

(From OE-Core rev: 5aee64c9577affc35ad1555f2a7eb9d287b9fda4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Armin Kuster
e8a260c9b8 util-linux: fix CVE-2014-9114
Backport a patch to fix CVE-2014-9114.
The patch has been integrated in util-linux-2.26.

[YOCTO #7180]

Hand applied due to version differences.

(From OE-Core rev: de0c751f57de118bba808f85fa255bb2d99ed9cb)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Armin Kuster
8e7d7e5c3a tzdata: update to 2015b
Changes affecting future time stamps

Mongolia will start observing DST again this year, from the last
Saturday in March at 02:00 to the last Saturday in September at 00:00.
(Thanks to Ganbold Tsagaankhuu.)

Palestine will start DST on March 28, not March 27.  Also,
correct the fall 2014 transition from September 26 to October 24.
Adjust future predictions accordingly.  (Thanks to Steffen Thorsen.)

Changes affecting past time stamps

The 1982 zone shift in Pacific/Easter has been corrected, fixing a 2015a
regression.  (Thanks to Stuart Bishop for reporting the problem.)

Some more zones have been turned into links, when they differed
from existing zones only for older time stamps.  As usual,
these changes affect UTC offsets in pre-1970 time stamps only.
Their old contents have been moved to the 'backzone' file.
The affected zones are: America/Antigua, America/Cayman,
Pacific/Midway, and Pacific/Saipan.

Changes affecting time zone abbreviations

Correct the 1992-2010 DST abbreviation in Volgograd from "MSK" to "MSD".
(Thanks to Hank W.)

(From OE-Core rev: b00539285ffce0b7d954bc0610c986aa53c8255f)

(From OE-Core rev: 7f8c1229ec79d256d7249725d8a90312c452e9e7)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Armin Kuster
5ed8733bac tzcode: update to 2015b
Changes affecting code

Fix integer overflow bug in reference 'mktime' implementation.
(Problem reported by Jörg Richter.)

Allow -Dtime_tz=time_t compilations, and allow -Dtime_tz=... libraries
to be used in the same executable as standard-library time_t functions.
(Problems reported by Bradley White.)

Changes affecting commentary

Cite the recent Mexican decree changing Quintana Roo's time zone.
(Thanks to Carlos Raúl Perasso.)

Likewise for the recent Chilean decree.  (Thanks to Eduardo Romero Urra.)

Update info about Mars time.

(From OE-Core rev: fbd98e677dcf6324cf713d888aa85c4264f42ec9)

(From OE-Core rev: 098055d44b20010771b420a0ff5640ea7921e455)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:30 +01:00
Robert Yang
ff71dd264a tzdata: fix HOMEPAGE
(From OE-Core rev: 7efed4d963bd8424af0ddebc3a09226182232759)

(From OE-Core rev: 5217300dcb40e77dae51f2ad5f05aee17a47adef)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Robert Yang
12f3536d36 which 2.18: fix SRC_URI
It is the GPLv2+ version; the old SRC_URI is down, so use fedoraproject's
repo. Its homepage is also down, but I can't find a new one for it.

(From OE-Core rev: 41c4bad11e4a8ebc13f2e4a9712265f3946bf0a8)

(From OE-Core rev: e9ed18b3f207ccd5f49118b674006ae3ea37db2d)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Robert Yang
94e96643db dpkg: add perl to RDEPENDS
perl scripts:
packages-split/dpkg/usr/bin/dpkg-parsechangelog:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-mergechangelogs:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-architecture:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-vendor:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-shlibdeps:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-scanpackages:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-buildpackage:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-genchanges:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-gensymbols:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-distaddfile:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-buildflags:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-checkbuilddeps:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-gencontrol:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-scansources:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-source:#!/usr/bin/perl
packages-split/dpkg/usr/bin/dpkg-name:#!/usr/bin/perl
packages-split/dpkg/usr/lib/dpkg/parsechangelog/debian:#!/usr/bin/perl

(From OE-Core rev: eb7179e3c182dc456956fd8ae7e0b512488ad0f2)

(From OE-Core rev: bddfec608b065c54ddf2cd3c8bb7668aba929927)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Enrico Scholz
b6cc30adf4 serf: fix 'ccache' builds
'scons' cleans the environment which breaks ccache builds because
CCACHEDIR can point to an unexpected location:

| ccache arm-linux-gnueabi-gcc ... context.c
| ccache: failed to create .../serf/1.3.8-r0/.home/.ccache (No such file or directory)

Issue is described in

  http://www.scons.org/wiki/ImportingEnvironmentSettings

and because 'bitbake' cleans the environment we can pass it through
completely instead of trying to enumerate the needed variables.

With the 'env.patch' the FULLCC variable is not needed anymore (which
would break when CC is 'ccache arm-...-gcc' and host ccache is used)
because the correct $PATH is available during scons build:

| sh: .../sysroots/x86_64-oe-linux/usr/bin/arm-linux-gnueabi/ccache: No such file or directory
| scons: *** [context.o] Error 127

(From OE-Core rev: 24c35c63b85621b263e7a211dc39b2257154cd28)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Chen Qi
feaf9a98df package_manager.py: fix rootfs failure with multilib enabled
With the current code, if we use debian package backend and enable
multilib support, the do_rootfs process would always fail with error
messages like below.

    E: Unable to locate package packagegroup-core-boot

This patch fixes the above problem.

(From OE-Core rev: d140d556ae30b6dbd0ffce8882c3e22b17050820)

(From OE-Core rev: c4306385f6f2139474a4389a465c1650e10b2444)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Bruce Ashfield
9bdf737982 linux-yocto/3.17: update to v3.17.8
Updating to the latest korg stable version.

(From OE-Core rev: 4d342c2531bbb33c9101dcd7a669a620c8cf6917)

(From OE-Core rev: 5eb9911fd8c0c83d51c377b088c85ee10605376f)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-kernel/linux/linux-yocto-tiny_3.17.bb
	meta/recipes-kernel/linux/linux-yocto_3.17.bb
	remove arm64, not supported in dizzy.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Saul Wold
c316df044a linux-yocto-tiny_3.17: Update to actually use 3.17 git repo
The named release was still using the -dev git repo which did not contain
the SRCREV referenced in the numbered/named version.

(From OE-Core rev: b4f2f39ce0f4690ed51d14d1034b9f5e21c0f5a0)

(From OE-Core rev: 9b5eb3b534e0153e40a16b7b83d6eaa2dc0f155e)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:29 +01:00
Bruce Ashfield
4cf1a6af8e linux-yocto/3.14: update to 3.14.29
Updating to the latest korg -stable release for 3.14.

(From OE-Core rev: a6a64ee87182c6fa62117e68fafc4ec25ceefc0b)

(From OE-Core rev: e34f906abbe77ec4979ac40f01e35989445db86b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-kernel/linux/linux-yocto_3.14.bb

	removed arm64 since it's not supported in Dizzy.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:28 +01:00
Bruce Ashfield
35e54baa51 linux-yocto/3.10: update to v3.10.65
Integrating the latest korg -stable updates for 3.10 LTSI.

(From OE-Core rev: d159e9db537f68ed91d4a1ab0f432ac1d0020697)

(From OE-Core rev: 407023e0ed7b19b548448dbfb4c03b1c88b7ba21)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 22:39:28 +01:00
Martin Jansa
4ac156de84 powertop: Fix build for !uclibc
* EXTRA_LDFLAGS isn't defined for !uclibc and configure fails
  when it reads it unexpanded, see config.log snippet:

  configure:4177: checking whether the C compiler works
  configure:4199: i586-oe-linux-gcc  -m32 -march=i586 --sysroot=/OE/sysroots/qemux86  -O2 -pipe -g -feliminate-unused-debug-types  -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed ${EXTRA_LDFLAGS} conftest.c  >&5
  i586-oe-linux-gcc: error: ${EXTRA_LDFLAGS}: No such file or directory
  configure:4203: $? = 1
  configure:4241: result: no

(From OE-Core rev: c8f9b5c9a8e5179c2013f25decd6a5483df9c716)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-17 11:35:16 +01:00
Ross Burton
0143d3a6a9 bitbake: bitbake: tests/data: add test for incorrect remove behaviour
The _remove operator isn't working correctly when used with a variable that
expands to several items, so add a test case to exercise this path.

(Bitbake rev: cb2a62a5fbffb358528a85b46c1fc6383286cb9d)

(Bitbake rev: ed950f95fc80f069e800e9c6e785641f307e6512)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-15 15:56:06 +01:00
Ross Burton
fecee8ffdc bitbake: bitbake: data_smart: split expanded removal values when handling _remove
Given these assignments:

 TEST="a b c d"
 TEST_remove = "b d"

TEST evaluates to "a c".  However, if the _remove override is given as a
variable:

 TEST="a b c d"
 FOO = "b d"
 TEST_remove = "${FOO}"

TEST evaluates to "a b c d", because when FOO is expanded it isn't split into a
list.

Solve this by splitting all members of removeactive once they've been expanded.

[ YOCTO #7272 ]
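
A sketch of the corrected behaviour, assuming d is the datastore:

    def expanded_removals(d, removeactive):
        # expand first, then split, so a value like "${FOO}" contributes
        # each of its whitespace-separated members to the removal list
        removed = []
        for item in removeactive:
            removed.extend(d.expand(item).split())
        return removed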

(Bitbake rev: 207013b6dde82f9654f9be996695c8335b95a288)

(Bitbake rev: c25b0e0ca289f6ad0ed697a0b0252fa48ab5dd0b)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-15 15:56:06 +01:00
Richard Purdie
67cabcd94f toolchain-scripts: Allow the CONFIGSITE_CACHE variable to be overridden
In multilib and baremetal configurations, this variable can cause a variety of
problems due to the use of TCLIBC. At least allowing it to be overridden
is a start and allows various configurations to avoid the issue.

(From OE-Core rev: 816a62af5181e940f4b5e5f35f6775499fad94eb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-15 15:56:06 +01:00
Brendan Le Foll
96852794bc openssl: Fix x32 openssl patch which was not building
x32 builds were broken because the patch rebase had not been done
correctly for this patch.

(From OE-Core rev: 8e46230fe94c44ab81a0ca9cb8b2c9f7b605e226)

Signed-off-by: Brendan Le Foll <brendan.le.foll@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-15 15:56:06 +01:00
Richard Tollerton
c59e3bd26d bitbake: data.py: fixes bad substitution when running devshell
Running bitbake inside make results in the exported environment variable
MAKEOVERRIDES="${-*-command-variables-*-}", which the shell chokes on
when trying to expand it. But of course, it probably shouldn't have been
trying to expand it in the first place -- so just escape the dollar
sign.
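
A small sketch of the escaping:

    value = "${-*-command-variables-*-}"
    # escape the literal '$' so the emitted shell script does not try
    # to expand it as a parameter substitution
    print('export MAKEOVERRIDES="%s"' % value.replace("$", "\\$"))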

(Bitbake rev: 18cd0ce6a55c9065c3f1bf223b47d817b5efcd8f)

(Bitbake rev: 34226a9e02f319a7547967bbdaca3ca918927dd1)

Signed-off-by: Richard Tollerton <rich.tollerton@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:27:19 +01:00
Richard Purdie
c18e52c0c8 bitbake: cooker/server: Fix up 100% CPU usage at idle
The recent inotify changes are causing a 100% cpu usage issue in the
idle handlers. To avoid this, we update the idle functions to optionally
report a float value which is the delay before the function needs to be
called again. 1 second is fine for the inotify handler, in reality its
more like 0.1s due to the default idle function sleep.

This reverts performance regressions of 1.5 minutes on a kernel build
and ~5-6 minutes on a image from scratch.

(Bitbake rev: 0e0ba408c2dce14a0fabd3fdf61d8465a031495b)

(Bitbake rev: 88dfe16b5abd804bae0c1e3b60cb93cb951cbc3f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:48 +01:00
Alexandru DAMIAN
8e64c535af bitbake: cooker: read file watches on server idle
The inotify facility monitoring changes to the config files
could be overwhelmed by massive changes to the watched files
while the server is running.

This patch adds checking of the notification watches to the
server idle functions, in addition to the cooker updateCache
command, which executes only infrequently, thus preventing
the notification buffer from overflowing.

[YOCTO #7316]

(Bitbake rev: 996e663fd5c254292f44eca46f5fdc95af897f98)

(Bitbake rev: b44694b1efc7389536df2f901a8b70321edfeeba)

Signed-off-by: Alexandru DAMIAN <alexandru.damian@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:48 +01:00
Richard Purdie
763bff1f22 bitbake: cooker: Improve pyinotify performance
Benchmarks show that the introduction of pyinotify regressed
performance. This patch ensures we only call the add_watch() function
for new entries, not ones we've already processed which does improve
performance as measured by "time bitbake -p".

This doesn't completely remove the overhead but it does substantially
reduce it.
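
A minimal sketch of the caching idea, assuming a pyinotify
WatchManager (not the actual cooker code):

    import pyinotify

    wm = pyinotify.WatchManager()
    watched = set()

    def add_watch_once(path, mask=pyinotify.IN_CLOSE_WRITE):
        # only issue the relatively expensive add_watch() call for
        # paths we have not already processed
        if path not in watched:
            wm.add_watch(path, mask)
            watched.add(path)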

(Bitbake rev: 493361f35f6cc332d4ea359a2695622c2c91a9c2)

(Bitbake rev: f668b347a8f9563f41d454288b9d4632190f308f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:48 +01:00
Richard Purdie
0d1f75b9d6 bitbake: cooker: Further optimise pyinotify
We currently add crazy numbers of watches on files. The per-user limit is 8192
by default, and on a system handling multiple builds this can be an issue.

We don't need to watch all files individually; we can watch the directory
containing each file instead. This gives better resource utilisation and better
performance, further reverting some of the performance regression seen with the
introduction of pyinotify.
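
A sketch of the directory-level watch, not the actual cooker code:

    import os
    import pyinotify

    wm = pyinotify.WatchManager()

    def watch_file_via_dir(path, mask=pyinotify.IN_MODIFY):
        # one watch on the directory covers every file inside it,
        # keeping the watch count well under the per-user inotify limit
        wm.add_watch(os.path.dirname(path), mask)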

(Bitbake rev: a2d441237916a99405b800c1a3dc39f860100a8c)

(Bitbake rev: 6ab3945fc54b2a242292a874d78ebd8cccb99573)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:47 +01:00
Richard Purdie
542d9770f2 bitbake: cooker: Fix pyinotify handling of ENOENT issues
We try to add watches for files that don't exist but that, if they did, would
influence the parser. The parent directory of these files may not exist, in which
case we need to watch any parent that does exist for changes. This change
implements that fallback handling.
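
A sketch of that fallback, walking up until an existing parent is
found (not the actual cooker code):

    import os
    import pyinotify

    wm = pyinotify.WatchManager()

    def watch_nearest_parent(path, mask=pyinotify.IN_CREATE):
        # the file (or even its directory) may not exist yet; watch
        # the closest existing ancestor so we notice when it appears
        while path and not os.path.exists(path):
            path = os.path.dirname(path)
        if path:
            wm.add_watch(path, mask)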

(Bitbake rev: 979ddbe4b7340d7cf2f432f6b1eba1c58d55ff42)

(Bitbake rev: 6d0abc6a5c9b8b37eecfa63fbcb5343162bc9311)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:47 +01:00
Richard Purdie
13cb1aea8c bitbake: cooker/cache/parse: Implement pyinotify-based reconfigure
Memory resident bitbake has one current flaw: changes in the base configuration
are not noticed by bitbake. The parsing cache is also refreshed on each invocation
of bitbake (although the mtime cache is not cleared, so this is pointless).

This change adds in pyinotify support and adds two different watchers, one
for the base configuration and one for the parsed recipes.

Changes in the latter will trigger a reparse (and an update of the mtime cache).
The former will trigger a complete reload of the configuration.

Note that this code will also correctly handle creation of new configuration files
since the __depends and __base_depends variables already track these for cache
correctness purposes.

We could be a little more clever about parsing cache invalidation; right now we just
invalidate the whole thing and recheck. For now, it's better than what we have and doesn't
seem to perform that badly.

For education and QA purposes I can document a workflow that illustrates this:

$ source oe-init-build-env-memres
$ time bitbake bash
[base configuration is loaded, recipes are parsed, bash builds]
$ time bitbake bash
[command returns quickly since all caches are valid]
$ touch ../meta/classes/gettext.bbclass
$ time bitbake bash
[reparse is triggered, time is longer than above]
$ echo 'FOO = "1"' >> conf/local.conf
$ time bitbake bash
[reparse is triggered, but with a base configuration reload too]

As far as changes go, I like this one a lot, it makes memory resident bitbake
truly usable and may be the tweak we need to make it the default.

The new pyinotify dependency is covered in the previous commit.

(Bitbake rev: 0557d03c170fba8d7efe82be1b9641d0eb229213)

(Bitbake rev: 47809de6459deb346929e4ca6efa87a997cfcb38)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:47 +01:00
Richard Purdie
55303f7a38 bitbake: bitbake: Add pyinotify to lib/
We need inotify support within bitbake and pyinotify provides the best
mechanism to add this. We have a few options:

a) Depend on pyinotify from the system
b) Add in our own copy
c) Only use pyinotify in cases like the memory resident server

For a), it would mean adding in dependencies, updating documentation and
generally creating churn for users as well as having implications for things
like the build-appliance recipe.

It turns out that glibc has the C functionality we need from version 2.4
onwards (2006) and that we just need a single python file for b); there
is no binary module needed. We therefore add in a copy of pyinotify 0.9.5
into the tree meaning we can depend on it simply and unconditionally.

c) is unattractive as we need fewer possible code paths, not more.

(Bitbake rev: d49004a4e247e3958a2f7ea9ffe5ec92794e1352)

(Bitbake rev: 2835b12288cf0c46586d6f708a0ee0b5e025cba3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-11 16:21:47 +01:00
Scott Rifenbark
8a00b63e43 ref-manual: Corrected the "package_rpm.bbclass" section.
A cut-and-paste error had left a "package_deb" string in the
first sentence of the section.  Replaced with "package_rpm."

Reported-by: Geoffroy VanCutsem <geoffroy.vancutsem@intel.com>
(From yocto-docs rev: 03d13476caec1d2219017ea904875dfff3219aa7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-10 14:28:56 +01:00
Richard Purdie
ec75238f6c Revert "file: Update CVE patch to ensure file gets built correctly"
This reverts commit d9519a17ea2ca07433164697a7222dd2b6dd2b9a.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-28 10:56:50 +00:00
Richard Purdie
b90dd7944e file: Update CVE patch to ensure file gets built correctly
If we touch both files, we can end up in a situation where magic.h should be
rebuilt and isn't. The easiest fix is not to touch the generated files which
ensures the timestamps are such that it is always rebuilt.

(From OE-Core rev: d9519a17ea2ca07433164697a7222dd2b6dd2b9a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-28 10:45:27 +00:00
Jonathan Liu
82c8438428 liburcu: revert ARM GCC blacklist commit
This fixes the following error when building liburcu:
"Your gcc version produces clobbered frame accesses"

OE-Core is using a patched GCC 4.8.2 which is able to compile liburcu
properly.

(From OE-Core rev: aaf5ae09f14578ff1961ad3b199aacbc77a1f8ff)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-27 21:44:39 +00:00
Jonathan Liu
0fecd492b2 systemd: fix /var/log/journal ownership
The ownership needs to be explicitly set otherwise it inherits the user
and group id of the build user.

(From OE-Core rev: b81ad1d960fc0555f6255a887f6a3b524893703e)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-27 21:44:39 +00:00
Max Krummenacher
f2a6123ba3 udev: don't keep ptest testdata lying around
Only unpack udev's testdata right before executing the tests and cleanup
afterwards.

udev's testsuite can be used by ptest. However currently the testdata against
which its functionality is tested is installed in the sysroot at udev install
time.
If the sysroot is used with qemu, the testdata makes qemu enter an infinite
loop.
http://lists.openembedded.org/pipermail/openembedded-core/2014-September/097098.html

This has already been fixed for the systemd udev flavour.
https://bugzilla.yoctoproject.org/show_bug.cgi?id=5664

(From OE-Core rev: 60c0b80048e1f8aae1a4aaa3619c84496a111ae2)

Signed-off-by: Max Krummenacher <max.oss.09@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-26 14:17:21 +00:00
Max Krummenacher
ff2621b86c udev: fix ptest rule syntax check
The ptest which checks for correct udev rules fails. This was
caused by files and paths missing on the build host.

(From OE-Core rev: 32fa3ff2849a74deeb13ac53cc65e212b9cffd92)

Signed-off-by: Max Krummenacher <max.oss.09@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-26 14:17:21 +00:00
Brendan Le Foll
2a3805a666 openssl: Upgrade to 1.0.1m
Security update, some patches modified to apply correctly mostly due to
upstream changing indentation/styling

* configure-targets.patch updated
* fix-cipher-des-ede3-cfb1.patch updated
* openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch updated
* openssl-avoid-NULL-pointer-dereference-in-dh_pub_encode.patch removed as it
was merged upstream in 3942e7d9ebc262fa5c5c42aba0167e06d981f004

(From OE-Core rev: 03739bcc1672df8f55c6428184670f1a8c8f80b2)

Signed-off-by: Brendan Le Foll <brendan.le.foll@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-25 15:05:35 +00:00
Andre McCurdy
4c6ceb07f0 busybox: libarchive: open_zipped() does not need to check extensions
Backport from busybox 1_22_stable branch:

  http://git.busybox.net/busybox/commit/?h=1_22_stable&id=28dd64a0e1a9cffcde7799f2849b66c0e16bb9cc

(From OE-Core rev: cd20b3c009a9c1743f5cb054710214231e5dfcfc)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-25 12:52:51 +00:00
Andre McCurdy
1f718df76e busybox: lzop: add overflow check (CVE-2014-4607)
Backport from busybox 1_22_stable branch:

  http://git.busybox.net/busybox/commit/?h=1_22_stable&id=5698ff93233b47218a677fd7facd8cc90211d1a4

(From OE-Core rev: 680fc6e7c571f70cffa9799c21604e0719504591)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-25 12:52:51 +00:00
Scott Rifenbark
93e3df91aa documentation: Updates to release month for rev history tables.
Using June 2015 for 1.7.2

(From yocto-docs rev: 7a7ccdaa8b7d39f5aef51452aaaec2030a779203)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-10 11:07:55 +00:00
Scott Rifenbark
dea47b2715 documentation: Preparation for 1.7.2 release.
Updates to the following:

 * poky.ent - set the variables for a 1.7.2 release
 * <manual>.xml files - Updated the manual revision history
   tables to have a 1.7.2 release entry using March of 2015
 * mega-manual.sed - Updated the 1.7.1 string to be 1.7.2 so
   the links will be local for the mega-manual.

(From yocto-docs rev: 5f0e7ccaa736ca02c24a6a8b0a48ec5161ddfc2a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-10 11:07:54 +00:00
Paul Eggleton
6e9632e979 lib/oe/package_manager: support exclusion from complementary glob process by regex
Sometimes you do not want certain packages to be installed when
installing complementary packages, e.g. when using dev-pkgs in
IMAGE_FEATURES you may not want to install all packages from a
particular multilib. This introduces a new PACKAGE_EXCLUDE_COMPLEMENTARY
variable to allow specifying regexes to match packages to exclude.

(From OE-Core master rev: d4fe8f639d87d5ff35e50d07d41d0c1e9f12c4e3)

(From OE-Core rev: e848484989307ae6826ba0f5217f7702322181e3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Brendan Le Foll <brendan.le.foll@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-03 14:34:10 +00:00
Javier Viguera
59a85d3a95 utils.bbclass: fix create_cmdline_wrapper
Similar to commit 4569d74 for create_wrapper function, this commit fixes
hardcoded absolute build paths in create_cmdline_wrapper.

Otherwise we end up with incorrect paths in users of this function. For
example the 'file' wrapper in current released toolchain:

exec -a
/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-fsl-arm/build/build/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-file/5.18-r0/image//opt/poky/1.7.1/sysroots/x86_64-pokysdk-linux/usr/bin/file
`dirname $realpath`/file.real --magic-file
/opt/poky/1.7.1/sysroots/x86_64-pokysdk-linux/usr/share/misc/magic.mgc
"$@"

(From OE-Core rev: 5102848f97a1821b12e83b2c415ce730c1b35f1b)

Signed-off-by: Javier Viguera <javier.viguera@digi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-03-03 14:34:10 +00:00
Scott Rifenbark
b630f2f536 ref-manual: Fixed icecc example code
Fixes [YOCTO #6912]

The example used to make sure builders use the same sstate
signatures regardless of whether they use icecc was incorrect.
I updated the INHERIT_DISTRO line of the example to use the
append form in the name so it appends icecc, as suggested
by the bug submitter.

Reported-by: Peter Bergin <petan679@gmail.com>
(From yocto-docs rev: 0e2a7bef65c6ec2e817b0ead9a453ed8fb2d70fa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-17 15:17:09 +00:00
Scott Rifenbark
b91ca2c5fd documentation: Reverted back to the 1.76.1 XSL stylesheet
Using the 1.76.1 version in all the customization layers so
the manual revision tables will build with boxes.

(From yocto-docs rev: 2d2169ca205bc1d6e9fe49d2d54353ac1dfc8e99)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-17 15:17:09 +00:00
Scott Rifenbark
5db1e07e1c ref-manual, dev-manual, adt-manual, yocto-project-qs: scrub eglibc
Scrubbed out the occurrences of eglibc and replaced them with
glibc.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 2386b22b0d2de8ae7b67c884bf7b5b55df0e887e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-17 15:17:09 +00:00
Scott Rifenbark
f5d869d9d6 ref-manual: scrubbed eglibc
Removed the references to eglibc and replaced them with glibc.
This involved updating the example buildhistory output with
current examples as well.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: c4b20efc58d957221bce016e3900560d43592758)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-17 15:17:09 +00:00
Scott Rifenbark
1c34a41ad2 poky.ent: Updated two variables that had issues
The ECLIPSE_INDIGO_CDT_URL had an extra "indigo;" on the end.
The YOCTO_ECLIPSE_DL_URL was missing a "/" character.

(From yocto-docs rev: 914792ee56c4c7be497f34c660fd6c03544e5510)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-17 15:17:08 +00:00
Alexandru DAMIAN
9bef9b9ddd toaster.bbclass: use the openembedded-core name
Fixing the bug where the openembedded-core name was registered
as "meta" in toaster.

[YOCTO #7317]

(From OE-Core rev: ab9f17893c4b004906ec232da300915145c125e0)

(From OE-Core rev: 3cd31ef5bb5d0bd9245956d16680ba9d9668817e)

Signed-off-by: Alexandru DAMIAN <alexandru.damian@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-13 14:47:26 +00:00
Armin Kuster
016d607e23 groff: fix QA issue with rdepends
WARNING: QA Issue: groff requires /bin/sed, but no providers in its RDEPENDS [file-rdeps]

(From OE-Core rev: 980430f4439b9962a75c698ad19bbab8b9979d58)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Khem Raj
42195a3ff5 systemd: Backports fixes to 216
Fix systemd-timesyncd assertion

when networkd is disabled we now do not
create /run/systemd/netif/links, but timesyncd needs it. So let's
manually create this file when networkd is disabled so timesyncd
can still function.

When enabling systemd-timesyncd we need the systemd-timesync user

Backport patches to enable timesyncd when resolved and networkd
are disabled

replace the resolv.conf symlink patch with a proper backport

Change-Id: I53f1a53eec4e4a4dbdfb7e8cd155d544ee5d81ec
(From OE-Core rev: 2a675bc63b22724f12e6ed6ff58d0f1d1e0d3b29)

(From OE-Core rev: c53b22e593fe13edacddf2ecd4d5df67abd74905)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Armin Kuster
15ff8423dc busybox: cve-2014-9645
modprobe,rmmod: reject module names with slashes

(From OE-Core rev: 815a7b6fbf3b0cf95f5464bca687d97366d7ed6a)

(From OE-Core rev: 698ef44edcff82457e29baef1dd364d1fecf892b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Richard Purdie
83767cbe90 scripts/send-error-report: Set exit code if error occurs
If an error occurs, set an error exit code so the world knows about it. This fixes
issues where the autobuilder doesn't notice these failures.

[YOCTO #7265]

(From OE-Core rev: b219377defc9517af360986352bd7da1a7906f10)

(From OE-Core rev: 88b9a9dd491d6803a72c497cf674434da14704b7)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Ross Burton
8b3b21494f security_flags: disable PIE on expect
Disable PIE in expect as otherwise it tries to link the shared library as an
executable.

(From OE-Core rev: fe1f5c90eede593100fe57630d39cf329e59ef8f)

(From OE-Core rev: fdf9e8e4679bb04e89222034ba999ae3bee63938)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Li xin
17b4994c5f elfutils_0.148.bb: CVE-2014-9447 fix
Reference: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9447

(From OE-Core rev: c992868a989926eac6c4b78a6bb9729bce54f2ed)

(From OE-Core rev: 1f0f66620ab6969620a1858ed2f57b6262a81ef9)

Signed-off-by: Li Xin <lixin.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Sona Sarmadi
d97f1c2697 python: Disables SSLv3
This is related to "SSLv3 POODLE vulnerability" CVE-2014-3566

Build python without SSLv3 support when openssl is built without
any support for SSLv3 (e.g. by adding EXTRA_OECONF = " -no-ssl3" to
the openssl recipes).
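
As a quick sanity check on the target (a sketch, assuming the
Debian-style patch, which only defines PROTOCOL_SSLv3 when OpenSSL
still supports SSLv3):

$ python -c "import ssl; print hasattr(ssl, 'PROTOCOL_SSLv3')"
False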

Backport from:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=768611#22
[python2.7-nossl3.patch] only Modules/_ssl.c is backported.

References:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=7015
https://bugzilla.yoctoproject.org/show_bug.cgi?id=6843
http://bugs.python.org/issue22638
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3566

(From OE-Core rev: 3462cac82cf0ab32e5e530f543b14fdcc211c678)

(From OE-Core rev: 443f3add0179a1015a4ce59cb68840f9783e3782)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Armin Kuster
a1f594881d tzdata: update to 2015a including leap second
Changes affecting future time stamps

The Mexican state of Quintana Roo, represented by America/Cancun,
will shift from Central Time with DST to Eastern Time without DST
on 2015-02-01 at 02:00.  (Thanks to Steffen Thorsen and Gwillim Law.)

Chile will not change clocks in April or thereafter; its new standard time
will be its old daylight saving time.  This affects America/Santiago,
Pacific/Easter, and Antarctica/Palmer.  (Thanks to Juan Correa.)

New leap second 2015-06-30 23:59:60 UTC as per IERS Bulletin C 49.
(Thanks to Tim Parenti.)

Changes affecting past time stamps
Iceland observed DST in 1919 and 1921, and its 1939 fallback
transition was Oct. 29, not Nov. 29.  Remove incorrect data from
Shanks about time in Iceland between 1837 and 1908.

Some more zones have been turned into links, when they differed
from existing zones only for older time stamps.  As usual,
these changes affect UTC offsets in pre-1970 time stamps only.
Their old contents have been moved to the 'backzone' file.
The affected zones are: Asia/Aden, Asia/Bahrain, Asia/Kuwait,
and Asia/Muscat.

(From OE-Core rev: 4ee327602a0cc3200b5d6490ef2f115768cff2f4)

(From OE-Core rev: 93128f6cdad7ceb1bdd1cf88f0054765f615fbd0)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:10 +00:00
Armin Kuster
991597a272 tzcode: update to 2015a leap second changes too
Changes affecting code

tzalloc now scrubs time zone abbreviations compatibly with the way
that tzset always has, by replacing invalid bytes with '_' and by
shortening too-long abbreviations.

tzselect ports to POSIX awk implementations, no longer mishandles
POSIX TZ settings when GNU awk is used, and reports POSIX TZ
settings to the user.  (Thanks to Stefan Kuhn.)

Changes affecting build procedure

'make check' now checks for links to links in the data.
One such link (for Africa/Asmera) has been fixed.
(Thanks to Stephen Colebourne for pointing out the problem.)

Changes affecting commentary
The leapseconds file commentary now mentions the expiration date.
(Problem reported by Martin Burnicki.)

Update Mexican Library of Congress URL.

(From OE-Core rev: ccc543570b96bb1f1efefd5ed79469da142cafd3)

(From OE-Core rev: c3f8855b6f09fd4efd187db0080c7f7ed93a6f70)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Ross Burton
eebe97cd35 python: remove spurious nativesdk dependency
There's no need to add a dependency on python-crypt_class-native to
nativesdk-openssl as the general dependency there is transformed appropriately.

Presumably this is cruft from back when SDK packages were suffixed instead of
prefixed, and there were mapping problems.

(From OE-Core rev: f0b1eab1ef24fabac98609eb9d314f618dca713a)

(From OE-Core rev: 597ce0c2b77fb5d4fec7967704a3bf40f639d5a7)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Ross Burton
50572b0104 python: ensure all of Python is installed in nativesdk
If any part of Python gets installed in a SDK, we need to ensure that all of
Python gets installed to avoid replacing python in the environment with a
minimal package set.

[ YOCTO #6735 ]

(From OE-Core rev: e36ff98a7a4da478bb886f61005cd72a0b5a9c0e)

(From OE-Core rev: bb4270020852ea19e40635d306e0bf7de6ec225a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Martin Jansa
769fb519be xorg-app: add x11 to required DISTRO_FEATURES and cleanup dependencies
(From OE-Core rev: 1cf0245344ce272e7330cfe1b04a0ed7bd18e8f5)

(From OE-Core rev: 8e2f9d9e25c7db9a0912219445bfdf8af3a63002)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Paul Gortmaker
8e02546ddd packagegroup-self-hosted: package all of Python
Based on commit 745dfbc869fd593d1b92e2bc9c01d589ab21ade3
"buildtools-tarball: package all of Python", we do the same here
for packagegroup-self-hosted.

The switch to the fetcher that added BeautifulSoup revealed
a shortcoming in the python packaged for self-hosting (missing
htmlentitydefs).  Here we fix it in the same way as was done for
buildtools-tarball and include python-modules rather than all the
individual little chunks.

(From OE-Core rev: 4afbc5f7b2b8a6587110b16cda90e72c3e73a506)

(From OE-Core rev: 55073276dabf0a996209296e0096ff1a93a3e1e5)

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Mark Hatle
dc565377c6 python-smartpm: Fix attemptonly builds when file conflicts occur
[YOCTO #7299]

When file conflicts occur, the RPM transaction aborts.  Instead of
simply accepting the failure, we now identify, capture, and remove
the offending package(s) from the transaction and retry.

(From OE-Core rev: cd475aea5f5bc4b6a2dd3e576070a117ae079597)

(From OE-Core rev: ce09e1be344abce981a40feb9970c3f86cfdc0ee)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Enrico Scholz
8e11a94b90 image_types.bbclass: manage 'cpio_append' directory
For cpio images, do_rootfs() can operate on a dirty '${WORKDIR}/cpio_append'
directory which contains e.g. files from previous builds.  This can cause
unwanted files in the image or can break the build.

E.g. when there is a cpio_append/init -> /sbin/init symlink, the
'ln -sf' can fail due to SELinux restrictions:

| $ ls -la cpio_append/init
| lrwxrwxrwx. 1 ensc ensc 10 22. Jan 16:26 cpio_append/init -> /sbin/init
|
| $ strace ln -sf /sbin/init cpio_append/init
| ...
| stat("cpio_append/init", 0x7fffbb9ca310) = -1 EACCES (Permission denied)
| exit_group(1)                           = ?

The patch cleans up 'cpio_append' before executing the 'do_rootfs' task by
adding it to 'cleandirs'.  An alternative implementation (which avoids
creation of this empty dir for non-cpio images) might remove it within
IMAGE_CMD_cpio, but this might break builds where people rely on the
existence of this directory (e.g. to add local files).
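
The 'cleandirs' varflag makes bitbake recreate the listed directories
empty before the task runs, so the change presumably amounts to a
one-liner of this shape:

do_rootfs[cleandirs] += "${WORKDIR}/cpio_append"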

(From OE-Core rev: 4db3cc2360289c062fa0df4678f2f2ef990f0c1a)

(From OE-Core rev: 5a5802b15d965f62bf61697e1dbffab89702da96)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Mike Looijmans
cfdeaeeb77 package.bbclass: Let PR server update PKGV, not PV
PV is the package version as we need it to be during the build. PKGV is the
final version as it ends up in the package, and defaults to PV.

The packager handled builds without PR-server by replacing the AUTOINC string
in PKGV, but when the PR-server is being used, the script replaces the contents
of PKGV with the PV if the PV contains "AUTOINC". Thus the packager overrides
any change to PKGV the recipe might have made.
This breaks classes like gitpkgv that provide a correctly numbered PKGV:
the number as calculated by that class will simply be replaced with a
0-based index from the PR-server.

This patch makes the packager look at the PKGV version instead of the PV, and
update the PKGV only based on the PKGV contents as set by the recipe.
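
For reference, PKGV defaults to PV in bitbake.conf, and a class such as
gitpkgv then overrides it (second line illustrative):

PKGV ?= "${PV}"
PKGV = "1.0+gitr${GITPKGV}"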

See also the discussion here:
http://lists.openembedded.org/pipermail/openembedded-core/2015-January/100329.html

From investigating the history of the code and changes in the past year, the
use of "pv" instead of "pkgv" appears to be just an oversight, introduced in:
commit b27b438221e16ac3df6ac66d761b77e3bd43db67 "prs: use the PRServer to replace the BB_URI_LOCALCOUNT functionality"
A later commit 865d001de168915a5796e5c760f96bdd04cebd61 "package/prserv: Merge two similar functions into one"
silently fixed this only for the case without PR-server by using pkgv there.

(From OE-Core rev: 7895c0a67d381ff66668fca5207bd196f36c91db)

(From OE-Core rev: c524c5cfdfe0395b601cb9980e0bbd69b4dc9afa)

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Richard Purdie
51d5204084 package/prserv: Merge two similar functions into one
Having these two separate functions handling PR values seems pointless,
and worse, there are impossible code branches mixed within them.

Merge them into one function and tweak comments so at least you
don't have to read both functions to figure out what is going on.

This does restructure the conditionals to try and aid readability.

(From OE-Core rev: 865d001de168915a5796e5c760f96bdd04cebd61)

(From OE-Core rev: 508f7dfb301db30964bf77d370a9e48cb7f354f8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Richard Purdie
909a80fe34 net-tools: Fix rerunning of do_patch task
Rerunning the do_patch task currently fails. The code is nearly correct
but needs to remove the quilt ".pc" directory and move the secondary
one into place in order to rerun, not move it into the .pc directory
as the code currently does.

[YOCTO #7128]

(From OE-Core rev: 2a775ebbb175dd70fc7228607c306d4ccb9e4ba4)

(From OE-Core rev: d979f8589da79e02afac588e8b63d571f912f528)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:09 +00:00
Enrico Scholz
71164f126b image_types.bbclass: fixed 'init' creation for cpio images
When /init is a dangling symlink or a symlink to a file which cannot
be stat()ed on the build system (e.g. due to SELinux restrictions),
the '[ ! -e .../init ]' test will succeed, which causes the manual
creation of /init.

E.g. here:

| $ ls -la cpio_append/init
| lrwxrwxrwx. 1 ensc ensc 10 22. Jan 16:26 cpio_append/init -> /sbin/init
|
| $ strace /bin/test -e  cpio_append/init
| stat("cpio_append/init", 0x7fff374a9db0) = -1 EACCES (Permission denied)
| exit_group(1)                           = ?

To test for the existence of a file, both the '-L' and '-e' checks
must be executed, and to prevent SELinux noise the '-L' check should
happen before '-e'.
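
A sketch of the corrected test, with $rootfs standing in for the image
root directory (name illustrative):

# '-L' first: a symlink, even a dangling one, counts as present and
# its target is never stat()ed
if [ ! -L "$rootfs/init" ] && [ ! -e "$rootfs/init" ]; then
        ln -sf /sbin/init "$rootfs/init"
fi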

(From OE-Core rev: 2aa5d2880ee3578f4965f245addd365fb7b1c1ca)

(From OE-Core rev: f8d3bee7140cade4c70a1c6583fb6d9ef4063b92)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Mark Hatle
76ba20f9c0 gcc/libgcc-common.inc: Add missing 'fakeroot' to two tasks
Without the fakeroot flag the two tasks may create files or
symbolic links that end up being owned by the user and not
root:root as expected.

(From OE-Core rev: 7e9fd9d34a540fdfc1243d059d1f13f1d09864d2)

(From OE-Core rev: 86bee4a8d187bebe7f82d8ea1069ee610caac151)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Hongxu Jia
82c567748a distcc: fix initscript can not stop distcc daemon correctly
distcc's initscript uses the '--pid-file' option to record the daemon's
process id, but it didn't create that file, which caused starting and
stopping the distcc daemon to fail.

Following what Ubuntu 14.04 does, we create the pid file before start
and delete it after stop.
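
A minimal initscript sketch of that pattern (user name and paths
illustrative, not the exact Ubuntu script):

case "$1" in
  start)
        touch /var/run/distccd.pid
        # the daemon drops privileges, so it must own its pid file
        chown distcc /var/run/distccd.pid
        start-stop-daemon --start --pidfile /var/run/distccd.pid \
                --exec /usr/bin/distccd -- --daemon --pid-file /var/run/distccd.pid
        ;;
  stop)
        start-stop-daemon --stop --pidfile /var/run/distccd.pid
        rm -f /var/run/distccd.pid
        ;;
esac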

[YOCTO #7090]

(From OE-Core rev: 3b0d6c7c324f0283cfab10445d1a5a3bf2526598)

(From OE-Core rev: b9dc92ae6efbedcca4e21479412d6d4954c05bce)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Ting Liu
d3953fcb40 bind: fix typo chown->chmod
(From OE-Core rev: a6ee74222b43d0bb7fe9ef0072ede78f82a5e446)

(From OE-Core rev: 43cf6cd3b282226ce379a03a0d1fd5670c303648)

Signed-off-by: Ting Liu <ting.liu@freescale.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Richard Purdie
6f79439ce4 lib/oe/package: Ensure strip breaks hardlinks
Normally, strip preserves hardlinks. Given the way our hardlink (rather
than copy) functionality works, this is a disadvantage and leads to
non-deterministic builds. This adds a move into place after the strip
operation to ensure hardlinks are broken and build determinism is
restored.
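
A sketch of the move-into-place idea ($f illustrative): mv replaces the
inode, so any remaining hardlink to the old file is broken:

$ strip -o "$f.stripped" "$f" && mv "$f.stripped" "$f"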

(From OE-Core rev: 7c0fd561bad0250a00cef63e3d787573112a59cf)

(From OE-Core rev: a7d0115d286e0b6c7d1f22a201e61a2360e40eb2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Vincent Génieux
00af07c6b6 fix '[[: not found' error message using dash
Remove bash-specific syntax: '[[ test ]]' is replaced with '[ test ]'.
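
The difference is easy to demonstrate: '[[' is a bashism, while '[' is
POSIX:

$ dash -c '[[ -n foo ]] && echo ok'
dash: 1: [[: not found
$ dash -c '[ -n foo ] && echo ok'
ok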

Fixes [YOCTO #7112]

(From OE-Core rev: f2ff849d5936d3dc5e24301e0620da265df50fea)

(From OE-Core rev: 574f27be14a0f0be6a96e097903704c3492620a7)

Signed-off-by: Vincent Génieux <vincent2014@startigen.fr>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Richard Purdie
f30d619a63 oeqa/utils/decorators: Try and improve ugly _ErrorHandler tracebacks
Currently, if one module is skipped, any other module calling skipModule
causes tracebacks about _ErrorHandler not having a _testMethodName
method.

This reworks the code in a way to avoid some of the problems by using
the id() method of the objects. It also maps to the correct name
format rather than "setupModule" or just skipping the item entirely.

(From OE-Core rev: 78d3bf2e4c88779df32b9dfbe8362dc24e9ad080)

(From OE-Core rev: 4019ae1dc223a5ec925e49fb9c3ad33ce170cbab)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Robert Yang
2f598a8318 perf: fix for rebuilding
Fix for rebuilding error:
make[3]: *** No rule to make target `/path/to/sysroots/qemuarm64/usr/src/kernel/tools/lib/traceevent//trace-seq.c',
needed by `.trace-seq.d'.  Stop.
make[2]: *** [sub-make] Error 2

(From OE-Core rev: 9dafa571ed0a40d21a886dec7704c31150b21942)

(From OE-Core rev: c32bf128beb21a45b4a5f85c890c5ed058eb1d8e)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:08 +00:00
Fabien Proriol
d2c3e23af6 boost: Avoid using the local host configuration
(From OE-Core rev: 6586aeb3e26d58322c169dfef0228a425fe5d3fa)

(From OE-Core rev: 028400fb47e6462d702c8822b9a98b4310f9ed6f)

Signed-off-by: Fabien Proriol <fabien.proriol@jdsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Paul Eggleton
ceb5a66d0b gcc: ensure target gcc headers can be included
There are a few headers installed as part of gcc-runtime (omp.h,
ssp/*.h). Being installed from a recipe built for the target
architecture, these are within the target sysroot and not
cross/nativesdk; thus they weren't able to be found by gcc with the
existing search paths. Add support for picking up these headers
under the sysroot supplied on the gcc command line in order to
resolve this.

Thanks to Richard Purdie for giving me a number of pointers during
fixing this issue.

Fixes [YOCTO #7141].

(From OE-Core rev: 5c87bb9ac2b35b3f8cf2b7d3e4507e7013115162)

(From OE-Core rev: ce3f7777fd1d057f399f3f5df8df620e7eaf6cc2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Armin Kuster
620718b05d glibc: CVE-2014-9402 endless loop in getaddr_r
The getnetbyname function in glibc 2.21 and earlier will enter an infinite loop
if the DNS backend is activated in the system Name Service Switch
configuration, and the DNS resolver receives a positive answer while processing
the network name.

(From OE-Core rev: f03bf84c179f69ef4800ed92a4a9d9401d0e5966)

(From OE-Core rev: 7e3f4ddd001f9c50a49d8ba5ab548af311e6b51f)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Gary Thomas
8b255bd491 perl: Backport fix for bug #123591
This patch fixes a crash in perl when using formatted strings @...

(From OE-Core rev: 6ff3776bb7f1a7ba2fc641bfd9b8546c4bb02466)

(From OE-Core rev: 598d8f869a145ced01d059b30f8307df714d1938)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Robert Yang
33cda871d3 neard: fix parallel issue
There might be no src dir if the src/builtin.h rule runs earlier; create it
to fix the race issue:
src/genbuiltin nfctype1 nfctype2 nfctype3 nfctype4 p2p > src/builtin.h
/bin/sh: src/builtin.h: No such file or directory

(From OE-Core rev: 4b6762b924a561febede13b85330309dbf75da19)

(From OE-Core rev: 3d0f678cb5796066798394238be4b12b09d2a983)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Dan McGregor
f7ba14a571 dpkg: fix host contamination
Force dpkg to use "tar" on the target.

The dpkg configure script looks for gnutar, gtar, and
tar in order. If it finds gnutar or gtar on the host
it expects to use that as its tar program on the target.
Without this, if gtar exists (as it does on my system) then
dpkg will consistently fail on the target with an error about
gtar not being found.

(From OE-Core rev: 45bcb1ea92f244df4745aca6f9f9556c43e9b6ce)

(From OE-Core rev: 781d7e7fdff9d41dc962b7d35809396051a47303)

Signed-off-by: Dan McGregor <dan.mcgregor@usask.ca>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Tom Zanussi
bf32370c5e perf: Disable perf-libunwind
It hasn't actually been enabled anyway: 'Disabling post unwind,
no support found.' For now, turn it off because of [YOCTO #7129].

Fixes [YOCTO #7129].

(From OE-Core rev: d8c839afa96925b27909eb5a7b89ee83c87924bc)

(From OE-Core rev: 9bd6079fcea79d6a83832d1faa8bf566aecaa532)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Tom Zanussi
2ec5a5473e perf: Add libdw unwind support to perf-libunwind feature
perf can use either libdw or libunwind dwarf unwinders, or neither.
The perf-libunwind feature implies that, when disabled, neither should
be used, so have it disable both the libdw and libunwind DWARF
unwinders.

This fixes [YOCTO #7129].

(From OE-Core rev: 868dd446fa2732858813e96dd8f3f64b2a9ec339)

(From OE-Core rev: 8ae5965d8e9abf8cda37ec7efe236c285a08d7fa)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:07 +00:00
Ross Burton
dddb84aae0 socat: forcibly disable use of libbsd
Socat will look for openpty() in BSD headers before Linux headers, so if libbsd
is present at configure time then that will be used.  We don't need to depend on
libbsd though, and leaving it floating can cause build errors, so tell configure
that the libbsd header isn't present.
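
In recipe terms this is typically done by seeding the configure cache;
the exact cache variable name here is an assumption:

EXTRA_OECONF += "ac_cv_header_bsd_libutil_h=no"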

(From OE-Core rev: 7defa2bb5b28ea69f749363a607a114cfa4ba4ed)

(From OE-Core rev: eab55e22c685f9192ed1abd7a559aeb13eab41fd)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Robert Yang
6c3ccc8ae9 guile: fixed installed-vs-shipped error
Fixed:
guile-2.0.11: guile: Files/directories were installed but not shipped
  /usr/lib64/libguile-2.0*-gdb.scm [installed-vs-shipped]

This is because when there is no file in the directory:
for f in libguile-2.0*; do
    [snip]
done

The f would then be the literal string libguile-2.0* itself; making sure
the libs are installed first fixes the problem.
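
This is standard sh globbing behaviour: a pattern with no matches is
passed through as the literal pattern, e.g.:

$ mkdir empty && cd empty
$ for f in libguile-2.0*; do echo "$f"; done
libguile-2.0*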

(From OE-Core rev: adf32ca3d0657cb5d363ae7a3fdb539c6627cf39)

(From OE-Core rev: f6305b451fd5f13e62642b8ac34edc0e6ab19542)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Martin Jansa
0cf128ca1b package.bbclass: Fix support for private libs
* n is a tuple since this commit:
  commit d3aa7668a9f001044d0a0f1ba2de425a36056102
  Author: Richard Purdie <richard.purdie@linuxfoundation.org>
  Date:   Mon Jul 7 18:41:23 2014 +0100
  Subject package.bbclass: Improve shlibs needed data structure

  since then 'n in private_libs' was always false and private libs
  were always processed
* this is bad when we have libfoo in private libs but also some package
  providing libfoo: that way we ship our own libfoo.so, but together
  with a runtime dependency on the package providing libfoo
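
A minimal sketch of the broken membership test (data shapes
illustrative, based on the description above):

needed = [("libfoo.so.1", "/path/to/file", ())]  # tuples since d3aa7668
private_libs = ["libfoo.so.1"]
for n in needed:
    # 'n in private_libs' is always False: a tuple never equals a string
    if n[0] in private_libs:  # the fix: compare the library name element
        continue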

(From OE-Core rev: ec1d379683cedca4be1c252475d02c8041227142)

(From OE-Core rev: c78a9246a1aae14a1598d4c801faaf27dd31f66a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Chong.Lu@windriver.com
86da1430b7 file: CVE-2014-9620 and CVE-2014-9621
CVE-2014-9620:
Limit the number of ELF notes processed - DoS
CVE-2014-9621:
Limit string printing to 100 chars - DoS

The patch comes from:
6ce24f35cd
0056ec3225
09e41625c9
af444af073
68bd8433c7
dddd3cdb95
445c8fb0eb
ce90e05774
65437cee25

[YOCTO #7178]

(From OE-Core rev: 0e4f0f893de2c0fac444b779b2b3028fd79e6048)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Robert Yang
2a53df980d pax-utils: RDEPENDS on python
python script:
pax-utils/usr/bin/lddtree

(From OE-Core rev: b972e7fc5774a6daf92511e897919ebad29f405b)

(From OE-Core rev: c45486fb91d53b427b93103392a470d169e39767)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Robert Yang
a1a8857fa6 parted: parted-ptest RDEPENDS on python
python scripts:
parted-ptest/usr/lib64/parted/ptest/tests/gpt-header-move
parted-ptest/usr/lib64/parted/ptest/tests/msdos-overlap

(From OE-Core rev: 80262094fde6a44afd954bbecc7e016243661b81)

(From OE-Core rev: 7bac0f98d0e8a45dbaafcff0c5f3382f0cf298a3)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Richard Purdie
6a07977e76 cross-canadian/meta-environment: Allow modification of TARGET_OS to be optional
There are some cases where we want the manipulation cross-canadian
performs on TARGET_OS; there are also cases, like meta-environment,
where we do not want this manipulation.

We did try and use immediate expansion to avoid this problem and it
works in the non multilib case. If we have a multilib that used an
extension, like for example:

require conf/multilib.conf
MULTILIBS = "multilib:lib32 multilib:lib64"
DEFAULTTUNE = "mips32r2"
DEFAULTTUNE_virtclass-multilib-lib32 = "mips64-n32"
DEFAULTTUNE_virtclass-multilib-lib64 = "mips64"

then the n32 extension case will be misconfigured.

It turns out saving an unexpanded variable is hard. The best I could
come up with was:

SAVEDTOS := "${@d.getVar('TARGET_OS', False).replace("{", "*")}"

and then

localdata.setVar("TARGET_OS", d.getVar("SAVEDTOS", False).replace('*','{'))

which is rather evil, I'd challenge someone to come up with a nicer way
of making it work though!

Rather than the above madness, we modify cross-canadian to make the
problematic code conditional.

This fixes the original issue (where a linux-gnuspe target was seeing
'linux') of
http://cgit.openembedded.org/openembedded-core/commit/?id=0038634ee6e2b6035c023a2702547f20f67c103a
but also fixes the multilib one.

(From OE-Core rev: 85ff3d6491c54aa712ed238c561742cda4f4ba07)

(From OE-Core rev: 78a2eeea4e2ef867437c315337b9188e1f3fa759)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Dmitry Eremin-Solenikov
0de0abd9e4 icecc.bbclass: properly handle disabling of icecc
Always use use_icc to check whether IceCC should be enabled. Move the
ICECC_DISABLED variable check into the use_icc function. Also, while we
are at it, fix the condition in the icc_is_allarch function.

(From OE-Core rev: 20b0168da47d6e30fcbaf6adab3bde0d398d0d00)

(From OE-Core rev: 72ff97a2ec225bafb83be56ca1b8c3c4e68a0c55)

Signed-off-by: Dmitry Eremin-Solenikov <dmitry_eremin@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:06 +00:00
Kai Kang
8010a0f2cf openssh: deliver ssh-copy-id
Deliver script ssh-copy-id from openssh which is useful to add an
authorized ssh key.

(From OE-Core rev: 16562034a2c28cbfc6c90f9324c42c08e0655b7d)

(From OE-Core rev: 00638cc0ca8213f6aac154eccf29ee0213c0a7e9)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Ross Burton
37d5b56cb6 systemd: add missing RDEPENDS
systemd-ptest also needs a Python interpreter.  Also remove the redundant
comment.

systemd-kernel-install is a bash script that can't be trivially ported to POSIX
sh.

(From OE-Core rev: 9f6b34493d332f9eff54c3eb2da9483a344e6d3c)

(From OE-Core rev: 66900dc504d8e8af5439a01f94c7853e418fd0e3)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Saul Wold
d31f7ec85b security_flags: disable pie support for libaio, blktrace and ltp
libaio, when built with -pie and -fpie, does not link correctly with
blktrace or ltp, so we need to disable those flags until a better
solution comes along.

(From OE-Core rev: 4fbf13a6c28fc1170a4defbf50032546a14eaa59)

(From OE-Core rev: b93c62e03724defa6a1465575c7db95485be37fb)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
André Draszik
ad9ba796fa openssl: fix hard paths in native openssl
This causes the package to not be relocatable from sstate.

The OpenSSL binaries respect a few environment variables for determining
locations of files, so we now use these to point the binaries to the
relocated locations.
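
For example, OPENSSL_CONF and SSL_CERT_DIR are among the variables the
tools honour, so a relocated install can be pointed at its own data
(paths illustrative):

export OPENSSL_CONF=/new/sysroot/usr/lib/ssl/openssl.cnf
export SSL_CERT_DIR=/new/sysroot/usr/lib/ssl/certs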

[YOCTO #6827]

(From OE-Core rev: 771d3123331fbfab1eb9ce47e3013eabcb2248f5)

(From OE-Core rev: 4d8b1f51d5910e12c0189b7b3df31f4d8fd7bffb)

Signed-off-by: André Draszik <adraszik@digisoft.tv>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Ting Liu
f7fc59f2fd valgrind: build with altivec only if it is supported
(From OE-Core rev: 2471f9b32a96bcb64a5a04d53456818cad57befe)

Signed-off-by: Ting Liu <ting.liu@freescale.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Ting Liu
f70b8b393d bitbake.conf: add PKGDATA_DIR to BB_HASHBASE_WHITELIST
In meta/conf/bitbake.conf, PKGDATA_DIR defaults to:
PKGDATA_DIR = "${STAGING_DIR_HOST}/pkgdata"

But in meta/conf/multilib.conf, PKGDATA_DIR is set as:
PKGDATA_DIR = "${STAGING_DIR}/${MACHINE}/pkgdata"

When multilib enabled, linux-libc-headers cache will be machine
specific:
$ bitbake-diffsigs sstate-cache/1a/sstate:linux-libc-headers:ppce6500-poky-linux:3.17.7:r0:ppce6500:3:1a0c3934d91479fd7242a5b1d407d155_package.tgz.siginfo sstate-cache/28/sstate:linux-libc-headers:ppce6500-poky-linux:3.17.7:r0:ppce6500:3:28c918e8f9f4a4cfceb3a38b258f7501_package.tgz.siginfo
basehash changed from 8d3158bbddcee612fa30badd05f47b8e to 68ac258fc6c8e489f360fde3123a5894
Variable MACHINE value changed from 'b4420qds' to 'b4860qds'
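
The fix is then presumably a one-liner of this shape in bitbake.conf:

BB_HASHBASE_WHITELIST += "PKGDATA_DIR"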

(From OE-Core rev: 02af85cbaac660e92c760db41a1efce9e359248f)

Signed-off-by: Ting Liu <ting.liu@freescale.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Ting Liu
72a43adb4d libunwind: Fix test case link failure on PowerPC with Altivec
(From OE-Core rev: 519c42d6c32c38d20411afbbd879850d4e6ae3b0)

Signed-off-by: Ting Liu <ting.liu@freescale.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Richard Purdie
64090cf0d8 libxml2: Backport fix for CVE introduced entity issues
The CVE fix introduced problems with entity handling; we observed this
when building the Yocto Docs in particular. Backport the fix from
upstream so we can build our docs correctly.

[YOCTO #7134]

(From OE-Core rev: af501bd51f9a86edd34e0405bc32dabe21312229)

(From OE-Core rev: 9aa93835d19159ffd7cb212680044fc7f914a68f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:05 +00:00
Joe MacDonald
41cca6fbe7 libxml2: fix CVE-2014-3660
It was discovered that the patch for CVE-2014-0191 for libxml2 is
incomplete.  It is still possible to have libxml2 incorrectly perform
entity substitution even when the application using libxml2 explicitly
disables the feature.  This can allow a remote denial-of-service attack on
systems with libxml2 prior to 2.9.2.

References:
    http://www.openwall.com/lists/oss-security/2014/10/17/7
    https://www.ncsc.nl/actueel/nieuwsberichten/kwetsbaarheid-ontdekt-in-libxml2.html

(From OE-Core rev: 643597a5c432b2e02033d0cefa3ba4da980d078f)

(From OE-Core rev: de7bc57398aaeb84fc9370d025b87f7711986ada)

Signed-off-by: Joe MacDonald <joe_macdonald@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:04 +00:00
Maxin B. John
de51204518 coreutils: Fix CVE-2014-9471
Fiedler Roman discovered that coreutils' parse_datetime() function
has some flaws that may be exploitable if the date(1), touch(1),
or potentially other programs, accept untrusted input for certain
parameters. While researching this issue, he discovered that it
was independently discovered by Bertrand Jacquin and reported at
http://debbugs.gnu.org/cgi/bugreport.cgi?bug=16872

$ touch '--date=TZ="123"345" @1'
*** Error in `touch': free(): invalid pointer: 0x00007fffd33e55e0 ***
Aborted

$ date '--date=TZ="123"345" @1'
date[394]: segfault at 7fff24000000 ip 00007f6dd5b73404 sp 00007fff27cce8f8
error 4 in libc-2.20.so[7f6dd5af7000+199000]
Segmentation fault

(From OE-Core rev: 54debe63cbd38dba56895541c434f895e158f70b)

Signed-off-by: Maxin B. John <maxin.john@enea.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:04 +00:00
Saul Wold
eed2260137 glibc: Fix up minimal build with libc-libm
This addresses 2 issues discovered trying to build a minimal libc with
libm option.  By default nscd was always being built and without inet
enabled there were missing symbols.

[YOCTO #7108]

(From OE-Core rev: 89649881bcd0e76d6ee7c85c30e75bb01e1c004f)

(From OE-Core rev: 965943176c580b7943bb4d94efd58b8818c04919)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:40:04 +00:00
Paul Eggleton
81b8da4c88 poky.conf: add file-rdeps to WARN_QA
This was added to the default value of WARN_QA in insane.bbclass in
OE-Core, but we missed adding it to the value set in poky.conf.

(From meta-yocto rev: 81d74d30d4a034801306a124d0036999613fdfce)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
289ccaa24d bitbake: cooker: Shut down the parser in error state
If the cooker is in an error state, we shouldn't continue to try parsing.
This fixes an issue where an invalid PR server is detected when bitbake
is started and ensures bitbake exits cleanly rather than hanging.

[YOCTO #6934]

(Bitbake rev: 923fc5ee0ace02cc29110bff502a2c65e6bdebf0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
646cd97d24 bitbake: bitbake-worker: Use setsid() rather than setpgid()
The bug has a long discussion of this. Basically, in some environments,
the exact details of which aren't understood, a Ctrl+C signal to the
UI is being transmitted to all the process children. Looking at the output
of "ps ax -O tpgid", its clear the main process is still the terminal
owner of these processes.

stty -a on a problematic system shows: "-ignbrk brkint"
and on a working system shows: "-ignbrk -brkint"

The description of brkint would suggest this is the problem, setting up
that terminal environment wasn't able to reproduce the problem though.
It was confirmed that using setsid() caused the problem to be resolved
and is probably the right thing to be doing anyway, so lets do it.

[YOCTO #6949]

(Bitbake rev: 81d90389edd4d4778d3aec86e0775ab98dd1496e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
384863c7bc bitbake: cache/fetch2/siggen: Ensure we track include history for file checksums
Currently, if you reference a file url, its checksum is included in the
task hash; however, if you change to a different file at a different
location, perhaps taking advantage of the FILESPATH functionality, the
system will not reparse the file in question and change its checksum to
match the new file.

To correctly handle this, the system not only needs to know whether the
existing file still exists, but must also check the existence of every
file it would have looked at when computing the original file.

We already do this in the bitbake parsing code for class inclusion. This
change uses the same technique to log the file list we looked at and
if files in these locations exist when they previously did not, to
invalidate and reparse the file.

Since data stored in the cache is flattened text, we have to use a string
form of the data and split on the ":" character which is ugly, but is
an internal detail we can improve later if a better method is found.

The cache version changes to trigger a reparse since the previous
cache data is now incompatible.

[YOCTO #7019]

(Bitbake rev: 67ebf368aab8fbe372374190f013bdf2c83c59de)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
0e65f09580 bitbake: wget: Add localpaths method which gives localpath with history
In some cases, for cache purposes, we not only need to know which file
is going to be used but also which paths were considered. Add a
localpaths method which includes the history.

The core which() function already supports this; this just extends
the function to preserve the extra data we need. localpath becomes
just a special case of the version with history.

(Bitbake rev: d71407dbbf82659f245e002ecaad02b26838f455)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
43a04ddc7f bitbake: ast: Add error when trying to use dash in sh function names
A dash character is illegal in function names in sh (but not bash). Since
our shell tasks run under sh and the shell parser is sh based, EXPORT_FUNCTIONS
won't work with class names containing a dash.

We can't change sh, but we can ensure the user is warned about the
problem straight away.
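
With dash as /bin/sh, the failure mode looks like:

$ dash -c 'my-func() { :; }'
dash: 1: Syntax error: Bad function name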

[YOCTO #7006]

(Bitbake rev: 879fe20f47ba75f4afb3484d4398d5fd60431e12)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:50 +00:00
Richard Purdie
8abe4b8b2a bitbake: siggen: Ensure taskdata default functions exist in base class
The get/set_taskdata functions are now part of the API of the class,
ensure they exist in the base class definition so the noop handler
works.

[YOCTO #7233]

(Bitbake rev: d571149cd82028c5e05cca33a3007ce1b779a654)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-11 17:36:49 +00:00
Tom Zanussi
53b33d85de yocto-bsp: Add branch to SRC_URI for custom kernels
Without 'branch' in the SRC_URI, a SRCREV specified for a non-master
KBRANCH will result in a fetch failure since the branch tested by the
fetcher will default to master, which doesn't contain the SRCREV.
This fixes the problem by adding branch=KBRANCH to the SRC_URI.
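
The generated kernel recipe then ends up with something of this shape
(URL and branch illustrative):

KBRANCH = "standard/mybsp"
SRC_URI = "git://git.example.com/linux-custom;branch=${KBRANCH}"
SRCREV = "${AUTOREV}"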

Fixes [Yocto #6518].

(From meta-yocto rev: 71cf7a7866f192caa97cf90b408ac6b9b6ddddf8)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-02-06 14:55:59 +00:00
Chen Qi
9fc095a439 image.bbclass: avoid boot error on read-only systemd image
New version of systemd implements a new feature of updating /etc
or /var when needed at boot. For details, please see link below.

0pointer.de/blog/projects/stateless.html

For now, at boot time, the systemd-sysusers.service would update user
database files (/etc/passwd, /etc/group, etc.) according to the configuration
files under /usr/lib/sysusers.d. This step is necessary for other systemd
services to work correctly. Examples of such services are systemd-resolved
and systemd-tmpfiles-setup.

The problem is that on a read-only file system, that is, if /etc is read-only,
the user database files could not be updated, causing failures of services.

This patch fixes this problem by adding users/groups at rootfs time.

(From OE-Core rev: c7b9611ad0ead17a624fc73a60c321ff249c2214)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-15 16:01:47 +00:00
Richard Purdie
e13f2681b7 rm_work: Fix RM_WORK_EXCLUDE for image/sdk recipes
A previous change meant image/sdk recipes were removed unconditionally
by the class and did not respect RM_WORK_EXCLUDE. This fixes that
problem.

[YOCTO #7114]

(From OE-Core rev: 93d79fc162bd49387958e9e4d898dc4ba50d20b0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-08 09:25:03 +00:00
Richard Purdie
6dd21a9f15 build-appliance-image: Update to dizzy head revision
(From OE-Core rev: 64efe68c731d202059880b2fb61a282b9ad1c3e6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:18:52 +00:00
Richard Purdie
b6e41cf744 poky.conf: Update DISTRO_VERSION for 1.7.1
(From meta-yocto rev: fa38fe6665bdc9acba77dd2ddff64077d188b0ee)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:17:24 +00:00
Fredrik Svensson
cc6d968241 bitbake: fetch2/git: Allow other namespaces than refs/heads to be searched.
This makes it possible to fetch Gerrit review references, which are
normally stored under refs/changes.

Please disregard previous patch with the same topic.
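
Assuming the full ref is passed through the usual 'branch' parameter,
usage would look something like (values illustrative):

SRC_URI = "git://review.example.org/project;branch=refs/changes/34/1234/2"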

(Bitbake rev: ab8cbf2a71750f5ea36e218036b050857301607b)

Signed-off-by: Fredrik Svensson <fredrik.svensson@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:15:48 +00:00
Koen Kooi
74bb618474 tcmode-default.inc: use GCCVERSION for gcc-source
This was missing, leading to gcc-source-<foo> being built when using gcc-cross-<bar> with GCCVERSION=bar.

(From OE-Core rev: fa249f347b3453537ee6aaea0d3bb75cfe7a75d1)

(From OE-Core rev: e7f94f589b17c64ae2fd72c8dda41c113ff399c9)

Signed-off-by: Koen Kooi <koen.kooi@linaro.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:44 +00:00
Ross Burton
ab5c5e3a0e default-distrovars: add gcc-source recipe to the GPLv3 whitelist
gcc-source is a convenience recipe to save duplicate copies of the GCC source
tree and should be whitelisted for GPLv3 avoidance along with the rest of GCC.

(From OE-Core rev: fd58d0e920707198caf62ffef50b67c7c7882c69)

(From OE-Core rev: 6ec22ea1d40256c0b780c6ba533413684fa02c8e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:43 +00:00
Ross Burton
8e354428a2 gcc: stub do_fetch instead of removing it
Whilst gcc doesn't have any source to fetch, it still needs a fetch task so that
a world fetch can run without errors.  So instead of deleting the fetch task,
stub it.

(From OE-Core rev: 8e68ebbddc2bc41eb6cb607c51d6a80c54c4199d)

(From OE-Core rev: ebe7b52c90b8cc7626f93d6771412848825905ce)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:43 +00:00
Richard Purdie
a6f2e49038 gcc: Rework shared work
The current implementation of shared work for gcc is at best confusing. It relies
on the fetch/unpack/patch tasks having exactly the same stamps and if this gets
broken for some reason, its hard to figure out what the problem is. It also
leads to complex code in bitbake.

The benefits of shared work for gcc are clear but a better approach is needed. This
patch adjusts things so that a single new recipe (gcc-source) provides the
fetch/unpack/patch/preconfigure tasks, the rest of gcc simply depends on these tasks
and have no fetch/unpack/patch tasks of their own.

This means we should get the significant benefits (disk usage/performance) of the
single source tree but in a way which has less potential for problems and is
easier for people to understand. The cost is an extra recipe/some inc files
which is probably a good tradeoff.

(From OE-Core rev: ceaa0a448dc5ebddb4f7fb94fb8a503a1c0248c3)

(From OE-Core rev: 6e9af42063c4135d3e72406a22d762425e5bebfd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:42 +00:00
Jackie Huang
63c0b4a441 packagegroup-self-hosted: add git-perltools
git-perltools provides some useful git tools like:
git-submodule, git-request-pull, git-send-email, git-am, etc.

We should have it added to the self-hosted image.

(From OE-Core rev: 4b0cbdc9c94b336f3102d4cce1886842b28ce6d5)

(From OE-Core rev: a4296a49ada629bc21ab29e2c6860583e0b4be07)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:42 +00:00
Sona Sarmadi
9d20b675dd bind: fix for CVE-2014-8500
[From upstream commit: 603a0e2637b35a2da820bc807f69bcf09c682dce]

[YOCTO #7098]

External References:
===================
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-8500

(From OE-Core rev: 7225d6e0c82f264057de40c04b31655f2b0e0c96)

(From OE-Core rev: 10128cd331af0c4378cac4fbac80a7cd11869bd3)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:42 +00:00
Bruce Ashfield
39e09709bf lttng-modules: fix mm_compaction_isolate_template build
linux-stable integrated the 3.16 commit f8c9301fa5a2a [mm/compaction: do
not count migratepages when unnecessary] with the 3.14.25 update.

So we have to update the lttng-modules Linux version codes to use the
new definition in builds greater than 3.14.24 or 3.16.

(From OE-Core rev: cf76820379746e91fc4cf01895cb98cc56987002)

(From OE-Core rev: 35857df362599becfde1b9163f6906fd3eff4ccc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:41 +00:00
Bruce Ashfield
36e42c0ddb linux-yocto/3.14: update to 3.14.26, integrate ltsi and -rt updates
Updating the 3.14 tree to the latest korg 3.14.26, as well as
integrating 3.14 LTSI content, and refreshing preempt-rt. Minor
conflict resolutions were performed between ltsi, stable and -rt

(From OE-Core rev: 8c30cec8233605cbec334fcc5c2b9ef5cf8f6482)

(From OE-Core rev: 015a64ea82866fd621ed4f9a946b6c5db7d8c3d7)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:41 +00:00
Bruce Ashfield
446acfb5a4 linux-yocto/3.14: update to v3.14.24
(From OE-Core rev: e2c2960ae79953b5ef69444d91f2e784a35bfefd)

(From OE-Core rev: e5d5c96263e15b9fb45aa7e053000228c1c26cde)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:40 +00:00
Bruce Ashfield
d7c61053da linux-yocto/3.10: update to v3.10.62
Updating to the latest korg -stable update for the 3.10 series. Minor
merge conflict resolution was done with the standard/ltsi and
standard/preempt-rt branches.

(From OE-Core rev: a87bf5d3d435d333f5ee9d15b8c641b03ff4bb9c)

(From OE-Core rev: 03d54a2ad086d017ced0f79f1770c7d7ae6219cc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:40 +00:00
Bruce Ashfield
b52e2f4f2e linux-yocto/3.10: update to v3.10.59
Updating to the latest 3.10 -korg stable update. We also bring in a meta
change for the valley island IO configuration.

(From OE-Core rev: 22d5ac7e1fc096dc11c766eda91c9e131398c6c5)

(From OE-Core rev: 4f0d827bbf0f3db01c70512c18b0e96c175fada9)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:40 +00:00
Bruce Ashfield
4b7d844d84 linux-yocto/3.10: 8250/8250_dw: fix compile failure due to stable/Yocto conflict
Updating the SRCREVs for the following fix:

   8250/8250_dw: fix compile failure due to stable/Yocto conflict

    As of merge 60a9d9fc565e4503dbb8705803e83d906afc4ad2, "Merge
    tag 'v3.10.48' into standard/base" the 8250_dw.c fails to
    compile due to an undeclared variable.

    This happens because stable brought in:

     -------------------------
     commit 6d5e79331417886196cb3a733bdb6645ba85bc42
     Author: Tim Kryger <tim.kryger@linaro.org>
     Date:   Tue Oct 1 10:18:08 2013 -0700

        serial: 8250_dw: Improve unwritable LCR workaround

        commit c49436b657d0a56a6ad90d14a7c3041add7cf64d upstream.

     [...]

        [wangnan: backport to 3.10.43:
          - adjust context
          - remove unneeded local var]
        Signed-off-by: Wang Nan <wangnan0@huawei.com>
        Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
     ------------------------

    ...which deletes the p->private_data declaration since it became
    unused at that point, however in Yocto, we also have this:

     -----------------------
     commit 0e02b050c3cafbcbf9952125089a27e02d6ecea9
     Author: David Daney <david.daney@cavium.com>
     Date:   Wed Jun 19 20:37:27 2013 +0000

        tty/8250_dw: Add support for OCTEON UARTS.

     [...]

        Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
        (cherry picked from commit d5f1af7ece96cf52e0b110c72210ac15c2f65438)
        Signed-off-by: Darren Hart <dvhart@linux.intel.com>
     -----------------------

    ...which _adds_ another user of the p->private_data.

    Here we restore the declaration in order that 8250_dw compiles.

    Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
    [PG: add root cause info to commit log.]
    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>

(From OE-Core rev: 4b4d1f38ea54ef8545e726ac9e181da08a2bad05)

(From OE-Core rev: 201acc1578b6cf5f118cdc3dcb3358e46f5ce6b3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:39 +00:00
Richard Purdie
510c27ad8c image: Avoid race over directory creation
There is a race over the do_package_qa task and the do_rootfs task
since rootfs recreates a directory. This patch disables the task
(which isn't used for images) to avoid the race:

NOTE: recipe core-image-minimal-1.0-r0: task do_package_qa: Started
NOTE: recipe core-image-minimal-1.0-r0: task do_rootfs: Started
ERROR: Build of do_package_qa failed
ERROR: Traceback (most recent call last):
  File "/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/bitbake/lib/bb/build.py", line 497, in exec_task
    return _exec_task(fn, task, d, quieterr)
  File "/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/bitbake/lib/bb/build.py", line 440, in _exec_task
    exec_func(func, localdata)
  File "/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/bitbake/lib/bb/build.py", line 212, in exec_func
    exec_func_python(func, d, runfile, cwd=adir)
  File "/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/bitbake/lib/bb/build.py", line 237, in exec_func_python
    os.chdir(cwd)
OSError: [Errno 2] No such file or directory: '/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/build/tmp/work/qemumips-poky-linux/core-image-minimal/1.0-r0/core-image-minimal-1.0'

(From OE-Core rev: 0550d112ad9c2ca9f8167dcae35200210923f2c5)

(From OE-Core rev: 6becebfea6695d9feb968d5cfd14c54e4da200b3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:39 +00:00
Richard Purdie
9ffc238025 report-error: Handle the case no logfile exists
If the task fails early, no error log may exist. Currently we crash in
that case; this change handles the situation more gracefully.

(From OE-Core rev: 1e6bfcab47f532677f87683ba2f5e5fb905e9ba5)

(From OE-Core rev: 8e52fea95441f88ab366c3c32b869cb30cc386b7)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:38 +00:00
Otavio Salvador
7461790c39 sysvinit-inittab: Disable the carrier detect requirement for serial consoles
This aligns the parameters of getty with the ones used in Debian. From the
getty(8) manpage:

,----[ getty(8) manpage ]
|  -L, --local-line
|
|    Force the line to be a local line with no need for carrier
|    detect. This can be useful when you have a locally attached
|    terminal where the serial line does not set the carrier detect
|    signal.
`----
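
For illustration, an inittab entry using this flag might look like the
following (device name and baud rate are only examples):

    S0:12345:respawn:/sbin/getty -L 115200 ttyS0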

Reported-by: Craig McQueen <craig.mcqueen@beamcommunications.com>
(From OE-Core rev: a899c362be71cb7b94bd318c57702446b017005c)

(From OE-Core rev: 9936afa01866e1024770b9ad4c378b5ce93e8298)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Tested-by: Craig McQueen <craig.mcqueen@beamcommunications.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:38 +00:00
Armin Kuster
69df8dc63f binutils: several security fixes
CVE-2014-8484
CVE-2014-8485
CVE-2014-8501
CVE-2014-8502
CVE-2014-8503
CVE-2014-8504
CVE-2014-8737

and one supporting patch.

[Yocto # 7084]

(From OE-Core rev: 859fb4d9ec6974be9ce755e4ffefd9b199f3604c)

(From OE-Core rev: d2b2d8c9ce3ef16ab053bd19a5705b01402b76ba)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:37 +00:00
Paul Eggleton
2658acef69 buildtools-tarball: restore missing git tools
Since the split out of git-perltools, some git tools (such as "git am",
"git send-email" and "git-submodule") are no longer part of the
buildtools. We need these, so add them back in.

However, adding git-perltools to buildtools triggers perl itself being
brought into buildtools as well, and we don't want that; but we also
don't want to have to hack the git recipe or indeed anything else that
starts depending on perl. Thus, add a dummy package which gets installed
in its place, in a separate package architecture that is only enabled
for buildtools to ensure it doesn't start appearing in place of
nativesdk-perl anywhere else.

Fixes [YOCTO #7033].

(From OE-Core rev: 5b051d65e797624cca3a81fc6f5c924925f3493e)

(From OE-Core rev: 1f7651763e48d5d3d661987997dc6edae17a8718)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-01-06 14:13:37 +00:00
Armin Kuster
f20e4c0cf6 libdrm: fix build issue.
Add --disable-manpages, as was done in master.

(From OE-Core rev: 8ec3851c68fa1641629db56b25352f31c4cb4774)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:09 +00:00
Awais Belal
d77ee86680 libxcb: fix socket writes
The patch has been picked up from libxcb git and is only
applicable to v1.10; the issue is fixed in mainstream v1.11.
http://cgit.freedesktop.org/xcb/libxcb/commit/?id=be0fe56c3bcad5124dcc6c47a2fad01acd16f71a

(From OE-Core rev: 6df3c5ee3f05a1ac01599785b34ba0a2a8573f3b)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:08 +00:00
Richard Purdie
36576c7087 security_flags: Fix typo for cups
(From OE-Core rev: 146b1ea632294b2830e2cfe2d1258d48cd0c0e85)

(From OE-Core rev: 7ea83ed3c662312fa17718bb4358063f1e248021)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:08 +00:00
mike.looijmans@topic.nl
7cacecf444 busybox-mdev: Install missing find-touchscreen.sh
mdev.conf references the find-touchscreen.sh script, but this file
was not being installed. Add the script to the busybox-mdev package.

(From OE-Core rev: 44f6df0dfac54845ef5c3ab1af5663d1b6c1d64b)

(From OE-Core rev: 1734b65056f379b358e70efcdba94e2abb98ce37)

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Acked-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:07 +00:00
Maciej Borzecki
0b8a386a68 wic: IMAGE_BOOT_FILES format checks in bootimg-partition source
Check for malformed entries in IMAGE_BOOT_FILES and fail early if such
entries are found.
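
For reference, a well-formed entry is a deployed file name, optionally
followed by ";" and a destination name; for example (file names are
illustrative):

    IMAGE_BOOT_FILES = "u-boot.img uImage;kernel"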

(From OE-Core rev: e56072aaaad6cfa222853a4e9e68dd8aa861de18)

(From OE-Core rev: bcc7612dc98b22dc61162a9c403ecc92989d9484)

Signed-off-by: Maciej Borzecki <maciej.borzecki@open-rnd.pl>
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:07 +00:00
Jonathan Liu
87f7ca2613 systemd: backport patch to fix reading journal backwards
(From OE-Core rev: c0650feb6ce7151a22632bab7270002314a1b6be)

(From OE-Core rev: 97a90102f5834c317c0d0f4b645fdfa410c27e04)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:07 +00:00
Richard Tollerton
068a9f5cbe font-util: Fix incorrect PKG_CONFIG_PATH
PKG_CONFIG_PATH always defaults to /usr/lib/pkgconfig, and the host
/usr/lib/pkgconfig is always checked as a fallback; however,
PKG_CONFIG_PATH is currently (incorrectly) set to /usr/lib/pkg-config in
the sysroot, which doesn't exist. On host distros where the font
encoding maps are stored under a different path than OE, this will break
font builds, because ucs2any will attempt to read the sysroot's encoding
maps with the host paths.

(From OE-Core rev: 89a29a3ad0742cd713e739d3d460be7711966679)

(From OE-Core rev: f80a57a6c2a101ba4832899d88171fdb23977af2)

Signed-off-by: Richard Tollerton <rich.tollerton@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:06 +00:00
Peter Seebach
27a34b69a0 package.bbclass: do variable fixups even when FILES was set
A number of settings (DESCRIPTION, SUMMARY, postinst, postrm,
and appends to RDEPENDS) were made only if FILES_foo was not
set for a given package. If you had a modified glibc packaging
setup that was defining FILES_glibc-gconv-somelocale, this would
prevent the automatic append of glibc-gconv as a dependency,
because extra_depends was ignored.

I think the assumption may have been that if FILES_foo was set,
DESCRIPTION_foo and SUMMARY_foo would also be set, but it seems
to me that the right answer is probably to set them if they aren't
already set, and leave them alone if they are.

(From OE-Core rev: 7e59b0c7e03fc08a6eaf9c8ccb6bfa72b4604cc5)

(From OE-Core rev: 860e91dd7cfca6afd08d7c3c62e4653fca2b790c)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:06 +00:00
Khem Raj
ec321182bd kernel.bbclass: Remove bashism
Fixes the build on systems using dash as the default shell, e.g.

errors like

run.do_strip.25842: [[: not found
| readelf: Error: Unable to read in 0x37 bytes of section headers
| readelf: Error: Not an ELF file - it has the wrong magic bytes at the start
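
As a generic illustration of this kind of change (not the exact hunk),
a bash-only test gets rewritten into its POSIX sh equivalent:

    # bash-only, fails under dash with "[[: not found"
    if [[ "$image" == "vmlinux" ]]; then echo strip; fi

    # POSIX sh equivalent
    if [ "$image" = "vmlinux" ]; then echo strip; fi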

Change-Id: I29cac15be44a02d75a3d6889b6ae9b2e19bf46af
(From OE-Core rev: 6956ffdc6e9879e32360b6ee3a3d286618807485)

(From OE-Core rev: 162996ed5a12786a2a5255ebafa15291bcc328cf)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:05 +00:00
Nathan Rossi
6540ecdb37 image_types.bbclass: Populate cpio /init as symlink
For cpio/ramfs the kernel will first attempt to execute /init and will
emit the following error as the file is empty:

    Failed to execute /init (error -13)

If /sbin/init exists, symlink to it so the kernel can immediately start
the correct init executable instead of an empty file.
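
A minimal sketch of the idea (variable names follow OE conventions; the
exact hunk may differ):

    if [ -e ${IMAGE_ROOTFS}/sbin/init ]; then
        ln -sf /sbin/init ${IMAGE_ROOTFS}/init
    fi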

(From OE-Core rev: 3505558e067fdde4ab7aaaf3c50886f292d7c166)

(From OE-Core rev: 40bf6d0d71bd534fadcbc07b6fbba856e50bc534)

Signed-off-by: Nathan Rossi <nathan.rossi@xilinx.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:05 +00:00
Baptiste DURAND
84d0b6fd98 shadow: disable nscd feature when glibc is not built with POSIX spawn functions
The shadow package configure step fails with this log output:
| checking location of faillog/lastlog/wtmp... (cached) /var/log
| checking location of the passwd program... (cached) /usr/bin
| checking for posix_spawn... no
| configure: error: posix_spawn is needed for nscd support
| Configure failed. The contents of all config.log files follows to aid debugging
| ERROR: oe_runconf failed
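
Assuming shadow's configure exposes a switch to turn nscd support off,
the fix amounts to something like the following sketch (option name
assumed, not taken from the actual patch):

    EXTRA_OECONF += "--without-nscd"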

(From OE-Core rev: 3678e504cf81f45bd0b0ab315f9cc4da87a633b5)

(From OE-Core rev: a56d726562c3b075d49125d206af56c829b3377b)

Signed-off-by: Baptiste DURAND <baptiste.durand@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:05 +00:00
Armin Kuster
37ca92bb2a glibc: CVE 2014-7817 and 2012-3406 fixes
(From OE-Core rev: 41eb5a1ae2a92034bed93c735e712d18ea3d9d1d)

(From OE-Core rev: 007144bdfb2dfb10e4b1794799f8b5aa6976266c)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:04 +00:00
Saul Wold
8dde9d4bd4 openssh: move setting LD to allow for correct override
Using "export LD" in the recipe does not allow a secondary toolchain to
override LD later. By setting it in do_configure_append instead, the
export is still used by autotools (which sets LD based on the
environment), while allowing LD to be overridden later.

[YOCTO #6997]

(From OE-Core rev: 9b37e630f5f6e37e928f825c4f67481cf58c98a1)

(From OE-Core rev: 9dd9d23096e73fa7b6f865241cdd9eff77e5b208)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:04 +00:00
Richard Tollerton
b0feb20abc qemu: disable vte if gtk is also disabled
vte will pull in the gtk libs itself. This can cause build failures if
the native gtk was built with glib>=2.41 while the sysroot native glib
is <=2.40.

Fix for [YOCTO #7077].

(From OE-Core rev: 6cea10dd8f041731269ad16b94d8e172ab1f7257)

(From OE-Core rev: 03c2129351b39cf5299c2f531483f77e1aead7fc)

Signed-off-by: Richard Tollerton <rich.tollerton@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:03 +00:00
Awais Belal
6ede9224f8 gstreamer1.0-* fix configure for out of tree build on git recipes
The autogen.sh script lives in the srcdir ($S) and must be run on git-based
checkouts of gstreamer packages in order to generate the initial
makefiles. So we fix this by cd'ing to that dir, running the required
script, and then returning to our initial dir, which is the builddir ($B).
Additionally, rather than overriding the whole do_configure step, we only
_prepend to it, to make it clear what we are doing here.
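
A sketch of such a do_configure_prepend (NOCONFIGURE is the usual
GNOME-style autogen.sh convention; the exact invocation may differ):

    do_configure_prepend() {
        cd ${S}
        NOCONFIGURE=1 ./autogen.sh
        cd ${B}
    }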

(From OE-Core rev: f4a26b72377380e60d1e7058ba40aaf49b6316e5)

(From OE-Core rev: dbb6cb42a9113038e437cf417f0b9cb25a285e9f)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:03 +00:00
Ross Burton
112c10ac64 gst-plugins-bad: add PACKAGECONFIG for the RTMP plugin
The RTMP plugin was non-deterministic, based on whether rtmpdump from
meta-multimedia had been built.  Add a PACKAGECONFIG to resolve this.
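
The added knob is presumably along these lines (configure flags assumed
from the usual autotools naming, shown only as a sketch):

    PACKAGECONFIG[rtmp] = "--enable-rtmp,--disable-rtmp,rtmpdump"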

(From OE-Core rev: b34147722b1ea43e960eae10c514325e40cdf0ba)

(From OE-Core rev: 00b62db6a53c1d47acbcae02ad1fe33aec5839e4)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:02 +00:00
Jackie Huang
484b928531 apr: avoid absolute paths for grep
apr provides usr/share/build-1/libtool, which is required by
recipes such as apache2, and configure will find grep on the host
and set an absolute path in libtool: GREP="/usr/bin/grep"

If we build apr/apr-native on a host where grep is in "/usr/bin/grep",
and re-use the sstate on another host with "/bin/grep", it will fail
when building apache2/apache2-native with:

| tmp/sysroots/x86_64-linux/usr/share/build-1/libtool: line 1093: /usr/bin/grep: No such file or directory
| tmp/sysroots/intel-x86-64/usr/share/build-1/libtool: line 1093: /usr/bin/grep: No such file or directory

(From OE-Core rev: 475709fc4f32e1ed01f45ee44819cd24e739eb43)

(From OE-Core rev: bbfa5c57ee97a96acf0b280ce342a515744b89a2)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:02 +00:00
Peter A. Bigot
0fb10cf659 bluez-hcidump: select provider as bluez4 or bluez5
bluez-hcidump was a separate package in bluez4, but was integrated into
bluez5.

(From OE-Core rev: 0dcaea0fcf38f0e382eda11e74ded1daeb98a8ac)

(From OE-Core rev: 0c18fdd44accbcc04731e1e3f1ce1faa5e350db9)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:01 +00:00
Alejandro Hernandez
f7fd58319c python3-core: Fix minimal python3 install
Added additional runtime dependencies for python3-core needed
to run the interpreter with a minimal install (codecs,io,math,reprlib).

Created python3-reprlib package to avoid getting python3-misc bringing
lots of unneeded libraries.

Fixed FILES-python3-core, missing _sysconfigdata, renamed copyreg
undetected before due to previously needed installation of python3-misc.

[YOCTO #6967]

(From OE-Core rev: bafdfb28726d0a9b30b8283b2472727e8208059d)

(From OE-Core rev: 19134b005af620a115db4530409e164eff1e5d9e)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:01 +00:00
Magnus Olsson
554962b380 python: add python-codecs runtime dependency for python-json
A piece of JSON initialization code that runs when you "import json"
tries to use the hex-decoder, and thus breaks if you do not have
python-codecs installed. Example:

    >>> import json
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib/python2.7/json/__init__.py", line 108, in <module>
        from .decoder import JSONDecoder
      File "/usr/lib/python2.7/json/decoder.py", line 24, in <module>
        NaN, PosInf, NegInf = _floatconstants()
      File "/usr/lib/python2.7/json/decoder.py", line 18, in _floatconstants
        _BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
    LookupError: no codec search functions registered: can't find encoding

This patch adds a runtime dependency on python-codecs for python-json and
re-generates the python manifests for Python v2.7. Solves [YOCTO #7020].
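
Conceptually, the regenerated manifest amounts to a dependency such as
(shown as a sketch using OE packaging variables):

    RDEPENDS_${PN}-json += "${PN}-codecs"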

(From OE-Core rev: 90fd48144f146f455b18372a9b061314ab3a3857)

(From OE-Core rev: e726819bb2b5b960a50d2ae8d4c6fe85e70c99b7)

Signed-off-by: Magnus Olsson <magnus@minimum.se>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:00 +00:00
Ross Burton
6a2ff9b067 cwautomacros: stub do_configure to avoid cleaning
cwautomacros's build system doesn't have a clean target, so stub out
do_configure to a no-op.
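
A stubbed-out no-op task is simply (a minimal sketch):

    do_configure() {
        :
    }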

(From OE-Core rev: c52f380b1df716517a585075f59546d559cc1ebb)

(From OE-Core rev: ef41e1f6b53db827a0d83f6b7620efc046d8cf5a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:00 +00:00
Wenzong Fan
b48b07f1fb coreutils-native: don't install groups
This binary is provided by shadow-native nowadays. Fixes:

  ERROR: The recipe coreutils-native is trying to install files \
    into a shared area when those files already exist. \
    Those files and their manifest location are: \
      .../tmp/sysroots/x86_64-linux/usr/bin/groups \
    Matched in manifest-x86_64-shadow-native.populate_sysroot

To reproduce the errors:

  $ bitbake shadow-native && bitbake coreutils-native
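
A minimal sketch of such a fix (the actual change may differ):

    do_install_append_class-native() {
        rm -f ${D}${bindir}/groups
    }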

(From OE-Core rev: 113225b93c55d55a330fcca7d9f996ec039fb953)

(From OE-Core rev: 40de12333e05247ff52a5837fd55d61b38af3bf0)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:19:00 +00:00
Jackie Huang
b268f7cc93 gzip: fix MakeMaker issues with using wrong SHELL/GREP
A set of substitutions is applied to all target scripts with sed,
replacing some keywords with the values detected at configure time.
This is not compliant with cross-compiling, and will cause missing-path
errors at run time like:
"/usr/bin/zgrep: line 230: /usr/bin/grep: No such file or directory"

Fixed by removing the unneeded substitutions and using the real runtime
paths instead.

(From OE-Core rev: fafdf20179cf28b24459dc0263e4ba36e5843b85)

(From OE-Core rev: 9e147ac704a5ed148568e0deeb3df12475fab23c)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:59 +00:00
Saul Wold
8e9950dbaa lzo: add debian patch for alignment issue
[YOCTO #6994]

(From OE-Core rev: 2910478f42ec23ab112da4753dbf38cefb835a3a)

(From OE-Core rev: b750efd2bf9859cba462ef9d814dd8560fda8f74)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:59 +00:00
Jackie Huang
4eab67dda8 util-linux: add switch_root to alternatives list
switch_root is provided both by busybox (in /sbin/switch_root) and by
util-linux (in /usr/sbin/switch_root), so move util-linux's version to
/sbin and set up ALTERNATIVE_LINK.
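
With the update-alternatives class, the setup might look like the
following sketch (the priority value is illustrative):

    ALTERNATIVE_${PN} += "switch_root"
    ALTERNATIVE_LINK_NAME[switch_root] = "${base_sbindir}/switch_root"
    ALTERNATIVE_PRIORITY[switch_root] = "100"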

(From OE-Core rev: cac818f0ecd0553b59b967a94766534643fecdf4)

(From OE-Core rev: 812e525ce46c7e4e87ab2e6509376235dd3523df)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:58 +00:00
Hongxu Jia
89398c3e07 Revert "busybox : fix do_compile failed on qemumips when DEBUG_BUILD (ICE)"
Since gcc has resolved this, we revert the workaround patch.

This reverts commit f026b7a211.

(From OE-Core rev: cfabce81df042121e0b98af92050333b7a284eaa)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:58 +00:00
Zheng Junling
c54b9fb2ed openssh: fix using the original config files in srcdir
Currently, we install our own ssh_config and sshd_config into ${S} in
the do_compile_append() task. So when compiling finishes, their .out
files have been generated from the original files, rather than from our
own files.

In most cases, installing "$(CONFIGFILES)" in the Makefile regenerates
the .out files before "install-sysconf" installs these two files into
$(DESTDIR), so we get what we expect.

However, when installing in parallel, "install-sysconf" may sometimes
run before "$(CONFIGFILES)". In this rare case, the .out files generated
the first time, rather than those generated the second time, will be
installed into $(DESTDIR), and thus we get an unexpected result.

This patch fixes this bug by moving the installation of our own files
from do_compile_append() into do_configure_prepend().

(From OE-Core rev: 6a60a4ba8d8e529882daa33140c9a2fc08714fb2)

(From OE-Core rev: af1096b7e1e9c15d83fb44739d449fcbaf70c220)

Signed-off-by: Zheng Junling <zhengjunling@huawei.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:58 +00:00
Ross Burton
64271845dc package_manager.py: fix arguments to string format
Multiple arguments to string formats need to be in a tuple.

Reported by Lorenz <lqb.list@gmail.com>.

(From OE-Core rev: e30a4650beabac215b6d867070b7acdb3601a4d7)

(From OE-Core rev: 54bff44ffbec47de6c8f6b60ac9d683831413a41)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:57 +00:00
Juro Bystricky
0d65f6c16d eglibc: modified option-groups.h generation
option-groups.h only explicitly #defines options that are enabled.
EGLIBC options are typically pre-processed under the assumption that if
an option is not explicitly defined then it evaluates as 0.
This assumption is correct, but it generates a compiler warning
message each time an undefined symbol is evaluated.
In order to remove the warnings, each EGLIBC option is now defined
as 1 if the option is enabled or as 0 otherwise.
The consequence is that we can no longer use #ifdef OPTION_XXX when
evaluating an option; we must always use #if OPTION_XXX.

[YOCTO #7001]

(From OE-Core rev: 7f1bdc331304a61a4836a5752bca210450b6c5b5)

(From OE-Core rev: bce598f21ee9f21228766d4bb19fef21695981da)

Signed-off-by: Juro Bystricky <jurobystricky@hotmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:57 +00:00
Drew Moseley
f8adeb08f1 mesa-demos: Move util to the front of the SUBDIRS variable.
This forces it to be built first since many of the demos
require it.  Resolves build failures such as the following
when certain demos are enabled (notably when PACKAGECONFIG
contains glut):

    make[2]: *** No rule to make target `../util/libutil.la', needed by `copypix'.  Stop.

(From OE-Core rev: 9e4b25893cc8e15e390b8f25545416ef431f0b88)

(From OE-Core rev: 1d2ab458335e3a12129c08dc81fbaf41198bdfa0)

Signed-off-by: Drew Moseley <drew_moseley@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:56 +00:00
Drew Moseley
e0cb09c6ac glew: Additional fix for generation of glew.pc.
Without this fix, building mesa-demos with the glew
PACKAGECONFIG will result in errors like the below
being logged in tmp/work/*/mesa-demos/*/build/config.log:

    configure:16529: checking for GLEW
    configure:16536: $PKG_CONFIG --exists --print-errors "glew >= 1.5.4"
    Package @requireslib@ was not found in the pkg-config search path.
    Perhaps you should add the directory containing `@requireslib@.pc'
    to the PKG_CONFIG_PATH environment variable
    Package '@requireslib@', required by 'glew', not found
    configure:16539: $? = 1

(From OE-Core rev: 9245cb4fe211da06283d53086bca3fcd5b2c8aef)

(From OE-Core rev: 77597c8b6f30090f5680af2f5251b53968727b10)

Signed-off-by: Drew Moseley <drew_moseley@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:56 +00:00
Chen Qi
d04fdd5f9e bootchart2: fix to find collector correctly in case of multilib
This patch fixes the following error, where the bootchart-collector
program could not be found when using bootchart2 on a multilib system.

In order for bootchartd to correctly find the collector program, we need
to set several vars while compiling.

(From OE-Core rev: 26518bea1d6aa0e438e6492c2af70225b431d7a1)

(From OE-Core rev: 87abce8dd583dfad2cf08ad24fd33980db819b0a)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:56 +00:00
Chen Qi
b30db74cd8 libxslt: create wrapper to avoid host system referencing
By default, xsltproc from libxslt would use configuration files under
/etc/xml. To avoid host system contamination, we create a wrapper for
this command to make it use configuration files in the sysroot directory.
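
One way to express such a wrapper, assuming OE's create_wrapper helper
and an illustrative catalog path (not necessarily the actual hunk):

    do_install_append() {
        create_wrapper ${D}${bindir}/xsltproc \
            XML_CATALOG_FILES=${sysconfdir}/xml/catalog
    }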

(From OE-Core rev: f14ecfa98baf98edf47b6820d3b0b3af376c5623)

(From OE-Core rev: 004ac11daf8f73aef68874d88dbc27301065aa83)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:55 +00:00
Paul Eggleton
7526e8d006 kernel-yocto.bbclass: fix shell syntax error
Spaces aren't valid around = in an assignment statement (not even with
bash).
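
That is, in shell (variable name hypothetical):

    # invalid, parsed as running a command named "meta_dir":
    meta_dir = "${S}/meta"
    # valid assignment:
    meta_dir="${S}/meta"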

(From OE-Core rev: fb419b1a3f5dbc5e5019be9d09c4acdbeb460c19)

(From OE-Core rev: 47f6432dbc4e5315bed15e073c4b1359c181d227)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:55 +00:00
Chong Lu
fb760567a3 gnutls: disable tpm
Disable tpm to solve the following error:

.../usr/lib64/libtspi.la: No such file or directory

trousers isn't an oe-core recipe, so disable TPM support for now.
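
Assuming gnutls's configure exposes the usual switch, the change is
along the lines of this sketch:

    EXTRA_OECONF += "--without-tpm"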

(From OE-Core rev: f735a540d2bf489547aede0745e34174c39c71bd)

(From OE-Core rev: 228c240b99404ae8ee3d020cbe2cce55ea9ff42d)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:54 +00:00
Hongxu Jia
b6079e0c71 package_manager.py: check the result of create_index
When invoking create_index failed, there was no error output,
and the build didn't break until package installation.
...
|ERROR: run-postinsts not found in the base feeds (qemux86 i586 x86
noarch any all).
...

The reason is that we used multiprocessing to execute create_index,
and did not check the result of the invocation.

(From OE-Core rev: d8921e4ea68647dfcf02ae046c9e09bf59f3e6e4)

(From OE-Core rev: d30920e3e51d731bacb68f50857b55ea0bb512bc)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:54 +00:00
Hongxu Jia
8d0600569b compress_doc.bbclass: support update-alternatives
When doc files make use of update-alternatives to resolve conflicts,
we should reconfigure update-alternatives for doc compression.

For example, with util-linux-doc:
...
update-alternatives --install /usr/share/man/man1/last.1 last.1
  /usr/share/man/man1/last.1.util-linux 100
...
was updated by doc_compress to
...
update-alternatives --install /usr/share/man/man1/last.1.bz2 last.1.bz2
  /usr/share/man/man1/last.1.util-linux.bz2 100
...

(From OE-Core rev: ba4dd1afc2476259eff52f8a68fba1344e0f0474)

(From OE-Core rev: 972b8a14e65c544082806d8bcc38195b27345a89)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:54 +00:00
Merten Sach
dfd5bbdfa9 metadata_scm: Fix crash due to uncaught python exception
Function base_get_metadata_svn_revision was crashing due to an uncaught
IndexError exception.

The except notation without parentheses is legacy syntax. It is equivalent
to 'except IOError as IndexError', which is not what we want here.

The change catches both exceptions.

(From OE-Core rev: 33bea949bae54ddc89aa83cf07d7b1ee62e2b393)

(From OE-Core rev: 4a3f37f7d004b196b9caeb558d3461452dd85edc)

Signed-off-by: Merten Sach <msach@mailbox.tu-berlin.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:53 +00:00
Khem Raj
49ece9bb51 package.bbclass: Create empty key/value if not there for shlib_provider
When we use ASSUME_SHLIBS, e.g.

ASSUME_SHLIBS = "libEGL.so.1:libegl-implementation"

then we end up with errors like below when using shlibs2 (dizzy+)

File: 'package_do_shlibs', lineno: 216, function: package_do_shlibs
     0212:            dep_pkg = dep_pkg.rsplit("_", 1)
     0213:            if len(dep_pkg) == 2:
     0214:                lib_ver = dep_pkg[1]
     0215:            dep_pkg = dep_pkg[0]
 *** 0216:            shlib_provider[l][libdir] = (dep_pkg, lib_ver)
     0217:
     0218:    libsearchpath = [d.getVar('libdir', True),
d.getVar('base_libdir', True)]
     0219:
     0220:    for pkg in packages.split():
Exception: KeyError: 'libEGL.so.1'

This is because the entry being populated does not exist,
so let's create it if it's not already there.

Change-Id: I9e292c5439e5d1e01ea48341334507aacc3784ae
(From OE-Core rev: a64f81fcef42172f788cec7a63bb4672eac99f94)

(From OE-Core rev: b143340700961f916e4a21da42b859ec014dd366)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:53 +00:00
Saul Wold
10710d7a92 resolvconf: add fixes for busybox and make it work
resolvconf was missing a script and needed readlink, which was in
/usr/bin.  Also, /etc/resolv.conf was not being correctly linked
to /etc/resolvconf/run/resolv.conf; this is fixed by the volatiles
change, which is now a file as opposed to being created in do_install.

Ensure that the correct scripts for ifup/ifdown get installed and that
resolvconf is correctly enabled at startup.

[YOCTO #5361]

(From OE-Core rev: 853e8d2c7aff6dddc1d555af22f54c4ecef13df1)

(From OE-Core rev: 10a1ae28ecee10695efb6a5bc08de4f04e0acac1)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:52 +00:00
Chong Lu
527574602a connman-gnome: fix dbus interface name
This patch resolves the following error:

"connman-dbus.xml": "connman" is not a valid D-Bus interface name

(From OE-Core rev: 964bcac02bb182340e44dc8a07b5d308f0a4a719)

(From OE-Core rev: f9398787975c6e9468e1d58974f070571e9c664c)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:52 +00:00
Ross Burton
25cdadb86b gdk-pixbuf: use ptest-gnome
(From OE-Core rev: ff2ff155ea5273b2023a1c9834b13f10249d343f)

(From OE-Core rev: b06fd31248ea170e1a87018d9aa814a670a055d9)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:52 +00:00
Richard Purdie
6b3673db74 qemu: Add missing wacom HID descriptor
The wacom driver we use is missing a HID descriptor causing it not to work
with 3.17 kernels and later. This patch adds in a descriptor to make the
driver work again.

(From OE-Core rev: 51200e0151f0a3b0ed06649ffe77ef20bb296499)

(From OE-Core rev: 9564a6ea2c4648205136a1c2e9a6cedb8a19aaf1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:51 +00:00
Peter A. Bigot
e7d461a473 useradd.bbclass: set PSEUDO_PASSWD consistent with root directory
When installing into a sysroot this class examines $D/etc/passwd for
content, then invokes useradd to make changes.  Under pseudo useradd
attempts to look up user information in directories specified by
$PSEUDO_PASSWD.  For opkg multilib installs $D is not always the same as
$IMAGE_ROOT, and the user might already be in the IMAGE_ROOT files,
causing a failure during rootfs population.

Fix this by ensuring the files pseudo looks at when doing useradd stuff
are the same ones that useradd.bbclass will be manipulating.

(From OE-Core rev: ec3417ad825c52f5137d38b91d8fcb4637a50f4c)

(From OE-Core rev: 0b7e70aafdee68825f3c65bae89bde3e03a20de8)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:51 +00:00
Peter A. Bigot
358794f2fb bitbake.conf: pseudo fall back to last-resort passwd files
Recipe packaging for the target requires permissions that are consistent
with meta/files/fs-perms.txt which specifies certain user and group
names.  In the early parts of a target build base-passwd is not yet
available to provide the target /etc files used for user/group lookup.
Allow pseudo to fall-back to the last-resort files it installs if the
target ones aren't there yet.

(From OE-Core rev: 071d364b7a758ba5e546bb18c5816ac4c2e6747c)

(From OE-Core rev: fdf7e1829810df75d180c06db615f9771f46d592)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:50 +00:00
Peter A. Bigot
3c76f85d5f pseudo: provide fallback passwd and group files
Normally pseudo is built with --without-passwd-fallback, which requires
that somebody provide target passwd and group files.  Those come from
base-passwd in OE, but base-passwd cannot be built without first
invoking operations under pseudo that require getpw*/getgr*.

Provide the absolute minimum stub files, matching in content what will
eventually be on the target, that can be used in the cases where the
target files are not yet available.  The requirements for minimum stub
are the usernames and groups identified in meta/files/fs-perms.txt.

(From OE-Core rev: 91443426246fbe13083c19801b7c74365e041271)

(From OE-Core rev: a81b9811803c7a904e0d806302636f80ce6d31a4)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:50 +00:00
Peter A. Bigot
7e9d8bcada pseudo: default --without-passwd-fallback
No good reason exists to fall back to the build host /etc files when
attempting to resolve user and group information.  Recipe dependencies
should be updated so the correct target files are available.

(From OE-Core rev: 899fe3d1d05054a10e4d427810c20ad1e34f916a)

(From OE-Core rev: 9a4f8895d76a1b2aca5a3a479beeaee8c9ffbcc2)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:50 +00:00
Peter A. Bigot
08613cc339 image.bbclass: search both rootfs and native staging for passwd files
When pseudo is configured to disallow fallback to the build host
/etc/passwd and /etc/group, the selection of ${IMAGE_ROOT} for
PSEUDO_PASSWD is insufficient, as the necessary files will not be
available until base-passwd has been installed and its pkg_postinst
script run.  Fall back to the ${STAGING_DIR_NATIVE} version of those
files until the rootfs versions are available.  (The native copies are
never modified by the build; the ones in ${STAGING_DIR_TARGET} are
updated and may contain settings not consistent with what would be
created by post-install useradd/groupadd commands invoked in the image
rootfs.)
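
A sketch of the resulting setting (relying on pseudo's new support for
multiple search directories, added in the pseudo commits that follow in
this list):

    PSEUDO_PASSWD = "${IMAGE_ROOTFS}:${STAGING_DIR_NATIVE}"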

(From OE-Core rev: 8c653bafaa32126c54400bb56b9a94f07cd33197)

(From OE-Core rev: 185b38b5e9ae22e5ba66bd2edc54f3971a9c97cf)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:49 +00:00
Peter A. Bigot
5381289530 pseudo: support multiple search directories in PSEUDO_PASSWD
This makes it possible to use --without-passwd-fallback when building
images where the preferred passwd files are not available until after
installation has begun.

(From OE-Core rev: 15b3b796d6e06fb7a7867d132b234d783e733531)

(From OE-Core rev: 31a8d1a14f39908ad1aa855434893994a127a19e)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:49 +00:00
Peter A. Bigot
d57978aafc pseudo: support --without-passwd-fallback configuration option
A bug in pseudo 1.6.2 results in lock failures if this option is
present.

(From OE-Core rev: eb5b99e4fbfdf31497a4606fc55cab268ec8d654)

(From OE-Core rev: 516f71ac0583d9fe6aded8cd4332e6b329039a2e)

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:48 +00:00
Chen Qi
7fe785d692 procps: install symlink under /etc/sysctl.d in case of systemd
Install the /etc/sysctl.d/99-sysctl.conf symlink when using systemd, so
that /etc/sysctl.conf is taken into consideration by systemd-sysctl.
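
A sketch of the install step (paths per the commit message; the actual
hunk may differ):

    do_install_append() {
        install -d ${D}${sysconfdir}/sysctl.d
        ln -sf ../sysctl.conf ${D}${sysconfdir}/sysctl.d/99-sysctl.conf
    }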

(From OE-Core rev: a32869fcbcb5f31741a32fdca14e7f38c2abace6)

(From OE-Core rev: 897faad73e478cfb4a884ff83180bdba2420e7c4)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:48 +00:00
Shiqun Lin
b3e9e56756 bind: clean host path in isc-config.sh
* /usr/bin/isc-config.sh
* /usr/bin/bind9-config - hardlink to isc-config.sh

(From OE-Core rev: c2332d304a2c872e97653c980b090efa2181123b)

(From OE-Core rev: f8385a94ef915c3905c50ab3c774c2dd9d89ba47)

Signed-off-by: Shiqun Lin <Shiqun.Lin@windriver.com>
Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:48 +00:00
Shiqun Lin
5d7fe4a07b e2fsprogs: clean host path in compile_et, mk_cmds
* /usr/bin/compile_et
* /usr/bin/mk_cmds

(From OE-Core rev: 7ac224ee9f626d0ba2305bc4608b29f047cc65a4)

(From OE-Core rev: 34f08d81aa721f620a23a1b671f0da87f20ff020)

Signed-off-by: Shiqun Lin <Shiqun.Lin@windriver.com>
Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:47 +00:00
Shiqun Lin
6062cbe8db bash: clean host path in bashbug
* /usr/bin/bashbug

(From OE-Core rev: a745b4b790fe2550fafa731c02f33dd39a9d8651)

(From OE-Core rev: 89a9097626051771062ed44954e6fad1475e9ced)

Signed-off-by: Shiqun Lin <Shiqun.Lin@windriver.com>
Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:47 +00:00
Yue Tao
9f41c7df9e libpam: Stop a QA WARNING when building the multilib version
WARNING: QA Issue: lib64-libpam: Files/directories were installed but
not shipped
  /usr/sbin/pam_console_apply

This is because the package name is changed to mlprefix-pam-plugin-console,
so the file must be appended to that package instead.

(From OE-Core rev: a9bc116ab80d920b781a8ae31370220fac683f3d)

(From OE-Core rev: e741c3b4854d82dd9055425e1f08ef40113197fa)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:46 +00:00
Jackie Huang
4e0180b746 python3: several fixes for cross compiling
* Add a patch to use CROSSPYTHONPATH as PYTHONPATH for
  PYTHON_FOR_BUILD; otherwise CROSSPYTHONPATH is never used,
  and the target build paths are used to find libraries.

* Add a patch to avoid finding host headers and libs

* Fix a typo: s/python-native3/python3-native/

(From OE-Core rev: d3d00163671bda5395c9046c1109f711772e4ed9)

(From OE-Core rev: 4cda344f2c159b81588e2418071865f501d46be9)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:46 +00:00
Wolfgang Denk
fa75856b4b perl: fix PERL5LIB settings
The PERL5LIB settings in the perl wrapper script did not include the
"site_perl" or "vendor_perl" directories, which caused some errors.

See https://bugzilla.yoctoproject.org/show_bug.cgi?id=6890

(From OE-Core rev: 477ca2da14abaf072d3645c4be916760a48b8938)

(From OE-Core rev: df4ace81f24e17d5fbf68ddf012b18d4bfc8c156)

Signed-off-by: Wolfgang Denk <wd@denx.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:46 +00:00
Krzysztof Sywula
8d8e8d0a8e dtc: old SRC_URI died, changing to new working one
(From OE-Core rev: 131a17f014e6373dae526cc927588ccc0fedc38d)

(From OE-Core rev: 239dd6d2a15677fc21bdae5ce601390b71d4b14a)

Signed-off-by: Krzysztof Sywula <krzysztof.m.sywula@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:45 +00:00
Robert Yang
4157aa7c0b toolchain-shar-template.sh: fix the text files in the top dir
It only fixed the text files in native_sysroot, but there might be some
files in the top install dir (whose var name is target_sdk_dir in the
code) which also need to be fixed.

It used "find $native_sysroot"; now it also uses
"find $target_sdk_dir -maxdepth 1", and the long line is split into
smaller ones.

(From OE-Core rev: 104990923f82d129a0fc8e6cd5bf0224751d5d03)

(From OE-Core rev: 6dae14efe736f3d9976a8a2e06df6c14082d0588)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:45 +00:00
Ross Burton
b0811fe4a2 bind: use PACKAGE_BEFORE_PN instead of PACKAGES_prepend
Appending or prepending to PACKAGES breaks when the package is built natively,
so use PACKAGE_BEFORE_PN instead.
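
An illustrative sketch (package name hypothetical):

    # fragile for native builds:
    #PACKAGES_prepend = "${PN}-utils "
    # robust alternative:
    PACKAGE_BEFORE_PN = "${PN}-utils"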

(From OE-Core rev: 23d7223a21582edefc4e30d76f94f8e81a543af9)

(From OE-Core rev: 0475d37cde09d62667b3edf0a928c563ed7efc50)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:44 +00:00
Hongxu Jia
8f21845460 gcc-4.9: fix the compile failure of 'defaults.h' not found
While compiling gcc-crosssdk-initial-x86_64 on some hosts, there is
an occasional failure where the test for the existence of defaults.h
doesn't work.
...
| tmp/work-shared/gcc-4.9.1-r0/gcc-4.9.1/gcc/calls.c:1240:
error: 'STACK_CHECK_MAX_VAR_SIZE' was not declared in this scope
...

The reason is that tm_include_list='** defaults.h' rather than
tm_include_list='** ./defaults.h'

So we add a test condition for this situation.

(From OE-Core rev: fec684512c6f934d7a847b0c9f5151da81426910)

(From OE-Core rev: 174b7c3fe0240ff6d897b5418a8bc020086f7ba1)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:44 +00:00
Roy.Li
3d101b429d rpm: fix the rpm addsign function
(From OE-Core rev: d382c1541bec301468119268f4940ae15c326b1c)

(From OE-Core rev: 1a7c242d29b657f3ba2bd629535ef7d833b5b118)

Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:44 +00:00
Ming Liu
a4814ac1b0 rpm: realpath is required before expanding _dbpath in chroot
A regression is introduced by commit 66573093:
[ rpm: Fix rpm relocation macro usage ]

_usr turned out to be a relative path to support dynamic config after
that, but it's being used in places as an indicator to locate substrings,
so we must get its real path in advance.

(From OE-Core rev: 1247955a907f51aac7efd305d26856e263c11a65)

(From OE-Core rev: 4b9ae27e3ac9cf55bff5418fe884738b8ec5ab9b)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:43 +00:00
Otavio Salvador
9b0df21b87 toolchain-scripts.bbclass: Export KCFLAGS to ensure sysroot is provided
When building U-Boot, the lack of a proper sysroot can trigger the
following error:

,----
| arm-poky-linux-gnueabi-ld.bfd: cannot find -lgcc
| make[2]: *** [examples/standalone/hello_world] Error 1
| make[1]: *** [examples/standalone] Error 2
| make: *** [examples] Error 2
`----

Guillaume Fournier has posted a very complete analysis of the
problem[1].

1. https://lists.yoctoproject.org/pipermail/meta-freescale/2014-November/011270.html

The use of KCFLAGS makes the build of U-Boot work out of the box, now
that it uses the Linux kernel build system.
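
The export presumably looks like the following sketch in the generated
environment setup script (SDKTARGETSYSROOT per OE SDK conventions):

    export KCFLAGS="--sysroot=$SDKTARGETSYSROOT"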

Reported-by: Guillaume Fournier <gfournier@brioconcept.com>
(From OE-Core rev: 50437f9c187f1a884825a8d1ec12da47a5e58670)

(From OE-Core rev: 68954e7e62b8b494168bf83fec517e751c679b21)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:43 +00:00
Ross Burton
2d51c7e62c docbook-xsl-stylesheets: fix do_configure typo
do_configure was incorrectly spelt do_configre, which with recent changes to
base.bbclass meant that "make clean" was invoked, a target which doesn't exist.

(From OE-Core rev: e7b731a1a358e0007dba1038ad504888bec5916e)

(From OE-Core rev: 4a4665b2ee1dfd2d8cdb48fc4e994522bbcd748e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:43 +00:00
Jackie Huang
22861f8031 classextend: Do not extend names that already have a multilib prefix
If a BSP supports two or more multilibs, for example:

    MULTILIBS = "multilib:lib32 multilib:lib64"

and a variable is already extended to include multilib variants,
for example in populate_sdk_base:

    commit 396371588c7fd2d691ca9c39cd02287e43cb665b
    Author: Richard Purdie <richard.purdie@linuxfoundation.org>
    Date: Thu Jul 24 22:09:09 2014 +0100

    populate_sdk_base: Extend TOOLCHAIN_TARGET_TASK to include multilib variants

    Most people expect the toolchain from a multilib build to contain multilib
    components. This change makes that happen and is easy for users to override
    should they want something different.

    Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

The mapping clsextend.map_depends_variable("TOOLCHAIN_TARGET_TASK")
ends up with a wrong double extended package name like:

    lib32-lib64-packagegroup-core-standalone-sdk-target

This patch avoids such issues.

(From OE-Core rev: c4e9b2aa894d59fe951038b3b73795b6891df70a)

(From OE-Core rev: fbb8e9942333befad9e7e5da703c7970eda1c1a4)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:42 +00:00
Chen Qi
74dec87ce2 systemd: add PACKAGECONFIG for 'audit'
Add a PACKAGECONFIG for 'audit'; otherwise there would be warnings like
those below, which could possibly lead to a do_rootfs failure.

WARNING: QA Issue: systemd-analyze rdepends on audit, but it isn't a build dependency? [build-deps]
WARNING: QA Issue: systemd rdepends on audit, but it isn't a build dependency? [build-deps]
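
The added knob is presumably along these lines (configure flags assumed
from autotools conventions, shown only as a sketch):

    PACKAGECONFIG[audit] = "--enable-audit,--disable-audit,audit"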

(From OE-Core rev: b4e6e0aa0229d2ce4c8bee24581c127a31109676)

(From OE-Core rev: 185b2c73a1c62a17bf5a190459aac4a3a5de75d5)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:42 +00:00
Chen Qi
2558a15919 systemd: fix libidn floating dependency
WARNING: QA Issue: systemd rdepends on libidn, but it isn't a build dependency? [build-deps]

(From OE-Core rev: 83be6e94f35b44baa6c363c9518f85e7670246f3)

(From OE-Core rev: 0acc66fb4a1d037ad2e33276ac40cd918f7308d9)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:42 +00:00
Robert Yang
2c13c5d3fe apr-native: Set CONFIG_SHELL to /bin/bash
apr-native provides usr/share/build-1/libtool, which is required by
recipes such as apache2-native. If we don't set CONFIG_SHELL to
/bin/bash, then:

1) If we build apr-native on a host which is "/bin/sh -> bash", the
   interpreter in usr/share/build-1/libtool would be "#!/bin/sh".
2) When we re-use apr-native's sstate on a host which is
   "/bin/sh -> dash", there would be errors.

(From OE-Core rev: 38d83009dfe77437533969ce681605a9ab9534ac)

(From OE-Core rev: 3a1e6615b8ff3edc9ed2b16baf182673140ca3d2)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:41 +00:00
Chen Qi
769c4ebb4f systemd: avoid using system-auth
Patch the systemd-user PAM configuration file to avoid using the
system-auth file. Instead, we use the common-* files.

(From OE-Core rev: a3c863c4a65737a410a0353d97a0ee538eb82434)

(From OE-Core rev: 95d3c25e5214962053e07102e25a906262bf65c2)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:41 +00:00
Shiqun Lin
69767a27cc libtool: remove build host paths from installed libtool
The resulting libtool contains references to paths from the build host.

The variables below contain hard-coded build paths from the host:
LTCC=
lt_sysroot=
sys_lib_search_path_spec=
LD=
CC=
compiler_lib_search_dirs=
predep_objects=
postdep_objects=
compiler_lib_search_path=

(From OE-Core rev: d27c4226f600584f83f66c86b0988a165e8ecb75)

(From OE-Core rev: b7ab8e045c97d82ed9c424310bb0dbe39cb1dcf0)

Signed-off-by: Shiqun Lin <Shiqun.Lin@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:40 +00:00
Kai Kang
c9a9e0199b beecrypt: add option --with-dev-dsp
Add this configure option so developers can control whether
/dev/dsp should be used on the target, instead of deciding based
on whether the device file happens to exist on the build server.

(From OE-Core rev: 5960262802c394cb6a54ede30e4994929621ca06)

(From OE-Core rev: ebf601ad063b935f605c27c8a107ea0cb0fdf221)

Signed-off-by: Zhang Xiao <xiao.zhang@windriver.com>
Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:40 +00:00
Martin Hundebøll
c458dde820 scripts: use '/usr/bin/env' in shebangs with python
To support Yocto on systems with python3 as the default version, scripts
should use /usr/bin/env python in the shebang, as this allows the use of
a fake env to mimic python2 as default version.

This patch simply replaces occurrences of #!/usr/bin/python with
 #!/usr/bin/env python and was done with this oneliner:

     git grep -lE '^#!/usr/bin/python' | xargs \
         sed -i 's|/usr/bin/python|/usr/bin/env python|'

(From OE-Core rev: 6d3de22a19657a413e01d7bb5fd74d16c00dc696)

(From OE-Core rev: 129dff8cc5a6cbfa2a3f0d21aeb16efabe5b4575)

Signed-off-by: Martin Hundebøll <martin@hundeboll.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:40 +00:00
Gary Thomas
9393fdbd7e python-pygtk: Clean up incorrect "fix"
This patch removes most of "dirty fix #1" which is no longer needed
(no dependency on python-pygobject-dev exists).  A side effect is
that the pygtk code generator will also be installed.

Merge 'fix-path.inc' into this recipe as it is not used by any other
recipe.

(From OE-Core rev: 02985d315f71126d3af789b0666dbf428f586e4b)

(From OE-Core rev: a52a4dd06e9c768292725ef57031f69bddcfdffe)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:39 +00:00
Richard Tollerton
0cfc7dd0a5 populate_sdk_base: improve POSIXLY_CORRECT compat
The install script is sometimes called under POSIXLY_CORRECT. This
requires two fixes be made:

1. `find -perm /0000` is a GNUism; replace it with an equivalent boolean
expression using `-perm -0000`.

2. POSIX grep requires that all options be passed on the command line
before any file operands; otherwise, the options are parsed as filenames
(see the sketch below).
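
An illustrative comparison (a sketch, not taken from the patch):

    grep -l 'pattern' file1 file2   # portable: options before file operands
    grep 'pattern' file1 -l         # GNU extension; POSIX reads '-l' as a filename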

(From OE-Core rev: 0870d9115546ad3b456af52ed45e46e637874a48)

(From OE-Core rev: 21cfc81493d9f8ae15194b39a2b8e1c1d228f1e2)

Signed-off-by: Richard Tollerton <rich.tollerton@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:39 +00:00
Jason Wessel
04ffa0b961 ncurses, busybox, cml1.bbclass: Fix menuconfig display corruption
Previously there was a change to the ncurses compile to make it more
like the typical way it was compiled on a host system.  This fixed a
whole class of host machines, but masked the real underlying problem
with the display corruption issues and menuconfig.

The corner case that led to the discovery that the wrong curses.h file
was getting used was when there was no curses libraries at all on one
of the development hosts.  What had happened before was that
/usr/include/curses.h on the host system had to match closely enough
to the curses.h in the sysroot and then linking against the sysroot
version of curses.so was ok (meaning no display corruption).  But on
some systems with ncurses.h vs curses.h such as SuSE hosts, there were
still issues.

If we fix the root of the problem and force mconf and lxdialog to
use the correct headers and libraries from the sysroot, there are no
further issues and the menuconfig target works properly.  It also
means we can back out the custom compilation flags from the ncurses
recipe because they are no longer needed.

The kernel part of the menuconfig / nconfig changes will be merged
separately; this is all based on:

https://lkml.org/lkml/2013/3/3/103

(From OE-Core rev: 889e02659dd396feba24f0b0ee6b4043c3f3735a)

(From OE-Core rev: b8bba551f96f7ff7c2eb772bbdc0b38ed2449683)

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:39 +00:00
Chen Qi
fdf882c091 nfs-utils: change owner/group of directories in do_install
Previously, the owners/groups of directories like /var/lib/nfs/statd
were changed in the init script, /etc/init.d/nfscommon. This was really
a workaround; we need to change them at do_install time.

This patch fixes the problem by changing the owners/groups at do_install
time.

Besides, the configure option '--with-statduser=nobody' is changed to
'--with-statduser=rpcuser', and /var/lib/nfs/statd/state is modified to have
permission 0644, just like other distros (Ubuntu, Fedora, etc.) do.

(From OE-Core rev: 8c27a1e25ae42a435ab7d290cab40f94f9286243)

(From OE-Core rev: 10b5adcc6d3d3ae1ffe64de72777da091ba98274)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:38 +00:00
Jackie Huang
2d40d3228d perl: set the perl libraries search path
The default value for this is ../../lib which ends up with
something like:
| ./sysroots/x86_64-linux/usr/bin/perl-native/perl5.20.0.real \
| "-I../../lib" "-I../../lib" "-MExtUtils::Command::MM" -e pod2man \
| "--" --section=0 --perm_rw=644 perldoc.pod blib/man1/perldoc.1

In this case, nativeperl will find libraries from the target build.
When using an x86-64 host to target Haswell, you can end up with
../../lib including precompiled modules which use Haswell
instructions, and the build fails with:
| Running pm_to_blib for dist/if directly
| Skip ../../lib/if.pm (unchanged)
| Makefile:457: recipe for target 'manifypods' failed
| make[1]: *** [manifypods] Illegal instruction

So set it to use the -native ones instead of those from the target
build.

(From OE-Core rev: 82ac2a29126dc38d23c278b82d129d73b17000b7)

(From OE-Core rev: 6ba03a72b1bed2f6367d2a1486ef1436bdd44a5b)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:38 +00:00
Mark Hatle
a832f18ac2 gcc: Fix intermittent failures during configure
If configure or any of the components it uses from the shared work directory
change, do_configure may fail.

An existing do_preconfigure was created to handle these conditions, but
a 'sed' operation was missed, and a call to gnu-configize was also missed.

(From OE-Core rev: 21c2cfff14442cf224e3568bdbb9bcd4070be247)

(From OE-Core rev: cb7548feaeb07eca4855223ff2fa6676882b6424)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:37 +00:00
Richard Purdie
a0d91bbdef perl: Enable rebuilds to account for configuration changes
If configure/compile was rerun for perl, changes such as libdir changes
were not being picked up. To fix this we add "make clean"
functionality, if the makefile is present.

In this case we also need to delete the .so file, else some perl modules
try to load the target arch libraries, leading to build failures. I'd
love it if there were a better way to do this and am open to better
proposals, but this was the best I could find, not being a perl expert.

(From OE-Core rev: 3b8adee2756085df47b90357eed4c20ee98c7cd1)

(From OE-Core rev: 326590db7067660af6c896a130c793cd1b55f650)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:37 +00:00
Jackie Huang
68f431e850 gcc: backport two patches to fix ICE in dwarf2out_var_location
The first patch fixes the ICE in dwarf2out_var_location, at
dwarf2out.c.

r212171:
    * except.c (emit_note_eh_region_end): New helper function.
    (convert_to_eh_region_ranges): Use emit_note_eh_region_end to
    emit EH_REGION_END note.
    * jump.c (cleanup_barriers): Do not split a call and its
    corresponding CALL_ARG_LOCATION note.

But it introduced a regression issue:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63348

so backport the fix for the regression as well:

r215613:
    PR rtl-optimization/63348
    * emit-rtl.c (try_split): Do not emit extra barrier.

(From OE-Core rev: de52db1b1b0dbc9060dddceb42b7dd4f66a7e0f3)

(From OE-Core rev: 0447732a7884ef49c7afbc2b408848e969666516)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:37 +00:00
Alexandru DAMIAN
e5c3c1501b toaster.bbclass: read elapsed time from the stats file
We read the elapsed time from the build stats file, instead
of computing it independently.

[YOCTO #6833]
[YOCTO #6685]

(From OE-Core rev: 4f5a4ec0cdaf078463f04be4a6683816e9b78d5f)

(From OE-Core rev: 9c1bc0c2e49e1e98edb17bf0f00077497dd26272)

Signed-off-by: Alexandru DAMIAN <alexandru.damian@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:36 +00:00
Aníbal Limón
bb6990e057 perl: Fix bug when installing SDK in a custom directory
Add the site_perl and vendor_perl directories in create_wrapper;
this fixes a bug when searching for libraries in those directories.

[YOCTO #6890]

(From OE-Core rev: ea2584213e2e852157ec2490c84cc6c03feb4b40)

(From OE-Core rev: 857661e7d55ac73c7c360f49a0103f2a5cd8a310)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:36 +00:00
Chong Lu
2f2b081589 docbook-xsl-stylesheets: add perl to RDEPENDS
This solves the following warning:

docbook-xsl-stylesheets-1.78.1: docbook-xsl-stylesheets requires /usr/bin/perl,
/bin/bash, but no providers in its RDEPENDS [file-rdeps]

(From OE-Core rev: d7a277b35bcc67050046c76fb70412101679a545)

(From OE-Core rev: 5ab3d737a9961725b97204a99167f4e0df2fa005)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:36 +00:00
Hongxu Jia
9d4df89e5b python-smartpm: report warn rather than error during install with --attempt
With the following config and build image:
...
IMAGE_INSTALL_append = "shadow man-pages"
EXTRA_IMAGE_FEATURES += "doc-pkgs"
...

There is an error during install with --attempt, and it breaks the build.
...
|error: file /usr/share/man/man5/passwd.5 from install of
shadow-doc-4.2.1-r0.i586 conflicts with file from package
man-pages-3.71-r0.i586
...

For complementary and 'attemptonly' package processing, we should make sure
warnings rather than errors are reported.

[YOCTO #6769]

(From OE-Core rev: beb2e989e24e671fecd37805876dfb2375ee0df6)

(From OE-Core rev: 81f3b5b5fba98509be9d159dde828b800afe2c4d)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:35 +00:00
Hongxu Jia
a6d7512b5e man-pages/shadow: resolve man page conflicts
When invoking smart/rpm to install man-pages and shadow-doc, there
is a build failure:
...
|error: file /usr/share/man/man5/passwd.5 from install of
shadow-doc-4.2.1-r0.0.core2_64 conflicts with file from
package man-pages-3.70-r0.0.core2_64
|error: file /usr/share/man/man3/getspnam.3 from install of
shadow-doc-4.2.1-r0.0.core2_64 conflicts with file from
package man-pages-3.70-r0.0.core2_64
...
Use the alternatives mechanism to fix it.

As the README in man-pages says: "Note that sometimes these
pages are duplicates of pages also distributed in other
packages. Be careful not to overwrite more up-to-date
versions." So we set man-pages with a lower priority.
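
A sketch of what an alternatives setup looks like in a recipe (the link
name and priority value here are illustrative, not quoted from the patch):

    inherit update-alternatives
    ALTERNATIVE_${PN}-doc = "passwd.5"
    ALTERNATIVE_LINK_NAME[passwd.5] = "${mandir}/man5/passwd.5"
    # lower than shadow-doc's priority so shadow's page wins
    ALTERNATIVE_PRIORITY = "10"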

[YOCTO #6769]

(From OE-Core rev: 32357da67fa640bc0c14048af1d7b8dbbe8e775e)

(From OE-Core rev: 222e5c9202cb4d20ee8f9f2b9845a5922811e9fc)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:35 +00:00
Bogdan Purcareata
637580101c shadow: enable support for subordinate IDs
The subordinate IDs support in pkg-shadow allows unprivileged users to manage a
set of UIDs and GIDs. These subordinate IDs are specified by root, and can be
further used by the unprivileged user they have been assigned to. That user can
then create, for example, a user namespace in which they are allowed to manage
their own set of users and groups from the pool of subordinate IDs. More details
can be found at http://lwn.net/Articles/533617/.
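
For illustration, the subordinate ranges live in /etc/subuid and /etc/subgid
(the user name and values below are made up):

    # user : first subordinate ID : count
    alice:100000:65536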

Pull a required change from upstream in order to make shadow cross-compile with
subordinate IDs support. Enable flag in recipe.

Changes since v1:
- update changelog

(From OE-Core rev: 8548868c05e52700fd4712298b1705b8ec7ae446)

(From OE-Core rev: 986e7f4a937bb21115ed56d981baa863365487ea)

Signed-off-by: Bogdan Purcareata <bogdan.purcareata@freescale.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:35 +00:00
Roy.Li
673bb3cffc iproute2: backport a patch to make adding vxlan link success
Without this patch:
    $ ip link add vxlan0 type vxlan id 51 group 238.1.1.1 dev eth0
    Error: argument "vxlan0" is wrong: Unknown device
    $

With this patch:
    $ ip link add vxlan0 type vxlan id 51 group 238.1.1.1 dev eth0
    $ ifconfig -a |grep vxlan0
    vxlan0    Link encap:Ethernet  HWaddr da:61:56:2e:c2:20
    $

(From OE-Core rev: 4f2873c8567738310f7e86c633c6da759554b21a)

(From OE-Core rev: 2d90e1e01d4a732a52d50c654022a4dbd508e084)

Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:34 +00:00
Gary Thomas
09f6349eeb python-pygtk: Restore pkg-config file
Some previous version of this recipe was errantly removing the pygtk-2.0.pc
(pkg-config) file.  This is needed for other packages to be able to build
against this library.

Also update the .pc file to match current pkg-config use (libdir was missing).

(From OE-Core rev: 8c6158d7bcca2ecf3e150d1e8eaaaa4ece58e1e2)

(From OE-Core rev: 94099c4b198aca6bb3c759a11ce8c62e6130a96d)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:34 +00:00
Ming Liu
31f39a91e6 pciutils: Fix multilib header conflict - pci/config.h
pci/config.h conflicts between 32-bit and 64-bit versions.

(From OE-Core rev: 21fb6bc1b030cab14e2c9b14607b34a62262ac06)

(From OE-Core rev: e54e5b792f9b6fa8bb2ed3123518709c882859a4)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:33 +00:00
Pascal Bach
b8ea994e11 image_types.bbclass: Make ubi depend on ubifs
The ubi command assumes the ubifs file is present.
This change makes sure that is really the case.
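
A sketch of the dependency, using the image-type dependency variable from
image_types.bbclass (assumed here rather than quoted from the patch):

    # building the ubi image type now builds ubifs first
    IMAGE_TYPEDEP_ubi = "ubifs"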

(From OE-Core rev: 0a947408f32d7ab10d2004e7d9332296b82191a3)

(From OE-Core rev: 0fff562384670b64ed207423b7f596c99baa71c4)

Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:33 +00:00
Chong Lu
3725cdf43a kmod: fix debuginfo is missing in shared library
The INHIBIT_PACKAGE_STRIP variable causes debuginfo to be lost from shared
libraries. The test cases of kmod contain kernel modules for many different
architectures, so strip gets confused by the architectures and throws errors.
Pack the kernel modules in the test cases so that the strip command does not
fail.

(From OE-Core rev: 3576399ed163cb3136ee1a2077622035d2033158)

(From OE-Core rev: a6b79ecb502df0f935f5c8575ace9e781770e5c1)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:32 +00:00
Ross Burton
3244f4540c systemd: don't add files and dependencies from units Conflicts
Adding dependencies and moving files based on Conflicts tags in unit files isn't
right, mainly as it means that systemd depends on systemd-binfmt, because the
latter ends up containing the shutdown.target unit.

(From OE-Core rev: 02767aac492cedf6ccd02648b8e65751cc23c11c)

(From OE-Core rev: 9884e4f872b9ff354832053c86842dd0d3b0c8b3)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:31 +00:00
Paul Barker
ec853e4eea package_manager: Fix BAD_RECOMMENDATIONS for opkg
In package_manager.py, when using opkg as the packager, the command 'opkg <args>
info <pkg>' is called to get information about each pkg in BAD_RECOMMENDATIONS
in a format that can be written to the status file. The 'Status: ...' line is
modified and all other lines are passed through. Changing the verbosity level
argument for this command changes what is written into the status file.
Crucially, with the default verbosity level, no blank lines are being printed by
the opkg command and so no blank lines are being written to the status file to
separate each package entry.

The package parsing code in opkg expects package entries in the status file to
be separated by at least one blank line. If no blank line is seen, the next
package entry is interpreted as a continuation of the last package entry, but
the new values overwrite the old values.

So with the default verbosity level, a blank line follows some package entries
and these are parsed. The others are dropped due to the lack of blank lines. As
the verbosity increases, more debugging messages add blank lines and more
packages are parsed.

The solution to ensure that this works correctly regardless of the verbosity
level is simply to add a blank line after the output of 'opkg info' is written
to the status file, ensuring that the next package entry is separated from the
current one.
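
For illustration, the status file is a sequence of stanzas that the parser
expects to be separated by blank lines (field values below are made up):

    Package: packagegroup-base
    Status: install ok installed

    Package: unwanted-recommendation
    Status: deinstall hold not-installed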

[YOCTO #6816]

(From OE-Core rev: 3fa24eee41c26fecd5e4f680082288ec772d2de9)

(From OE-Core rev: ae776a39376629bfada9bd5fabc949e9277774ba)

Signed-off-by: Paul Barker <paul@paulbarker.me.uk>
Cc: Chris Carr <chris.carr@ge.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:31 +00:00
Chen Qi
07fdc5a275 bind: fix to use correct environment file in service file
Use /etc/default/bind9 as the environment file in named.service.

(From OE-Core rev: 0ee1fa68a4d749585c43fc706c8da6e849d10857)

(From OE-Core rev: 3de15ae4cc8a561859e6761ab6e6b8c45eaad646)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:31 +00:00
Johan Hovold
dda084e13c udev: fix uevent-helper disable
Make sure that /proc/sys/kernel/hotplug exists before trying to disable
the uevent-helper mechanism.

Since kernel commit 86d56134f1b6 ("kobject: Make support for
uevent_helper optional.") the kernel can be built without uevent-helper
support. In this case /proc/sys/kernel/hotplug does not exist and the
current sysvinit script fails with

	/etc/rcS.d/S04udev: line 132: can't create /proc/sys/kernel/hotplug: nonexistent directory

when trying to disable the uevent-helper mechanism during boot.

Note that a single NUL character has always been sufficient to disable it.
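
A minimal sketch of the guard (the path comes from the message above; the
exact script logic in the patch may differ):

    # only try to disable the uevent helper if the kernel provides it
    if [ -e /proc/sys/kernel/hotplug ]; then
        printf '\0' > /proc/sys/kernel/hotplug
    fi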

(From OE-Core rev: f7b8445f2e89ad0a59c2859f9eb26855769f1070)

(From OE-Core rev: 8e666f643e6a8720ca604706afed91fba4096eef)

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:30 +00:00
yadi.hu@windriver.com
044039dc8e BusyBox: fix broadcast address not being set and initialized correctly
When using udhcpc along with the ip command (/sbin/ip), the broadcast address
is not assigned. The broadcast address is assigned successfully when using
udhcpc on a system without the ip command.

with the ip command:
    $ ifconfig eth0 | grep Bcast
          inet addr:128.224.162.141  Bcast:0.0.0.0  Mask:255.255.254.0
    $
without the ip command:
    $ ifconfig eth0 | grep Bcast
          inet addr:128.224.162.141  Bcast:128.224.163.255  Mask:255.255.254.0
    $

/etc/udhcp.d/50default [simple.script] is called by the DHCP client to set
the IP address. With ifconfig, the broadcast option does not matter, because
ifconfig automatically calculates and assigns the broadcast address when the
option is missing. The ip command, however, requires the broadcast address
to be given explicitly.
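
A sketch of the explicit form the script must use with ip (addresses taken
from the example above; 255.255.254.0 corresponds to /23):

    # ifconfig would compute the broadcast address; ip must be told
    ip addr add 128.224.162.141/23 broadcast 128.224.163.255 dev eth0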

(From OE-Core rev: 666c6a126cd12d2555361f5b573b6a26437df780)

(From OE-Core rev: 479baa37ba366f5371fbc35d95d39e27f9b14cd2)

Signed-off-by: Hu <yadi.hu@windriver.com>
Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:30 +00:00
Shan Hai
0bc80a3850 ldconfig-native: fix an endianness bug
Some ELF header fields were read with the wrong size on 64-bit
big-endian machines; fix this by reading those fields with read64
instead of read32.

(From OE-Core rev: adbf0b1fdf897076e5e3dec2443c8927f315c2e6)

(From OE-Core rev: 7799b884f57642a48f9ed9a829a176d83b474516)

Signed-off-by: Par Olsson <Par.Olsson@windriver.com>
Signed-off-by: Shan Hai <shan.hai@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:30 +00:00
Andreas Müller
1020bc3de3 gdb-cross: build with python support
Variable contents are displayed properly when debugging Qt applications remotely.

see [1] for further details

[1] http://qt-project.org/doc/qtcreator-2.6/creator-debugging-helpers.html#debugging-helpers-based-on-python

(From OE-Core rev: 440440363dded1d1549dc94a3eaccfcbb3cf517d)

(From OE-Core rev: 97567bfc23f925d9644b776cc885f56aa7ff983f)

Signed-off-by: Andreas Müller <schnitzeltony@googlemail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:29 +00:00
Martin Jansa
a291eb108b systemd: don't move libgudev around, it breaks libgudev-1.0.la
* libgudev-1.0.la still references /usr/lib and this change was breaking gypsy (detected in navit) and
  network-manager-applet

(From OE-Core rev: 7807d1d8b9535a87ba3e5ab7df21a2954708333f)

(From OE-Core rev: 35b72a6d7698c0b89efca2fc64dd473ee684743b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:29 +00:00
Hongxu Jia
d0c969eeab multilib.bbclass/package_manager.py: fix <multilib>-meta-toolchain build failure
There is a failure to build lib32-meta-toolchain:
...
|ERROR: lib32-packagegroup-core-standalone-sdk-target not found in the base
feeds (qemux86_64 x86 noarch any all).
...

In package_manager.py, the variable 'DEFAULTTUNE_virtclass-multilib-lib32'
is used to process multilib image/toolchain. But for the build of lib32-
meta-toolchain, the value of 'DEFAULTTUNE_virtclass-multilib-lib32' is
deleted. In 'bitbake lib32-meta-toolchain -e', we got:
...
|# $DEFAULTTUNE_virtclass-multilib-lib32 [2 operations]
|#   set? /home/jiahongxu/yocto/build-20141010-yocto/conf/local.conf:237
|#     "x86"
|#   del data_smart.py:406 [finalize]
|#     ""
|# pre-expansion value:
|#   "None"
...

The commit 899d45b90061eb3cf3e71029072eee42cd80930c in oe-core deleted
it at DataSmart.finalize
...
Author: Richard Purdie <richard.purdie@linuxfoundation.org>
Date:   Tue May 31 23:52:50 2011 +0100

    bitbake/data_smart: Change overrides behaviour to remove
       expanded variables from the datastore
...

We add an internal variable 'DEFAULTTUNE_ML_<multilib>' and assign it the
value of 'DEFAULTTUNE_virtclass-multilib-lib32' before the deletion.

For the rpm backend in package_manager.py, we use DEFAULTTUNE_virtclass-multilib
-lib32 first; if it is not available, we fall back to DEFAULTTUNE_ML_<multilib>.

[YOCTO #6842]

(From OE-Core rev: 9c59d3d8b538d3a98ff4b5e5b189a4a23a85da2d)

(From OE-Core rev: e5fcc237807d064578028ecf8af51d82c5a66c18)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:28 +00:00
Hongxu Jia
6af63cc898 opkg: fix remove pkg with --force-removal-of-dependent-packages failed
opkg remove perl --force-removal-of-dependent-packages
...
Removing package perl-module-extutils-mm-dos from root...
...
Removing package perl-module-extutils-mm-dos from root...
You can force removal of packages with failed prerm scripts with the option:
	--force-remove
No packages removed.
Collected errors:
 * pkg_run_script: Internal error: perl-module-extutils-mm-dos has a
NULL tmp_unpack_dir.
 * opkg_remove_pkg: not removing package "perl-module-extutils-mm-dos",
prerm script failed
...

When removing a package with '--force-removal-of-dependent-packages',
the package may be added to the remove list multiple times. Add a status
check to make sure each package is only removed once.

[YOCTO #6819]

(From OE-Core rev: 476f864b1564265469b5c9074c1f262bce21f119)

(From OE-Core rev: 4e2da43842c6bbf5abf7ae9c6601bf7a6f1114da)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:28 +00:00
Yuanjie Huang
5211fb73f0 mtd-utils: Fix alignment trap triggered by NEON instructions
The NEON instruction VLD1.64 was used to copy 64-bit data after type
casting, which can trigger an alignment trap.
This patch uses memcpy to avoid the alignment problem.

(From OE-Core rev: a31080021ad3ecfb92220dcb8c717928db268f1e)

(From OE-Core rev: bb3606e8312bf339bb888cd5b0bc7e6190e971f7)

Signed-off-by: Yuanjie Huang <Yuanjie.Huang@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:28 +00:00
Roy Li
c8279678d4 python3: do not replace ccache in the middle of a path
The Python recipe did a sed s/ccache/$(CCACHE)/ on the Makefile, which
replaces every "ccache", including ones that are part of a full path.
This leads to a build error when building in a project path with
"ccache" in its name. Fix it by only replacing "ccache " with
"$(CCACHE) ".

Same fix on python 2.xx is:
1181112cf65bc[python: do not replace ccache in the ]

(From OE-Core rev: 9f2398a0ff42389052155d971f136a37c5dc80da)

(From OE-Core rev: 7e4e2301d95f897e2f91b1c37b56dbd190841acb)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:27 +00:00
Hongxu Jia
21b15bc6cd multilib.bbclass: fix incorrect TARGET_VENDOR in multilib image
While building multilib extended images such as libXX-core-image-minimal,
the WORKDIR uses the same base directory as the build of core-image-minimal:

$ ls tmp/work/qemux86_64-poky-linux/ -al
...
drwxrwxr-x  3 jiahongxu jiahongxu 4096 Oct 13 16:01 core-image-minimal
drwxrwxr-x  3 jiahongxu jiahongxu 4096 Oct 16 11:11 lib32-core-image-minimal
...

When the image class is inherited, OVERRIDES is not assigned
'virtclass-multilib-libXXX', so the TARGET_VENDOR variable is not
overridden for multilib in that situation.

Following what is done for PN and MLPREFIX, manually apply the multilib
override to TARGET_VENDOR in the RecipePreFinalise handler.

[YOCTO #6844]

(From OE-Core rev: 7ca012fb3addb11ba3f899efa0619ddd8d3c6946)

(From OE-Core rev: 733ae9a73704fdb1211a4e35a20f2d6337a16709)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:27 +00:00
Hongxu Jia
de6e6a5a62 classes/image: remove obsolete MULTILIB_VENDORS
oe-core commit 03c5f39b4d7dd8c81e0a130b7d5884e5af039a24
removed obsolete code related to the MULTILIB_VENDORS variable.

Clean up the remaining obsolete code related to
MULTILIB_VENDORS.

(From OE-Core rev: 43a1c2dc08b4291e042b6c9ef981bd094ea2c477)

(From OE-Core rev: 18be5e2400fb2ca1a46ea504967f3c3522af4fdc)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:26 +00:00
Roy Li
5e7218e8b0 elfutils: fix elf_cvt_gnuhash
'dest' and 'src' can be the same, so we need to save the value of src32[2]
before swapping it.

(From OE-Core rev: b7936bacf0cc89bdda6722d317274bd4a3af840a)

(From OE-Core rev: 8a2f0192652b96675b6f5484f7548d4e4106db31)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:26 +00:00
Jackie Huang
bc6651cb31 which-2.18: Use foreign strictness to avoid automake errors
Fixed:
Makefile.am: error: required file './ChangeLog' not found

(From OE-Core rev: c84bfa0f519e0bb74aed833a6318c21d91fce377)

(From OE-Core rev: 21bffc855ed000d8419badb406343b6410c424b9)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:26 +00:00
Pascal Bach
a16aa96a08 image.py: Fix error in graph sorting
The graph sorting algorithm for image dependencies looks for an
occurrence of the searched string instead of comparing each chunk to the
searched string. This leads to the problem that ubifs is recognized as ubi
as well.

This is fixed by splitting up the string into chunks.

(From OE-Core rev: cec9725c540c2d54c27092e40d159694cea75b5f)

(From OE-Core rev: 6fbe9615bd6667b5634fd471e25412fe627acb09)

Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:25 +00:00
Khem Raj
59c7cb37bc mklibs: Fix loader for mipsel
Additionally, treat ld.so as a file to be searched for in the sysroot.

Change-Id: I8b4acb821d9855a1163c7149bc8e369c7c438856
(From OE-Core rev: 4cf539e67333ba2c3fe924b092e104da53e68ca0)

(From OE-Core rev: 2c327f75c293a68c39b46d72a27248d72ac80996)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:25 +00:00
Khem Raj
1eceece8f6 glibc: Delete ldconfig when USE_LDCONFIG is not set
This avoids the QA error/warning below:
/sbin/ldconfig [installed-vs-shipped]
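
A sketch of the kind of install-time removal involved (the recipe logic
shown is an assumption, not quoted from the patch):

    do_install_append () {
        # ship no ldconfig when the feature is disabled
        if [ "${USE_LDCONFIG}" != "1" ]; then
            rm -f ${D}${base_sbindir}/ldconfig
        fi
    }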

Change-Id: I028b692eefeaa6e0e0e6507ab4108caa29e41e91
(From OE-Core rev: 2b499db19cd9bd14292457716b50dc62ed90515d)

(From OE-Core rev: 267dc0429e8da7cc292034e1a5ab3eae7786db4e)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:25 +00:00
Richard Purdie
0d9dd1d3da rm_work: Speed up rootfs/populate_sdk removal
Commands like bitbake X -c rootfs or bitbake X -c populate_sdk do not
trigger rm_work to clean up the directories afterwards since it
traditionally hooks onto do_build. This change means those two tasks now
clean up after themselves. We use the cleandirs function attribute to
handle this.
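
A sketch of the mechanism (the task and directory shown are illustrative):

    # bitbake empties the directories listed in [cleandirs] before the
    # task runs, so stale output does not accumulate between builds
    do_rootfs[cleandirs] += "${WORKDIR}/rootfs"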

[YOCTO #6413]

(From OE-Core rev: 6bf06d80c2ce03dfdedac5ad8cf42ef8e36b0ecb)

(From OE-Core rev: 38b1f9d8e4fa9afb8644e4be55191fbe5cfd99a1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:24 +00:00
Maxin B. John
f09b49dd64 python: fix ssl import error
Fix this ssl import error:
Python 2.7.3 (default, Dec  5 2014, 16:24:17)
[GCC 4.9.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/ssl.py", line 92, in <module>
    import base64        # for DER-to-PEM translation
ImportError: No module named base64

(From OE-Core rev: dfa34e70a4c7543dc67835c2e9a270ccd011ac72)

(From OE-Core rev: 2defde75799c669d531fddee005758ec13884aab)

Signed-off-by: Maxin B. John <maxin.john@enea.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:24 +00:00
Bian Naimeng
b9304ab75c cpio: fix bug CVE-2014-9112 for cpio-2.11
Details obtained from the following URLs:
  http://lists.gnu.org/archive/html/bug-cpio/2014-12/msg00000.html
  http://git.savannah.gnu.org/cgit/cpio.git/commit/?id=746f3ff670dcfcdd28fcc990e79cd6fccc7ae48d

(From OE-Core rev: 9a32da05f5a9bc62c592fd2d6057dc052e363261)

(From OE-Core rev: f5c196fdde79402119ae1893c6150b4bfbc137a1)

Signed-off-by: Bian Naimeng <biannm@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:23 +00:00
Bian Naimeng
28c4a4976d cpio: fix bug CVE-2014-9112 for cpio-2.8
Details obtained from the following URLs:
http://lists.gnu.org/archive/html/bug-cpio/2014-12/msg00000.html
http://git.savannah.gnu.org/cgit/cpio.git/commit/?id=746f3ff670dcfcdd28fcc990e79cd6fccc7ae48d

(From OE-Core rev: 732fc8de55a9c7987608162879959c03423de907)

(From OE-Core rev: 695d14dc92d7de89ae02dac0928f184519b8b57d)

Signed-off-by: Bian Naimeng <biannm@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:23 +00:00
He Zhe
a6f13fe42f kernel.bbclass: Create modules directory even if there is no modules installed
During kernel_do_install a symlink needs to be created at
${D}/lib/modules/${KERNEL_VERSION}/build, but ${D}/lib/modules/${KERNEL_VERSION}
will not exist if no modules are installed for the current image, which
results in a build failure.
Add "mkdir -p ${D}/lib/modules/${KERNEL_VERSION}" here to avoid this failure
and the need for similar changes in other scripts that also expect it to exist.

(From OE-Core rev: f2f72f8ff623d24fffbb1b0ad40bc08f05ff31dd)

(From OE-Core rev: a3dae5c091017827a293affbb8ade179a23efd6d)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:23 +00:00
Roy Li
e8404413fe gst-ffmpeg: fixes for CVE-2014-8548 and CVE-2014-8541
Issue: LIN7-1755
Issue: LIN7-1739

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-8541

libavcodec/mjpegdec.c in FFmpeg before 2.4.2 considers only dimension
differences, and not bits-per-pixel differences, when determining whether an
image size has changed, which allows remote attackers to cause a denial of
service (out-of-bounds access) or possibly have unspecified other impact via
crafted MJPEG data.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-8548

Off-by-one error in libavcodec/smc.c in FFmpeg before 2.4.2 allows remote
attackers to cause a denial of service (out-of-bounds access) or possibly
have unspecified other impact via crafted Quicktime Graphics (aka SMC) video
data.

(From OE-Core rev: 4bd50c5a967af2b8f0fe77b8f9c100169e4fc531)

(From OE-Core rev: fad70ea3495329a39329532f59de3b14c22c2d15)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:22 +00:00
Tom Zanussi
d6cbbee29c wic: Use overhead factor when creating partitions from rootfs directories
When creating partitions sized to given rootfs directories, filesystem
creation could fail in cases where the calculated target partition
size was too small to contain the filesystem created using mkfs.  This
occurred in particular when creating partitions to contain very large
filesystems such as those containing sdk image artifacts.

This same limitation is present in the oe-core image creation classes,
which can be readily seen by changing IMAGE_OVERHEAD_FACTOR from the
default 1.3 to 1.0 and building a sato-sdk image.

It should be possible to calculate required sizes exactly given the
source rootfs and target filesystem types, but for now, to address the
specific problem users are hitting in such situations, we'll just do
exactly what oe-core does and define and use an IMAGE_OVERHEAD_FACTOR
of 1.3 in those cases.
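
For illustration, the sizing rule amounts to the following (a sketch, not
code from the patch):

    # partition size = actual rootfs size * IMAGE_OVERHEAD_FACTOR
    IMAGE_OVERHEAD_FACTOR = "1.3"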

Fixes [YOCTO #6863].

(From OE-Core rev: bbaef3ff5833fc1d97b7b028d7770834f62789da)

(From OE-Core rev: c376804d451a200bf697d3f34e68d58726f5233c)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:18:22 +00:00
Martin Hundebøll
d6e0ea59b2 bitbake: progressbar: use '/usr/bin/env' in shebangs with python
To support Yocto on systems with python3 as the default version, scripts
should use /usr/bin/env python in the shebang, as this allows the use of
a fake env to mimic python2 as default version.

This patch simply replaces occurrences of #!/usr/bin/python with
 #!/usr/bin/env python and was done with this oneliner:

     git grep -lE '^#!/usr/bin/python' | xargs \
         sed -i 's|/usr/bin/python|/usr/bin/env python|'

(Bitbake rev: 0f9823adb7832c4ca3b2985391473aa6e8c22148)

Signed-off-by: Martin Hundebøll <martin@hundeboll.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:23 +00:00
Robert Yang
be22ea0314 bitbake: bitbake-worker: exit normally when SIGHUP
Fixed:
1) Run "bitbake recipe" in the terminal
2) Close the terminal while building
3) $ ps aux | grep bitbake-worker
There will be many processes; they keep holding resources (e.g.,
memory) and won't exit unless killed with kill or kill -9.

(Bitbake rev: 72536d4e0cc3379001b730950afa012f7a96a79b)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:22 +00:00
Paul Eggleton
3fc8d29953 bitbake: event: fix resetting class handlers object
If you don't explicitly specify to use a global variable when doing an
assignment, you will be setting a local variable instead, which means
this function wasn't working at all. It explains some odd behaviour we
have seen in the layer index where event handlers were sometimes
bleeding into other contexts where they should not have been.

(Bitbake rev: f12c738d3dc1f0fd105d457385511440024bffab)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:22 +00:00
Richard Purdie
becb32bb30 bitbake: data: Handle BASH_FUNC shellshock implication
The shellshock patches changed the way bash functions are exported.
Unfortunately different distros used slightly different formats,
Fedora went with BASH_FUNC_XXX()=() { echo foo; } and Ubuntu went with
BASH_FUNC_foo%%=() {  echo foo; }.

The former causes errors when dealing with the output from emit_env;
the functions are not exported in either case any more.

This patch handles things so the functions work as expected in either
case.

[YOCTO #6880]

(Bitbake rev: 4d4baf20487271aa83bd9f1a778e4ea9af6f6681)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:22 +00:00
Richard Purdie
fa34c42d19 bitbake: runqueue: Fix 100% cpu use after keyboard interrupt
After Ctrl+C is pressed to interrupt bitbake, it loops continually, running
at 100% cpu. This patch selects on the correct file descriptors resolving
the excess cpu usage.

(Bitbake rev: dca5d82830ef2838439e5272da9dac1f28954cf1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:21 +00:00
Michael Wood
942e35d651 bitbake: buildinfohelper: Make sure we use the orm defined value for loglevel
We need to consistently use LogMessage.INFO/WARNING/ERROR to make sure toaster knows
how to categorize these, rather than passing in the "raw" loglevel value,
which in the best case comes from python logging but in the worst case can be any value.

[YOCTO #6885]

(Bitbake rev: 926235aad806232bc73e33d6dd8955dd26562e6b)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:21 +00:00
Richard Purdie
c35cecebc6 bitbake: prserv: Use WAL mode
Ideally, we want the PR service to have minimal influence from
queued disk IO. sqlite tends to be paranoid about data loss and
locks/fsync calls. There is a "WAL mode" which changes the journalling
mechanism and would appear much better suited to our use case.

This patch therefore switches the database to use WAL mode. With this
change, write overhead appears significantly reduced.
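
The switch amounts to enabling this pragma on the database connection (the
filename below is illustrative):

    $ sqlite3 prserv.sqlite3 'PRAGMA journal_mode=WAL;'
    wal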

(Bitbake rev: 90b05e79764b684b20ce8454e89f05763b02ac97)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:20 +00:00
Richard Purdie
c72d8913b3 bitbake: prserv/serv: Ensure sync happens in the correct thread
The sync/commit calls are happening in the submission thread which can
race against the handler. The handler may start new transactions which
then causes the submission thread to error with "cannot start a
transaction within a transaction".

The fix is to move the calls to the correct thread.

(Bitbake rev: 08cf468ab751f4c6e4ffdab2d8e5d748f7698593)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:20 +00:00
Ben Shelton
e3743bbe94 bitbake: prserv: don't wait until exit to sync
In the commit 'prserv: Ensure data is committed', the PR server moved to
only committing transactions to the database when the PR server is
stopped.  This improves performance, but it means that if the machine
running the PR server loses power unexpectedly or if the PR server
process gets SIGKILL, the uncommitted package revision data is lost.

To fix this issue, sync the database periodically, once per 30 seconds
by default, if it has been marked as dirty.  To be safe, continue to
sync the database at exit regardless of its status.

(Bitbake rev: 973ac2cc63323ca9c3e916effa4765747db3564c)

Signed-off-by: Ben Shelton <ben.shelton@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-31 10:10:20 +00:00
Ross Burton
02627ad3d9 buildtools-tarball: package all of Python
Instead of cherry-picking pieces of Python to put into the buildtools tarball,
ship all of it.  We can't predict what bits of Python will be needed in the
future.

(From OE-Core rev: 1cf1edcd28a002291622d04dd2d0ee2c67e329e4)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-11 16:34:17 +00:00
Robert P. J. Day
ecf1e3d1b1 bitbake: bitbake-user-manual-metadata.xml: Updated do_package_write example
Given that the "do_package_write" task doesn't exist in OE anymore,
steal another, existing example to demonstrate the "rdeptask" flag.

(Bitbake rev: d412d3680f78eebe0517e4f933d853b8973df711)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:25:40 +00:00
Scott Rifenbark
2310ca25ed bitbake: bitbake-user-manual-metadata.xml: Added [eventmask] flag information.
Reported-by: Laszlo Papp <lpapp@kde.org>
(Bitbake rev: 1c7788f5c9b4f600063908fe93bfc4e5dfb3960f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:25:40 +00:00
Scott Rifenbark
0185dcd883 bitbake: bitbake-user-manual: Updated copyright to 2015.
(Bitbake rev: c2f68465dd97a8be0795384f971a3f8d05369416)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:14:50 +00:00
Scott Rifenbark
b9e61a3203 mega-manual.sed: Updated strings to support a 1.7.1 release.
This processes the links in the mega-manual.html file such that
they remain inside the manual and do not go outside to individual
manuals.

(From yocto-docs rev: 29a30b9ace435ad0c6260e026033ac1a86314d73)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:01 +00:00
Scott Rifenbark
e111bb329c poky.ent: Updated various variables to support the 1.7.1 release.
I hit all the variables needed to reflect all combinations of
1.7.1.  Additionally, incremented the copyright top-end year from
2014 to 2015 since this is a January 2015 release.

(From yocto-docs rev: 25c9a6c0a7113f67ec40307d567ac5a16f3db85b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:01 +00:00
Scott Rifenbark
28f6830a49 documentation: Updated manual tables for a 1.7.1 date.
Using January of 2015

(From yocto-docs rev: 0ff05cf9735a8e93a320b97800a4958a3fff9866)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:01 +00:00
Scott Rifenbark
c292340b7a dev-manual: Added link to ptest wiki page into Ptest section.
(From yocto-docs rev: 8ee7d8073056dfacc3afcce1eec8c79abd07881f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Scott Rifenbark
118cf7bc86 bsp-guide: Fixed ambiguous sentence.
In the example that creates a new BSP layer by using the yocto-bsp
script, the final step 6 could be interpreted as the script
creating the new layer in "poky".  Even though the sentence is
technically correct, sloppy reading could misinterpret it.  I updated
the sentence so that nobody will be confused.

(From yocto-docs rev: b0d8703ed938152e7bbc61cc1308f75ed5af4a20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Scott Rifenbark
4d1745feb5 profile-manual: Updates to the LTTng Documentation section.
The LTTng Documentation website has been updated to actually
have extensive documentation now.  Previously, in the profile-manual,
we were stating that documentation did not exist, which was true
at the time of writing.  I updated the section to link to the
main LTTng documentation website and altered some other text in
the section appropriately.

Additionally, I found and corrected a couple spelling errors in
this chapter.

(From yocto-docs rev: aa6712376cdf958683d70acfba632a686617ed63)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Scott Rifenbark
d6751f2293 dev-manual: Fixed broken link to the allarch class.
(From yocto-docs rev: ec4ec548840ef863403115ebb3271362a91f5b04)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Scott Rifenbark
880e8b26ed poky.ent: Updated the YOCTO_RELEASE_NOTES variable to new form.
This variable now needs to have the form
"&YOCTO_HOME_URL;/downloads/core/&DISTRO_NAME;&DISTRO_COMPRESSED;"
The old form was causing the release team to have to hand-redirect
the three links in the YP manuals that resolve to the release notes.

(From yocto-docs rev: 55d500cbc8cf98c51416efdcdd8a2384f4ec1ea3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Scott Rifenbark
8d5259d953 poky.ent: post release fix of the POKYVERSION_COMPRESSED variable.
Missed this one and it is used to resolve the YOCTO_RELEASE_NOTES
URL in the dev-manual and the ref-manual.  The value was left at
"1100" when it should have been "1200".  I changed it post-release.

This means that the tarball is bad but the HTML versions published on
the server are correct for dizzy.

(From yocto-docs rev: dc7918d39271691fb2ce5441fba162a783814983)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 22:11:00 +00:00
Saul Wold
9e8bb32215 babeltrace: Backport fix for unaligned integer
[YOCTO #6464]

(From OE-Core rev: 7c04085a0b5f978d7fd07f83b0799abbeb3b7052)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:47 +00:00
Andrei Gherzan
aa8bfdfa22 xkeyboard-config: Inherit gettext
In a GPLv3-free build we have two different versions of gettext in sysroot due
to GPLv3 restrictions. In this case we need gettext-native too so we can have
the needed macros and avoid errors like:
"error: possibly undefined macro: AM_GNU_GETTEXT"

The needed dependency is added by the gettext class, which is preferred because it
takes care of NLS flags too.

(From OE-Core rev: 23d8a4d64e9ff126d6460a69e6d086b1c86e87a9)

(From OE-Core rev: 1975981e7777748c2b45b16e47ec704a9c37b56b)

Signed-off-by: Andrei Gherzan <andrei.gherzan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:47 +00:00
Aníbal Limón
5c69d24f56 package_manager: DpkgPM fix populate_sdk
In DpkgPM, change the all_arch_list variable to be set from the passed-in
archs variable instead of PACKAGE_ARCHS, because the value differs when
executed from rootfs.py and from sdk.py.

Credits to: Ricardo Ribalda <ricardo.ribalda@gmail.com>

(From OE-Core rev: f6fb8c16f49fd9a2b124ad55f5c4fed82d7e6dca)

(From OE-Core rev: d9612ac36d59eb9e800f06339965d66f27c66ae0)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:47 +00:00
Wenzong Fan
b70ef7b95a python: Fix CVE-2014-7185
Integer overflow in bufferobject.c in Python before 2.7.8 allows
context-dependent attackers to obtain sensitive information from
process memory via a large size and offset in a "buffer" function.

This back-ported patch fixes CVE-2014-7185
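
For reference, the vulnerable call pattern looks roughly like the
following (Python 2 only; the buffer() builtin no longer exists in
Python 3 -- an illustrative sketch, not the reproducer from the fix):

    import sys
    s = "A" * 10
    # On unpatched interpreters, a huge size and offset overflow the
    # internal length computation in bufferobject.c, so slicing the
    # resulting buffer can read out-of-bounds process memory.
    b = buffer(s, sys.maxsize, sys.maxsize)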

(From OE-Core rev: 49ceed974e39ab8ac4be410e5caa5e1ef7a646d9)

(From OE-Core rev: 3dd696e03e66fa98b58a17b7f34ffe4002ddc9c6)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>

Conflicts:
	meta/recipes-devtools/python/python_2.7.3.bb

hand merged bb file since I did not take previous patch.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:47 +00:00
Javier Viguera
bd00bc3d0d shadow-securetty: add ttyAM[0-3] serial ports
Old versions of the ARM AMBA serial port driver create those device nodes.

(From OE-Core rev: fa17b9ea435f5c49e3bea56524152b21d915d464)

(From OE-Core rev: 0956df1596f899337afb3551db01a59bf1c38856)

Signed-off-by: Javier Viguera <javier.viguera@digi.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:47 +00:00
Tom Zanussi
e741ebf210 wic: Update bootimg-partition to use bootimg_dir
Update bootimg-partition to use bootimg_dir instead of img_deploy_dir,
to match similar usage in other plugins.

As mentioned elsewhere, plugins should use the passed-in value for
bootimg_dir directly if non-null, which corresponds to a user-assigned
value specified via a -b command-line param, and only fetch the value
from bitbake if that value is null.
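
A simplified sketch of that convention (function and variable names
are assumed for illustration):

    # Sketch: honour a user-supplied -b value, else ask bitbake.
    def resolve_bootimg_dir(bootimg_dir, get_bitbake_var):
        if bootimg_dir:
            # Non-null means the user passed -b on the command line.
            return bootimg_dir
        return get_bitbake_var("DEPLOY_DIR_IMAGE")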

(From OE-Core rev: 3822f8a7b33da56ecd9144b4bcae50734fb1af81)

(From OE-Core rev: f22bd26627595e3719d3b1f9e3d487d5011c9c42)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:46 +00:00
Tom Zanussi
db012b429f wic: Remove special-case bootimg_dir
The first iterations of wic very shortsightedly catered to two
specific use-cases and added special-purpose params for those cases so
that they could be directly given their corresponding boot artifacts
(hdddir and staging_data_dir).

As more use-cases are added, it becomes rather obvious that such a
scheme doesn't scale, and it additionally causes confusion for plugin
writers.

This removes those special cases and states explicitly in the help
text that plugins are responsible for locating their own boot
artifacts.

(From OE-Core rev: 6ba3eb5ff7c47aee6b3419fb3a348a634fe74ac9)

(From OE-Core rev: e7ecb139a215484422652ef35de8282acbf18ed2)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:46 +00:00
Tom Zanussi
90a03e9c9d Revert "wic: set bootimg_dir when using image-name artifacts"
This reverts commit 7ce1dc13f9.

This patch broke the assumption that a non-null boot_dir means a
user-assigned (-b command-line param) value.

Reverting doesn't break anything, since the case it was added for
doesn't use the boot_dir for anything except debugging anyhow.

Fixes [YOCTO #6290]

(From OE-Core rev: db90f10bf31dec8d7d7bb2d3680d50e133662850)

(From OE-Core rev: 36c93423ee272c4d4aafeb50f83734fd4bb3bb29)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:46 +00:00
Tom Zanussi
b466e00cf6 wic: Update the help text to include -D (--debug)
The --debug option is missing from the wic help text; this adds it and
at the same time rearranges the usage into a more logical arrangement.

(From OE-Core rev: cf5144ef241d8f4ccaa3461ae5c9f89c2cf2f8d1)

(From OE-Core rev: e7f18c43f1b368b71acdc507e1a9035179d7e53f)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:46 +00:00
Tom Zanussi
00af5317eb wic: Don't allow mkfs to fail silently in partition command
The return code from the mkfs command used by the partition creation
command was being ignored, allowing it to silently fail and leaving
users mystified as to why the resulting filesystem was corrupted.

This became obvious when failures occurred while creating large
(e.g. SDK) filesystems [YOCTO #6863].
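
In outline, the fix amounts to checking the mkfs exit status instead
of discarding it (a sketch, not the actual wic code):

    import subprocess

    def run_mkfs(args):
        # Fail loudly when mkfs returns non-zero instead of silently
        # continuing with a corrupted filesystem.
        ret = subprocess.call(["mkfs.ext4"] + args)
        if ret != 0:
            raise RuntimeError("mkfs.ext4 failed with exit code %d" % ret)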

(From OE-Core rev: 8cef3b06f7e9f9d922673f430ddb3170d2fac000)

(From OE-Core rev: ac7b2eb0a35613d030eeef0b8df0d69ae0935b43)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-26 17:05:45 +00:00
Chong Lu
db7f4f31c9 nss: CVE-2014-1568
The patch comes from:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-1568
https://bugzilla.mozilla.org/show_bug.cgi?id=1064636
nss hg log:
=====
changeset:   11252:ad411fb64046
user:        Kai Engert <kaie@kuix.de>
date:        Tue Sep 23 19:28:34 2014 +0200
summary:     Fix bug 1064636, patch part 2, r=rrelyea
=====
changeset:   11253:4e90910ad2f9
user:        Kai Engert <kaie@kuix.de>
date:        Tue Sep 23 19:28:45 2014 +0200
summary:     Fix bug 1064636, patch part 3, r=rrelyea
=====
changeset:   11254:fb7208e91ae8
user:        Kai Engert <kaie@kuix.de>
date:        Tue Sep 23 19:28:52 2014 +0200
summary:     Fix bug 1064636, patch part 1, r=rrelyea
=====
changeset:   11255:8dd6c6ac977d
user:        Kai Engert <kaie@kuix.de>
date:        Tue Sep 23 19:39:40 2014 +0200
summary:     Bug 1064636, follow up commit to fix Windows build bustage

(From OE-Core rev: 0ed9070619f959b802dcc4ee8399d252d0349583)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-24 16:24:55 +00:00
Richard Purdie
33e95afc83 curl: Fixup line ending merge issues
Somehow the patch line endings got messed up during merge. This restores
the delta.

(From OE-Core rev: 5dee4e241d64e6144d74967cca583d249689773a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-24 16:24:55 +00:00
Wenzong Fan
9bfb78bff6 serf: uprev to 1.3.7 for fixing CVE-2014-3504
The (1) serf_ssl_cert_issuer, (2) serf_ssl_cert_subject, and (3)
serf_ssl_cert_certificate functions in Serf 0.2.0 through 1.3.x before
1.3.7 do not properly handle a NUL byte in a domain name in the
subject's Common Name (CN) field of an X.509 certificate, which allows
man-in-the-middle attackers to spoof arbitrary SSL servers via a
crafted certificate issued by a legitimate Certification Authority.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3504
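
The underlying issue is the classic embedded-NUL trick: handled as a
C string, the CN comparison stops at the NUL byte. A small Python
illustration (not serf code):

    cn = "www.example.com\x00.attacker.com"
    # A C-style comparison sees only the part before the NUL byte:
    c_view = cn.split("\x00", 1)[0]
    assert c_view == "www.example.com"   # looks legitimate
    assert cn != "www.example.com"       # full CN is attacker-controlled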

(From OE-Core rev: 832aa4c5a7989636dae3068f508ab2bff8b4ab23)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:50:48 +00:00
Armin Kuster
cccad8c33f tzdata: update to 2014j
(From OE-Core rev: 3ab9dfb703835fee21fd73c4e5cbad1c34c6a163)

(From OE-Core rev: 06ffe5637f23f6036aaf58b40f7f9a721624cd5b)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Armin Kuster
2138890fa6 tzcode: update to 2014j
(From OE-Core rev: 2f8940e8b2a0537f131a6d5410e85bba07a8c116)

(From OE-Core rev: 429077a21c7753dee64ea869a73309903b659f6a)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Chong Lu
19750cac36 curl: Security Advisory - curl - CVE-2014-3620
libcurl wrongly allows cookies to be set for Top Level Domains (TLDs),
thus making them apply more broadly than cookies are allowed to. This can
allow arbitrary sites to set cookies that then would get sent to a
different and unrelated site or domain.
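
Conceptually the bug is a domain tail-match that is too permissive,
along these lines (an illustration, not libcurl's code):

    def naive_domain_match(cookie_domain, host):
        # Accepts bare TLDs such as ".com", which is exactly the bug;
        # a correct check must reject public suffixes.
        return host.endswith(cookie_domain)

    assert naive_domain_match(".com", "victim.com")  # TLD cookie accepted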

(From OE-Core rev: ddbaade8afbc9767583728bfdc220639203d6853)

(From OE-Core rev: db194a3af25a37ff2d6f091ef021894967ca5910)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Chong Lu
5deb78802a curl: Security Advisory - curl - CVE-2014-3613
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled both into sending cookies to the wrong sites and into allowing
arbitrary sites to set cookies for others.

(From OE-Core rev: 985ef933208da1dd1f17645613ce08e6ad27e2c1)

(From OE-Core rev: 7c4dfa64fd88066f2e0fbc917d8660f5b35e00c4)

Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Yue Tao
ffdef91586 subversion: Security Advisory - subversion - CVE-2014-3528
Apache Subversion 1.0.0 through 1.7.x before 1.7.17 and 1.8.x before
1.8.10 uses an MD5 hash of the URL and authentication realm to store
cached credentials, which makes it easier for remote servers to obtain
the credentials via a crafted authentication realm.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3528
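
The weakness is that the cache key is derived from attacker-influenced
input: a hostile server can present a realm chosen to collide with a
key the client already cached for another server. Roughly (illustration
only, not Subversion's code):

    import hashlib

    def credential_cache_key(realm):
        # Cached credentials are keyed on an MD5 of the URL/realm string,
        # so a server that echoes another server's realm targets its key.
        return hashlib.md5(realm.encode("utf-8")).hexdigest()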

(From OE-Core rev: e0dc0432b13f38d16f642bdadf8ebc78b7a74806)

(From OE-Core rev: 4ff3355e4daf841c66fb78e88bf2d6e26d8f9ced)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Yue Tao
09430c66b3 subversion: Security Advisory - subversion - CVE-2014-3522
The Serf RA layer in Apache Subversion 1.4.0 through 1.7.x before 1.7.18
and 1.8.x before 1.8.10 does not properly handle wildcards in the Common
Name (CN) or subjectAltName field of the X.509 certificate, which allows
man-in-the-middle attackers to spoof servers via a crafted certificate.
CWE-297: Improper Validation of Certificate with Host Mismatch,
http://cwe.mitre.org/data/definitions/297.html

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3522

(From OE-Core rev: 06a33cd00ea11abec1ebe9d5883e44778075ccc6)

(From OE-Core rev: 529ce75be949944a6e54151cd4233703e40c6351)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:37 +00:00
Richard Purdie
929d04b404 bitbake: siggen: Fix shared work checksum mismatch/rebuild issues
Similar to the last shared work task signature bug, we've found another
one. Looking at the improved output of diffsigs in this case:

runtaskdeps changed from [
'autoconf_2.69.bb.do_populate_sysroot:virtual:native',
'gnu-config_20120814.bb.do_populate_sysroot:virtual:native',
'libgcc-initial_4.9.bb.do_patch:virtual:nativesdk'
] to [
'autoconf_2.69.bb.do_populate_sysroot:virtual:native',
'gcc-crosssdk-initial_4.9.bb.do_patch',
'gnu-config_20120814.bb.do_populate_sysroot:virtual:native'
]

so we can get a different task hash, since gcc sorts before gnu-config
and libgcc sorts after it. We could do with a way of fixing this; the
best I can come up with is to include a single parent directory. Since
recipes are never at the top of any metadata trees I've seen, this
should suffice for now.

I'm planning to burn the concept of shared work within bitbake
and do something at the metadata level in the 1.8 timeframe, as it's
just too fragile as things stand and hard to fix well.
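
The instability reduces to the dependency names themselves sorting
differently once a recipe is renamed; a toy Python illustration using
the names from the diffsigs output above:

    deps_old = sorted([
        "autoconf_2.69.bb.do_populate_sysroot:virtual:native",
        "gnu-config_20120814.bb.do_populate_sysroot:virtual:native",
        "libgcc-initial_4.9.bb.do_patch:virtual:nativesdk",
    ])
    deps_new = sorted([
        "autoconf_2.69.bb.do_populate_sysroot:virtual:native",
        "gcc-crosssdk-initial_4.9.bb.do_patch",
        "gnu-config_20120814.bb.do_populate_sysroot:virtual:native",
    ])
    # "gcc-crosssdk-initial..." sorts before "gnu-config..." while
    # "libgcc-initial..." sorts after it, so the ordered runtaskdeps
    # list -- and therefore the task hash -- changes.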

(Bitbake rev: fc7ebf3835a206a5daafd4e1b73bac2549714ad3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-20 17:24:48 +00:00
Stefan Müller-Klieser
081fddd3e4 bitbake: data_smart.py: fix variable splitting at _remove mechanism
If we split variables only at space characters, a slipped-in tab will
render a value unremovable.
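
A one-line Python illustration of the difference, assuming the removal
code tokenised with split(' '):

    value = "foo\tbar baz"
    value.split(' ')   # ['foo\tbar', 'baz'] -- "foo" can never be removed
    value.split()      # ['foo', 'bar', 'baz'] -- splits on any whitespace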

(Bitbake rev: 0da22ba3e930fbb060b31fc423fd3333ca8843a0)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-19 10:46:41 +00:00
Mark Hatle
9fcd5826d9 meta-environment: Fix config-site with a multilib config
[YOCTO #6951]

The TOOLCHAIN_CONFIGSITE_SYSROOTCACHE value was defaulting to the nativesdk
path and not the associated target path.  Set the value in toolchain-scripts
to the target path.

Be sure to set the MLPREFIX within the meta-environment script as multilibs
are processed.

Update the config_site file name to use BPN, not PN.  Otherwise the
environment processing can't find the correct filename.
(From OE-Core rev: 26a2f98155a867a71217e52d33f761dcc60800ca)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-13 15:43:38 +00:00
Saul Wold
df87cb27ef readline: Patch for readline multikey dispatch issue
(From OE-Core rev: 4fc3553cfecb42c124b7cfff8e0d20ade14a3ffc)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-07 14:36:33 +00:00
Saul Wold
2eb659d765 wget: Fix for CVE-2014-4887
(From OE-Core rev: 6815a99d6735a39f4af09726d4f514ac27801406)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-07 14:36:33 +00:00
Jackie Huang
f3a177cf04 license.bbclass: canonicalise the licenses named with 'X+'
If INCOMPATIBLE_LICENSE=GPLv3, GPLv3+ should be excluded as well, but
currently it is not, since there is no SPDXLICENSEMAP for licenses
named with 'X+'. We could add SPDXLICENSEMAP settings for all the 'X+'
licenses in licenses.conf, but that would largely be duplication, so
instead improve the canonical_license function to map 'X+'
automatically whenever an SPDXLICENSEMAP for 'X' is available; GPLv3+
thus becomes GPL-3.0+.
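
A simplified sketch of that fallback (illustrative; the real function
lives in license.bbclass):

    def canonical_license(spdx_map, lic):
        if lic in spdx_map:
            return spdx_map[lic]
        # Auto-map "GPLv3+" via the entry for "GPLv3", re-appending "+".
        if lic.endswith("+") and lic[:-1] in spdx_map:
            return spdx_map[lic[:-1]] + "+"
        return lic

    assert canonical_license({"GPLv3": "GPL-3.0"}, "GPLv3+") == "GPL-3.0+"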

(From OE-Core rev: 1d6dab1dbbbfbcb32e58dba3111130157ef2b24f)

(From OE-Core rev: 652008fd9dc909836819e5c6808c63643eff6db6)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-05 12:03:16 +00:00
Ross Burton
58a629a1a0 poky.conf: add Debian 7.7 to SANITY_TESTED_DISTROS
(From meta-yocto rev: 28fde806133c413e40da18beaf94bfd2eb016d57)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-31 10:43:59 +00:00
Otavio Salvador
b9b5aeffa6 nativesdk-cmake: Adjust toolchain paths dynamically
This patch adds a flexible way to configure CMake in SDKs. It adds
a toolchain configuration script which supports sub-scripts for
extensions, for example Qt5.

(From OE-Core rev: 484502e4e062fae1130a60626f39f5512af4c5c8)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-31 10:43:59 +00:00
Dan McGregor
ff5510b3fa systemd: Use ${ROOT_HOME} instead of /root
systemd avoids using nss lookups for the root user, so
naturally it assumes that root's home directory is /root.
In OE that's not the case, and it can lead to long delays when
shutting down due to user shutdown unit failures.

(From OE-Core rev: e0e8a904cd287a23352e5713a93aeab3933e4563)

Signed-off-by: Dan McGregor <dan.mcgregor@usask.ca>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-31 10:43:58 +00:00
Scott Rifenbark
9aff3a4ec0 ref-manual: Updates to the migrating to YP 1.7 section.
Some minor wording changes and a new section added for local.conf
QEMU changes.  Also, reordered some sections.

(From yocto-docs rev: 65207b6afa6df7d82cd3482d61f10b308da6fac7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-28 22:31:06 +00:00
Scott Rifenbark
16ddd45421 dev-manual: Updates to "Performing Automated Runtime Testing"
Updated the section to account for some new variables and
several more ways to run tests against expanded targets.  Also
added a power control section.

(From yocto-docs rev: a0f08466c00ae51a99d790fa6c9dccef2e0f1518)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-28 22:31:06 +00:00
Scott Rifenbark
1a6c3a385c ref-manual: Added some new test variables:
* TEST_SERIALCONTROL_CMD
* TEST_SERIALCONTROL_EXTRA_ARGS
* TEST_POWERCONTROL_CMD
* TEST_POWERCONTROL_EXTRA_ARGS

(From yocto-docs rev: 25f196cc03178f07201ef183fb309721d412e971)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-28 22:31:06 +00:00
Scott Rifenbark
e95863cee0 ref-manual: Updated list of supported distros.
Added Debian 7.5 and 7.6 to the list.

(From yocto-docs rev: 35fd5d5399fe1759158aef19d7b6eb68f2a1af12)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-28 22:31:05 +00:00
6133 changed files with 482481 additions and 326255 deletions

.gitignore

@@ -1,20 +1,19 @@
*.pyc
*.pyo
/*.patch
/build*/
build*/
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*/
!meta-skeleton
!meta-selftest
!meta-hob
hob-image-*.bb
*.swp
*.orig
*.rej
*~
!meta-poky
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported
@@ -22,6 +21,3 @@ documentation/user-manual/user-manual.html
documentation/user-manual/user-manual.pdf
documentation/user-manual/user-manual.tgz
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*


@@ -1,2 +1,2 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
TEMPLATECONF=${TEMPLATECONF:-meta-yocto/conf}

README

@@ -30,29 +30,20 @@ For information about OpenEmbedded, see the OpenEmbedded website:
Where to Send Patches
=====================
As Poky is an integration repository (built using a tool called combo-layer),
patches against the various components should be sent to their respective
upstreams:
As Poky is an integration repository, patches against the various components
should be sent to their respective upstreams.
bitbake:
Git repository: http://git.openembedded.org/bitbake/
Mailing list: bitbake-devel@lists.openembedded.org
bitbake-devel@lists.openembedded.org
documentation:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
Mailing list: yocto@yoctoproject.org
meta-yocto:
poky@yoctoproject.org
meta-poky, meta-yocto-bsp:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
Mailing list: poky@yoctoproject.org
Everything else should be sent to the OpenEmbedded Core mailing list. If in
doubt, check the oe-core git repository for the content you intend to modify.
Most everything else should be sent to the OpenEmbedded Core mailing list. If
in doubt, check the oe-core git repository for the content you intend to modify.
Before sending, be sure the patches apply cleanly to the current oe-core git
repository.
openembedded-core@lists.openembedded.org
Git repository: http://git.openembedded.org/openembedded-core/
Mailing list: openembedded-core@lists.openembedded.org
Note: The scripts directory should be treated with extra care as it is a mix of
oe-core and poky-specific files.
Note: The scripts directory should be treated with extra care as it is a mix
of oe-core and poky-specific files.


@@ -77,26 +77,37 @@ variable value corresponding to the device is given in brackets.
===============================
Intel x86 based PCs and devices (genericx86*)
=============================================
Intel x86 based PCs and devices (genericx86)
==========================================
The genericx86 and genericx86-64 MACHINE are tested on the following platforms:
The genericx86 MACHINE is tested on the following platforms:
Intel Xeon/Core i-Series:
+ Intel NUC5 Series - ix-52xx Series SOC (Broadwell)
+ Intel NUC6 Series - ix-62xx Series SOC (Skylake)
+ Intel Shumway Xeon Server
+ Intel Romley Server: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Canoe Pass CRB)
+ Intel Romley Server: Ivy Bridge Xeon processor, C600 PCH (Patsburg), (Intel SDP S2R3)
+ Intel Crystal Forest Server: Sandy Bridge Xeon processor, DH89xx PCH (Cave Creek), (Stargo CRB)
+ Intel Chief River Mobile: Ivy Bridge Mobile processor, QM77 PCH (Panther Point-M), (Emerald Lake II CRB, Sabino Canyon CRB)
+ Intel Huron River Mobile: Sandy Bridge processor, QM67 PCH (Cougar Point), (Emerald Lake CRB, EVOC EC7-1817LNAR board)
+ Intel Calpella Platform: Core i7 processor, QM57 PCH (Ibex Peak-M), (Red Fort CRB, Emerson MATXM CORE-411-B)
+ Intel Nehalem/Westmere-EP Server: Xeon 56xx/55xx processors, 5520 chipset, ICH10R IOH (82801), (Hanlan Creek CRB)
+ Intel Nehalem Workstation: Xeon 56xx/55xx processors, System SC5650SCWS (Greencity CRB)
+ Intel Picket Post Server: Xeon 56xx/55xx processors (Jasper Forest), 3420 chipset (Ibex Peak), (Osage CRB)
+ Intel Storage Platform: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Oak Creek Canyon CRB)
+ Intel Shark Bay Client Platform: Haswell processor, LynxPoint PCH, (Walnut Canyon CRB, Lava Canyon CRB, Basking Ridge CRB, Flathead Creek CRB)
+ Intel Shark Bay Ultrabook Platform: Haswell ULT processor, Lynx Point-LP PCH, (WhiteTip Mountain 1 CRB)
Intel Atom platforms:
+ MinnowBoard MAX - E3825 SOC (Bay Trail)
+ MinnowBoard MAX - Turbot (ADI Engineering) - E3826 SOC (Bay Trail)
- These boards can be either 32bit or 64bit modes depending on firmware
- See minnowboard.org for details
+ Intel Braswell SOC
+ Intel embedded Menlow: Intel Atom Z510/530 CPU, System Controller Hub US15W (Portwell NANO-8044)
+ Intel Luna Pier: Intel Atom N4xx/D5xx series CPU (aka: Pineview-D & -M), 82801HM I/O Hub (ICH8M), (Advantech AIMB-212, Moon Creek CRB)
+ Intel Queens Bay platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Emerson NITX-315, Crown Bay CRB, Minnow Board)
+ Intel Fish River Island platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Kontron KM2M806)
+ Intel Cedar Trail platform: Intel Atom N2000 & D2000 series CPU (aka: Cedarview), NM10 Express Chipset (Norco kit BIS-6630, Cedar Rock CRB)
and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
addition to common PC input devices, busses, and so on.
addition to common PC input devices, busses, and so on. Note that it does not
include the binary-only graphics drivers used on some Atom platforms; for
accelerated graphics on these machines please refer to meta-intel.
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
@@ -127,14 +138,53 @@ USB Device:
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
2. Use a ".wic" image with an EFI partition
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode. To write an image to a USB device that will be bootable in
USB-ZIP mode, carry out the following actions:
a) With a default grub-efi bootloader:
# dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
a. Determine the geometry of your USB device using fdisk:
b) Use systemd-boot instead
- Build an image with EFI_PROVIDER="systemd-boot" then use the above
dd command to write the image to a USB stick.
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
...
Command (m for help): q
b. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 1019 124 62
Where 1019, 124 and 62 are the cylinder, head and sectors/track counts
as reported by fdisk (substitute the values reported for your device).
When the operation has finished and the access LED (if any) on the
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.
c. Copy the contents of the image to the USB-ZIP mode device:
# mkdir /tmp/image
# mkdir /tmp/usbkey
# mount -o loop core-image-minimal-genericx86.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
d. Install the syslinux boot loader:
# syslinux /dev/sdb4
e. Unmount everything:
# umount /tmp/image
# umount /tmp/usbkey
Install the boot device in the target board and configure the BIOS to boot
from it.
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Texas Instruments Beaglebone (beaglebone)
@@ -160,17 +210,59 @@ this, issue the following commands from the u-boot prompt:
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
From a Linux system with access to the image files perform the following steps:
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
1. Build an image. For example:
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
$ bitbake core-image-minimal
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
2. Use the "dd" utility to write the image to the SD card. For example:
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
# dd if=core-image-minimal-beaglebone.wic of=/dev/sdb
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
The following assumes the SD card partitions 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
2. Install the boot loaders
# cp MLO-beaglebone /media/boot/MLO
# cp u-boot-beaglebone.img /media/boot/u-boot.img
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beaglebone.tar.bz2
4. If using core-image-base or core-image-sato images, the SD card is ready
and rootfs already contains the kernel, modules and device tree (DTB)
files necessary to be booted with U-boot's default configuration, so
skip directly to step 8.
For core-image-minimal, proceed through next steps.
5. If using core-image-minimal rootfs, install the modules
# tar x -C /media/root -f modules-beaglebone.tgz
6. If using core-image-minimal rootfs, install the kernel uImage into /boot
directory of rootfs
# cp uImage-beaglebone.bin /media/root/boot/uImage
7. If using core-image-minimal rootfs, also install device tree (DTB) files
into /boot directory of rootfs
# cp uImage-am335x-bone.dtb /media/root/boot/am335x-bone.dtb
# cp uImage-am335x-boneblack.dtb /media/root/boot/am335x-boneblack.dtb
8. Unmount the SD partitions, insert the SD card into the Beaglebone, and
boot the Beaglebone
3. Insert the SD card into the Beaglebone and boot the board.
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
@@ -203,75 +295,6 @@ linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
Note: To boot from USB disk you need u-boot that supports 'ext2load usb'
command. You need to setup TFTP server, load u-boot from there and
flash it to NOR flash.
Beware! Flashing the bootloader is a potentially dangerous operation that
can brick your device if done incorrectly. Please make sure you understand
what the commands below mean before executing them.
Load the new u-boot.bin from TFTP server to memory address 200000
=> tftp 200000 u-boot.bin
Disable flash protection
=> protect off all
Erase the old u-boot from fe000000 to fe06ffff in NOR flash.
The size is 0x70000 (458752 bytes)
=> erase fe000000 fe06ffff
Copy the new u-boot from address 200000 to fe000000;
the size is 0x70000. It has to be greater than or equal to the u-boot.bin size
=> cp.b 200000 fe000000 70000
Enable flash protection again
=> protect on all
Reset the board
=> reset
--- Booting from USB disk ---
1. Flash partitioned image to the USB disk
# dd if=core-image-minimal-mpc8315e-rdb.wic of=/dev/sdb
2. Plug USB disk into the MPC8315 board
3. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
4. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
5. Optional. Load the u-boot.bin from the USB disk:
=> usb start
=> ext2load usb 0:1 200000 u-boot.bin
and flash it to NOR flash as described above.
6. Set fdtaddr and loadaddr. This is not necessary if you set them before.
=> setenv fdtaddr a00000
=> setenv loadaddr 1000000
7. Load the kernel and dtb from first partition of the USB disk:
=> usb start
=> ext2load usb 0:1 $loadaddr uImage
=> ext2load usb 0:1 $fdtaddr dtb
8. Set bootargs and boot up the device
=> setenv bootargs root=/dev/sdb2 rw rootwait console=ttyS0,115200
=> bootm $loadaddr - $fdtaddr
--- Booting from NFS root ---
Load the kernel and dtb (device tree blob), and boot the system as follows:
@@ -329,14 +352,11 @@ Setup instructions
------------------
You will need the following:
* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
port on the device
* Ethernet connected to the first ethernet port on the board
If using NFS as part of the setup process, you will also need:
* NFS root setup on your workstation
* TFTP server installed on your workstation (if fetching the kernel from
TFTP, see below).
* TFTP server installed on your workstation
* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
port on the board
* Ethernet connected to the first ethernet port on the board
--- Preparation ---
@@ -344,7 +364,7 @@ Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
The following instructions are based on core-image-minimal; other targets
are similar.
--- Booting from NFS root / kernel via TFTP ---
--- Booting from NFS root ---
Load the kernel, and boot the system as follows:
@@ -370,25 +390,75 @@ Load the kernel, and boot the system as follows:
=> tftp tftp $loadaddr vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
--- Booting from USB disk ---
--- Booting from USB root ---
To boot from the USB disk, you either need to remove it from the edgerouter
box and populate it from another computer, or use a previously booted NFS
image and populate from the edgerouter itself.
Type 1: Use partitioned image
-----------------------------
Type 1: Mounted USB disk
------------------------
To boot from the USB disk there are two available partitions on the factory
USB storage. The rest of this guide assumes that these partitions are left
intact. If you change the partition scheme, you must update your boot method
appropriately.
The standard partitions are:
- 1: vfat partition containing factory kernels
- 2: ext3 partition for the root filesystem.
You can place the kernel on either partition 1 or partition 2, but the rootfs
must go on partition 2 (due to its size).
Note: If you place the kernel on the ext3 partition, you must re-create the
ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
cannot read the partition otherwise.
Steps:
1. Remove the USB disk from the edgerouter and insert it into a computer
that has access to your build artifacts.
2. Flash the image.
2. Copy the kernel image to the USB storage (assuming discovered as 'sdb' on
the development machine):
# dd if=core-image-minimal-edgerouter.wic of=/dev/sdb
2a) if booting from vfat
# mount /dev/sdb1 /mnt
# cp tmp/deploy/images/edgerouter/vmlinux /mnt
# umount /mnt
3. Insert USB disk into the edgerouter and boot it.
2b) if booting from ext3
# mkfs.ext3 -I 128 /dev/sdb2
# mount /dev/sdb2 /mnt
# mkdir /mnt/boot
# cp tmp/deploy/images/edgerouter/vmlinux /mnt/boot
# umount /mnt
3. Extract the rootfs to the USB storage ext3 partition
# mount /dev/sdb2 /mnt
# tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /mnt
# umount /mnt
4. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
command line:
5. Load the kernel and boot:
5a) vfat boot
=> fatload usb 0:1 $loadaddr vmlinux
5b) ext3 boot
=> ext2load usb 0:2 $loadaddr boot/vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
Type 2: NFS
-----------


@@ -5,15 +5,6 @@ The following external components are distributed with this software:
* The Toaster Simple UI application is based upon the Django project template, the files of which are covered by the BSD license and are copyright (c) Django Software
Foundation and individual contributors.
* Twitter Bootstrap (including Glyphicons), redistributed under the MIT license
* Twitter Bootstrap (including Glyphicons), redistributed under the Apache License 2.0.
* jQuery is redistributed under the MIT license.
* Twitter typeahead.js redistributed under the MIT license. Note that the JS source has one small modification, so the full unminified file is currently included to make it obvious where this is.
* jsrender is redistributed under the MIT license.
* QUnit is redistributed under the MIT license.
* Font Awesome fonts redistributed under the SIL Open Font License 1.1
* simplediff is distributed under the zlib license.


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -23,34 +23,369 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
import optparse
import warnings
from traceback import format_exception
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
__version__ = "1.24.0"
logger = logging.getLogger("BitBake")
__version__ = "1.34.0"
# Python multiprocessing requires /dev/shm
if not os.access('/dev/shm', os.W_OK | os.X_OK):
sys.exit("FATAL: /dev/shm does not exist or is not writable")
# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
def get_ui(config):
if not config.ui:
# modify 'ui' attribute because it is also read by cooker
config.ui = os.environ.get('BITBAKE_UI', 'knotty')
interface = config.ui
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface)
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default]." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self):
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [recipename/target ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", help = "Execute tasks from a specific .bb recipe directly. WARNING: Does not handle any dependencies from other recipes.",
action = "store", dest = "buildfile", default = None)
parser.add_option("-k", "--continue", help = "Continue as much as possible after an error. While the target that failed and anything depending on it cannot be built, as much as possible will be built before stopping.",
action = "store_false", dest = "abort", default = True)
parser.add_option("-a", "--tryaltconfigs", help = "Continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option("-f", "--force", help = "Force the specified targets/task to run (invalidating any existing stamp file).",
action = "store_true", dest = "force", default = False)
parser.add_option("-c", "--cmd", help = "Specify the task to execute. The exact options available depend on the metadata. Some examples might be 'compile' or 'populate_sysroot' or 'listtasks' may give a list of the tasks available.",
action = "store", dest = "cmd")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified task such as 'compile' and then run the default task for the specified target(s).",
action = "store", dest = "invalidate_stamp")
parser.add_option("-r", "--read", help = "Read the specified file before bitbake.conf.",
action = "append", dest = "prefile", default = [])
parser.add_option("-R", "--postread", help = "Read the specified file after bitbake.conf.",
action = "append", dest = "postfile", default = [])
parser.add_option("-v", "--verbose", help = "Output more log message data to the terminal.",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", help = "Don't execute, just go through the motions.",
action = "store_true", dest = "dry_run", default = False)
parser.add_option("-S", "--dump-signatures", help = "Dump out the signature construction information, with no task execution. The SIGNATURE_HANDLER parameter is passed to the handler. Two common values are none and printdiff but the handler may define more/less. none means only dump the signature, printdiff means compare the dumped signature with the cached one.",
action = "append", dest = "dump_signatures", default = [], metavar="SIGNATURE_HANDLER")
parser.add_option("-p", "--parse-only", help = "Quit after parsing the BB recipes.",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", help = "Show current and preferred versions of all recipes.",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", help = "Show the global or per-recipe environment complete with information about where variables were set/changed.",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", help = "Save dependency tree information for the specified targets in the dot syntax.",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option("-P", "--profile", help = "Profile the command and save reports.",
action = "store_true", dest = "profile", default = False)
parser.add_option("-u", "--ui", help = "The user interface to use (e.g. knotty, hob, depexp).",
action = "store", dest = "ui")
parser.add_option("-t", "--servertype", help = "Choose which server to use, process or xmlrpc.",
action = "store", dest = "servertype")
parser.add_option("", "--token", help = "Specify the connection token to be used when connecting to a remote server.",
action = "store", dest = "xmlrpctoken")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not.",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", help = "Run bitbake without a UI, only starting a server (cooker) process.",
action = "store_true", dest = "server_only", default = False)
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to.",
action = "store", dest = "bind", default = False)
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks. sstate will be ignored and everything needed, built.",
action = "store_true", dest = "nosetscene", default = False)
parser.add_option("", "--remote-server", help = "Connect to the specified server.",
action = "store", dest = "remote_server", default = False)
parser.add_option("-m", "--kill-server", help = "Terminate the remote server.",
action = "store_true", dest = "kill_server", default = False)
parser.add_option("", "--observe-only", help = "Connect to a server as an observing-only client.",
action = "store_true", dest = "observe_only", default = False)
parser.add_option("", "--status-only", help = "Check the status of the remote bitbake server.",
action = "store_true", dest = "status_only", default = False)
options, targets = parser.parse_args(sys.argv)
# some environmental variables set also configuration options
if "BBSERVER" in os.environ:
options.servertype = "xmlrpc"
options.remote_server = os.environ["BBSERVER"]
if "BBTOKEN" in os.environ:
options.xmlrpctoken = os.environ["BBTOKEN"]
# if BBSERVER says to autodetect, let's do that
if options.remote_server:
[host, port] = options.remote_server.split(":", 2)
port = int(port)
# use automatic port if port set to -1, means read it from
# the bitbake.lock file; this is a bit tricky, but we always expect
# to be in the base of the build directory if we need to have a
# chance to start the server later, anyway
if port == -1:
lock_location = "./bitbake.lock"
# we try to read the address at all times; if the server is not started,
# we'll try to start it after the first connect fails, below
try:
lf = open(lock_location, 'r')
remotedef = lf.readline()
[host, port] = remotedef.split(":")
port = int(port)
lf.close()
options.remote_server = remotedef
except Exception as e:
sys.exit("Failed to read bitbake.lock (%s), invalid port" % str(e))
return options, targets[1:]
def start_server(servermodule, configParams, configuration, features):
server = servermodule.BitBakeServer()
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)))
configuration.interface = [ server.serverImpl.host, server.serverImpl.port ]
else:
server.initServer()
configuration.interface = []
try:
configuration.setServerRegIdleCallback(server.getServerIdleCB())
cooker = bb.cooker.BBCooker(configuration, features)
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
exc_info = sys.exc_info()
while hasattr(server, "event_queue"):
try:
import queue
except ImportError:
import Queue as queue
try:
event = server.event_queue.get(block=False)
except (queue.Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise exc_info[1], None, exc_info[2]
server.detach()
cooker.lock.close()
return server
def main():
configParams = BitBakeConfigParameters()
configuration = cookerdata.CookerConfiguration()
configuration.setConfigParameters(configParams)
ui_module = get_ui(configParams)
# Server type can be xmlrpc or process currently, if nothing is specified,
# the default server is process
if configParams.servertype:
server_type = configParams.servertype
else:
server_type = 'process'
try:
module = __import__("bb.server", fromlist = [server_type])
servermodule = getattr(module, server_type)
except AttributeError:
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default]." % server_type)
if configParams.server_only:
if configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
if not configParams.bind:
sys.exit("FATAL: The '--server-only' option requires a name/address to bind to with the -B option.\n")
if configParams.remote_server:
sys.exit("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ else "the '--remote-server' option" ))
if configParams.bind and configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
if configParams.remote_server and configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '--remote-server' is defined, we must set the servertype as 'xmlrpc'.\n")
if configParams.observe_only and (not configParams.remote_server or configParams.bind):
sys.exit("FATAL: '--observe-only' can only be used by UI clients connecting to a server.\n")
if configParams.kill_server and not configParams.remote_server:
sys.exit("FATAL: '--kill-server' can only be used to terminate a remote server")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
if not configParams.status_only:
# In status only mode there are no logs and no UI
logger.addHandler(handler)
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
featureset = []
if not configParams.server_only:
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
if not configParams.remote_server:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration, featureset)
bb.event.ui_queue = []
else:
# we start a stub server that is actually a XMLRPClient that connects to a real server
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only, configParams.xmlrpctoken)
server.saveConnectionDetails(configParams.remote_server)
if not configParams.server_only:
try:
server_connection = server.establishConnection(featureset)
except Exception as e:
if configParams.kill_server:
sys.exit(0)
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
if configParams.status_only:
server_connection.terminate()
sys.exit(0)
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
sys.exit(0)
try:
return ui_module.main(server_connection.connection, server_connection.events, configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("server address: %s, server port: %s" % (server.serverImpl.host, server.serverImpl.port))
return 0
return 1
if __name__ == "__main__":
if __version__ != bb.__version__:
sys.exit("Bitbake core version and program version mismatch!")
try:
sys.exit(bitbake_main(BitBakeConfigParameters(sys.argv),
cookerdata.CookerConfiguration()))
except BBMainException as err:
sys.exit(err)
ret = main()
except bb.BBHandledException:
sys.exit(1)
ret = 1
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(1)
sys.exit(ret)


@@ -1,9 +1,9 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-diffsigs
# BitBake task signature data comparison utility
#
# Copyright (C) 2012-2013, 2017 Intel Corporation
# Copyright (C) 2012-2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -22,19 +22,28 @@ import os
import sys
import warnings
import fnmatch
import argparse
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-diffsigs')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False):
logger = logger_create('bitbake-diffsigs')
def find_compare_task(bbhandler, pn, taskname):
""" Find the most recent signature files for the specified PN/task and compare them """
if not hasattr(bb.siggen, 'find_siginfo'):
@@ -44,119 +53,70 @@ def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
if sig1 and sig2:
sigfiles = bb.siggen.find_siginfo(pn, taskname, [sig1, sig2], bbhandler.config_data)
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
else:
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
recout = []
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
return recout
return recout
# Recurse into signature comparison
logger.debug("Signature file (previous): %s" % latestfiles[-2])
logger.debug("Signature file (latest): %s" % latestfiles[-1])
output = bb.siggen.compare_sigfiles(latestfiles[-2], latestfiles[-1], recursecb, color=color)
if output:
print('\n'.join(output))
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print '\n'.join(output)
sys.exit(0)
parser = argparse.ArgumentParser(
description="Compares siginfo/sigdata files written out by BitBake")
parser = optparse.OptionParser(
description = "Compares siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile1 sigdatafile2
%prog sigdatafile1""")
parser.add_argument('-d', '--debug',
help='Enable debug output',
action='store_true')
parser.add_option("-t", "--task",
help = "find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
parser.add_argument('--color',
help='Colorize output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument("-t", "--task",
help="find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if options.debug:
logger.setLevel(logging.DEBUG)
color = (options.color == 'always' or (options.color == 'auto' and sys.stdout.isatty()))
options, args = parser.parse_args(sys.argv)
if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if options.sigargs:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1], color=color)
else:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], color=color)
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1])
else:
if options.sigargs:
logger.error('-s/--signature can only be used together with -t/--task')
sys.exit(1)
try:
if options.sigdatafile1 and options.sigdatafile2:
output = bb.siggen.compare_sigfiles(options.sigdatafile1, options.sigdatafile2, color=color)
elif options.sigdatafile1:
output = bb.siggen.dump_sigfile(options.sigdatafile1)
else:
logger.error('Must specify signature file(s) or -t/--task')
parser.print_help()
if len(args) == 1:
parser.print_help()
else:
import cPickle
try:
if len(args) == 2:
output = bb.siggen.dump_sigfile(sys.argv[1])
else:
output = bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (cPickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
if output:
print('\n'.join(output))
if output:
print '\n'.join(output)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-dumpsig
# BitBake task signature dump utility
@@ -23,72 +23,43 @@ import sys
import warnings
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-dumpsig')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_siginfo_task(bbhandler, pn, taskname):
""" Find the most recent signature file for the specified PN/task """
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-1:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
return latestfiles[0]
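find_siginfo() returns a mapping of file path to timestamp, so sorting the keys by their values and taking the tail selects the most recent entries. The same idiom in isolation (example data):

# "Latest entries" idiom over a {path: timestamp} mapping:
filedates = {'a.siginfo': 100, 'b.siginfo': 300, 'c.siginfo': 200}
latest = sorted(filedates, key=filedates.get)[-1:]   # -> ['b.siginfo']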
logger = logger_create('bitbake-dumpsig')
parser = optparse.OptionParser(
description = "Dumps siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile""")
parser.add_option("-D", "--debug",
help = "enable debug",
action = "store_true", dest="debug", default = False)
parser.add_option("-t", "--task",
help = "find the signature data file for the specified task",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
options, args = parser.parse_args(sys.argv)
if options.debug:
logger.setLevel(logging.DEBUG)
if options.taskargs:
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
file = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
logger.debug("Signature file: %s" % file)
elif len(args) == 1:
if len(args) == 1:
parser.print_help()
sys.exit(0)
else:
file = args[1]
import cPickle
try:
output = bb.siggen.dump_sigfile(args[1])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (cPickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
try:
output = bb.siggen.dump_sigfile(file)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
if output:
print('\n'.join(output))
if output:
print '\n'.join(output)


@@ -1,11 +1,11 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# See the help output for details on available commands.
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2011-2015 Intel Corporation
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -20,90 +20,739 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import cmd
import logging
import os
import sys
import argparse
import signal
import fnmatch
from collections import defaultdict
import re
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
import bb.tinfoil
import bb.msg
logger = bb.msg.logger_create('bitbake-layers', sys.stdout)
def main():
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
parser = argparse.ArgumentParser(
description="BitBake layers utility",
epilog="Use %(prog)s <subcommand> --help to get help on a specific command",
add_help=False)
parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
global_args, unparsed_args = parser.parse_known_args()
# Help is added here rather than via add_help=True, as we don't want it to
# be handled by parse_known_args()
parser.add_argument('-h', '--help', action='help', default=argparse.SUPPRESS,
help='show this help message and exit')
subparsers = parser.add_subparsers(title='subcommands', metavar='<subcommand>')
subparsers.required = True
if global_args.debug:
logger.setLevel(logging.DEBUG)
elif global_args.quiet:
logger.setLevel(logging.ERROR)
# Need to re-run logger_create with color argument
# (will be the same logger since it has the same name)
bb.msg.logger_create('bitbake-layers', output=sys.stdout, color=global_args.color)
plugins = []
tinfoil = bb.tinfoil.Tinfoil(tracking=True)
tinfoil.logger.setLevel(logger.getEffectiveLevel())
try:
tinfoil.prepare(True)
for path in ([topdir] +
tinfoil.config_data.getVar('BBPATH').split(':')):
pluginpath = os.path.join(path, 'lib', 'bblayers')
bb.utils.load_plugins(logger, plugins, pluginpath)
registered = False
for plugin in plugins:
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")
sys.exit(1)
args = parser.parse_args(unparsed_args, namespace=global_args)
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
tinfoil.parseRecipes()
tinfoil.config_data.enableTracking()
return args.func(args)
finally:
tinfoil.shutdown()
if __name__ == "__main__":
try:
ret = main()
except bb.BBHandledException:
ret = 1
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
logger = logging.getLogger('BitBake')
def main(args):
cmds = Commands()
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
cmds.onecmd(' '.join(args))
else:
cmds.do_help('')
return cmds.returncode
class Commands(cmd.Cmd):
def __init__(self):
self.bbhandler = None
self.returncode = 0
self.bblayers = []
cmd.Cmd.__init__(self)
def init_bbhandler(self, config_only = False):
if not self.bbhandler:
self.bbhandler = bb.tinfoil.Tinfoil()
self.bblayers = (self.bbhandler.config_data.getVar('BBLAYERS', True) or "").split()
self.bbhandler.prepare(config_only)
def default(self, line):
"""Handle unrecognised commands"""
sys.stderr.write("Unrecognised command or option\n")
self.do_help('')
def do_help(self, topic):
"""display general help or help on a specified command"""
if topic:
sys.stdout.write('%s: ' % topic)
cmd.Cmd.do_help(self, topic.replace('-', '_'))
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = list(set(self.get_names()))
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
doc = getattr(self, procname).__doc__
if doc:
sys.stdout.write(" %s\n" % doc.splitlines()[0])
def do_show_layers(self, args):
"""show current configured layers"""
self.init_bbhandler(config_only = True)
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))
def version_str(self, pe, pv, pr = None):
verstr = "%s" % pv
if pr:
verstr = "%s-%s" % (verstr, pr)
if pe:
verstr = "%s:%s" % (pe, verstr)
return verstr
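version_str() composes the familiar epoch:version-revision form:

# version_str() composition, illustrated:
#   version_str(None, '1.0')       -> '1.0'
#   version_str(None, '1.0', 'r2') -> '1.0-r2'
#   version_str('2', '1.0', 'r2')  -> '2:1.0-r2'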
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer)
usage: show-overlayed [-f] [-s]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Note that skipped recipes that
are overlayed will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.init_bbhandler()
show_filenames = False
show_same_ver_only = False
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-s':
show_same_ver_only = True
else:
sys.stderr.write("show-overlayed: invalid option %s\n" % arg)
self.do_help('')
return
items_listed = self.list_recipes('Overlayed recipes', None, True, show_same_ver_only, show_filenames, True)
# Check for overlayed .bbclass files
classes = defaultdict(list)
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if os.path.exists(classdir):
for classfile in os.listdir(classdir):
if os.path.splitext(classfile)[1] == '.bbclass':
classes[classfile].append(classdir)
# Locating classes and other files is a bit more complicated than recipes:
# layer priority is not a factor; instead BitBake uses the first matching
# file in BBPATH, which each layer's conf/layer.conf manipulates directly,
# so the order of layers in bblayers.conf is a factor. However, each
# layer.conf is free to prepend or append to BBPATH (or rearrange it
# entirely), so the order in BBPATH might not match the order in
# bblayers.conf either. (A sketch of the first-match lookup follows this
# command's code below.)
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
if not overlayed_class_found:
logger.plain('=== Overlayed classes ===')
overlayed_class_found = True
mainfile = bb.utils.which(bbpath, os.path.join('classes', classfile))
if show_filenames:
logger.plain('%s' % mainfile)
else:
# We effectively have to guess the layer here
logger.plain('%s:' % classfile)
mainlayername = '?'
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if mainfile.startswith(classdir):
mainlayername = self.get_layer_name(layerdir)
logger.plain(' %s' % mainlayername)
for classdir in classdirs:
fullpath = os.path.join(classdir, classfile)
if fullpath != mainfile:
if show_filenames:
print(' %s' % fullpath)
else:
print(' %s' % self.get_layer_name(os.path.dirname(classdir)))
if overlayed_class_found:
items_listed = True
if not items_listed:
logger.plain('No overlayed files found.')
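The class lookup described in the comment above reduces to a first-match search along a colon-separated path. A minimal sketch of that rule (assuming only that BBPATH is colon-separated; which_first is a hypothetical helper, not bb.utils.which):

import os

def which_first(bbpath, relpath):
    # Return the first existing match along a colon-separated search
    # path -- the "first match in BBPATH wins" rule for .bbclass files.
    for d in bbpath.split(':'):
        candidate = os.path.join(d, relpath)
        if os.path.exists(candidate):
            return candidate
    return None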
def do_show_recipes(self, args):
"""list available recipes, showing the layer they are provided by
usage: show-recipes [-f] [-m] [pnspec]
Lists the names of recipes and the available versions in each
layer, with the preferred version first. Optionally you may specify
pnspec to match a specified recipe name (supports wildcards). Note that
skipped recipes will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with other available recipes indented underneath
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.init_bbhandler()
show_filenames = False
show_multi_provider_only = False
pnspec = None
title = 'Available recipes:'
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-m':
show_multi_provider_only = True
elif not arg.startswith('-'):
pnspec = arg
title = 'Available recipes matching %s:' % pnspec
else:
sys.stderr.write("show-recipes: invalid option %s\n" % arg)
self.do_help('')
return
self.list_recipes(title, pnspec, False, False, show_filenames, show_multi_provider_only)
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.bbhandler.cooker.recipecache.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.bbhandler.config_data, self.bbhandler.cooker.recipecache, pkg_pn)
allproviders = bb.providers.allProviders(self.bbhandler.cooker.recipecache)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.bbhandler.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.bbhandler.cooker.collection.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
p = recipe_parts[0]
if len(recipe_parts) > 1:
ver = (None, recipe_parts[1], None)
else:
ver = (None, 'unknown', None)
allproviders[p].append((ver, fn))
if not p in pkg_pn:
pkg_pn[p] = 'dummy'
preferred_versions[p] = (ver, fn)
def print_item(f, pn, ver, layer, ispref):
if f in skiplist:
skipped = ' (skipped)'
else:
skipped = ''
if show_filenames:
if ispref:
logger.plain("%s%s", f, skipped)
else:
logger.plain(" %s%s", f, skipped)
else:
if ispref:
logger.plain("%s:", pn)
logger.plain(" %s %s%s", layer.ljust(20), ver, skipped)
preffiles = []
items_listed = False
for p in sorted(pkg_pn):
if pnspec:
if not fnmatch.fnmatch(p, pnspec):
continue
if len(allproviders[p]) > 1 or not show_multi_provider_only:
pref = preferred_versions[p]
preffile = bb.cache.Cache.virtualfn2realfn(pref[1])[0]
if preffile not in preffiles:
preflayer = self.get_file_layer(preffile)
multilayer = False
same_ver = True
provs = []
for prov in allproviders[p]:
provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
provlayer = self.get_file_layer(provfile)
provs.append((provfile, provlayer, prov[0]))
if provlayer != preflayer:
multilayer = True
if prov[0] != pref[0]:
same_ver = False
if (multilayer or not show_overlayed_only) and (same_ver or not show_same_ver_only):
if not items_listed:
logger.plain('=== %s ===' % title)
items_listed = True
print_item(preffile, p, self.version_str(pref[0][0], pref[0][1]), preflayer, True)
for (provfile, provlayer, provver) in provs:
if provfile != preffile:
print_item(provfile, p, self.version_str(provver[0], provver[1]), provlayer, False)
# Ensure we don't show two entries for BBCLASSEXTENDed recipes
preffiles.append(preffile)
return items_listed
def do_flatten(self, args):
"""flattens layer configuration into a separate output directory.
usage: flatten [layer1 layer2 [layer3]...] <outputdir>
Takes the specified layers (or all layers in the current layer
configuration if none are specified) and builds a "flattened" directory
containing the contents of all layers, with any overlayed recipes removed
and bbappends appended to the corresponding recipes. Note that some manual
cleanup may still be necessary afterwards, in particular:
* where non-recipe files (such as patches) are overwritten (the flatten
command will show a warning for these)
* where anything beyond the normal layer setup has been added to
layer.conf (only the lowest priority number layer's layer.conf is used)
* overridden/appended items from bbappends will need to be tidied up
* when the flattened layers do not have the same directory structure (the
flatten command should show a warning when this will cause a problem)
Warning: if you flatten several layers where another layer is intended to
be used "inbetween" them (in layer priority order) such that recipes /
bbappends in the layers interact, and then attempt to use the new output
layer together with that other layer, you may no longer get the same
build results (as the layer priority order has effectively changed).
"""
arglist = args.split()
if len(arglist) < 1:
logger.error('Please specify an output directory')
self.do_help('flatten')
return
if len(arglist) == 2:
logger.error('If you specify layers to flatten you must specify at least two')
self.do_help('flatten')
return
outputdir = arglist[-1]
if os.path.exists(outputdir) and os.listdir(outputdir):
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.init_bbhandler()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
found_layernames = []
found_layerdirs = []
for layerdir in layers:
layername = self.get_layer_name(layerdir)
if layername in layernames:
found_layerdirs.append(layerdir)
found_layernames.append(layername)
for layername in layernames:
if not layername in found_layernames:
logger.error('Unable to find layer %s in current configuration, please run "%s show-layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
return
layers = found_layerdirs
else:
layernames = []
# Ensure a specified path matches our list of layers
def layer_path_match(path):
for layerdir in layers:
if path.startswith(os.path.join(layerdir, '')):
return layerdir
return None
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.bbhandler.cooker.collection.overlayed.iterkeys():
for of in self.bbhandler.cooker.collection.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
logger.plain('Copying files from %s...' % layer )
for root, dirs, files in os.walk(layer):
for f1 in files:
f1full = os.sep.join([root, f1])
if f1full in overlayed:
logger.plain(' Skipping overlayed file %s' % f1full )
else:
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
logger.plain(' Skipping layer config file %s' % f1full )
continue
else:
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.bbhandler.cooker.collection.appendlist:
appends = self.bbhandler.cooker.collection.appendlist[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
if layer_path_match(appendname):
self.apply_append(appendname, fdest)
appended_recipes.append(f1)
# Handle the case where some layers were excluded but we still have bbappends for their recipes
for recipename in self.bbhandler.cooker.collection.appendlist.iterkeys():
if recipename not in appended_recipes:
appends = self.bbhandler.cooker.collection.appendlist[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
if layer:
if first_append:
self.apply_append(appendname, first_append)
else:
fdest = appendname[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
bb.utils.copyfile(appendname, fdest)
first_append = fdest
# Get the regex for the first layer in our list (which is where the conf/layer.conf file will
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.bbhandler.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
newpath = os.path.join(outputdir, item[len(layerdir)+1:])
bbfiles_layer.append(newpath)
if bbfiles_layer:
# Check that all important layer files match BBFILES
for root, dirs, files in os.walk(outputdir):
for f1 in files:
ext = os.path.splitext(f1)[1]
if ext in ['.bb', '.bbappend']:
f1full = os.sep.join([root, f1])
entry_found = False
for item in bbfiles_layer:
if fnmatch.fnmatch(f1full, item):
entry_found = True
break
if not entry_found:
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return self.get_layer_name(layerdir)
return "?"
def get_file_layerdir(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return layerdir
return "?"
def remove_layer_prefix(self, f):
"""Remove the layer_dir prefix, e.g., f = /path/to/layer_dir/foo/blah, the
return value will be: layer_dir/foo/blah"""
f_layerdir = self.get_file_layerdir(f)
prefix = os.path.join(os.path.dirname(f_layerdir), '')
return f[len(prefix):] if f.startswith(prefix) else f
def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
def apply_append(self, appendname, recipename):
appendfile = open(appendname, 'r')
recipefile = open(recipename, 'a')
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
recipefile.writelines(appendfile.readlines())
recipefile.close()
appendfile.close()
def do_show_appends(self, args):
"""list bbappend files and recipe files they apply to
usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.init_bbhandler()
if not self.bbhandler.cooker.collection.appendlist:
logger.plain('No append files found')
return
logger.plain('=== Appended recipes ===')
pnlist = list(self.bbhandler.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.bbhandler.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.bbhandler.config_data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.bbhandler.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.plain('%s%s:', basename, name_suffix)
for append in appends:
logger.plain(' %s', append)
if best_filename:
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = [], []
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
continue
basename = os.path.basename(filename)
appends = self.bbhandler.cooker.collection.get_file_appends(basename)
if appends:
appended.append((basename, list(appends)))
else:
notappended.append(basename)
return appended, notappended
def do_show_cross_depends(self, args):
"""figure out the dependency between recipes that crosses a layer boundary.
usage: show-cross-depends [-f] [-i layer1[,layer2[,layer3...]]]
Figure out the dependency between recipes that crosses a layer boundary.
Options:
-f show full file path
-i ignore dependencies on items in the specified layer(s)
NOTE:
The .bbappend file can impact the dependency.
"""
import optparse
parser = optparse.OptionParser(usage="show-cross-depends [-f] [-i layer1[,layer2[,layer3...]]]")
parser.add_option("-f", "",
action="store_true", dest="show_filenames")
parser.add_option("-i", "",
action="store", dest="ignore_layers", default="")
options, args = parser.parse_args(sys.argv)
ignore_layers = options.ignore_layers.split(',')
self.init_bbhandler()
pkg_fn = self.bbhandler.cooker_data.pkg_fn
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
self.require_re = re.compile(r"require\s+(.+)")
self.include_re = re.compile(r"include\s+(.+)")
self.inherit_re = re.compile(r"inherit\s+(.+)")
global_inherit = (self.bbhandler.config_data.getVar('INHERIT', True) or "").split()
# The bb's DEPENDS and RDEPENDS
for f in pkg_fn:
f = bb.cache.Cache.virtualfn2realfn(f)[0]
# Get the layername that the file is in
layername = self.get_file_layer(f)
# The DEPENDS
deps = self.bbhandler.cooker_data.deps[f]
for pn in deps:
if pn in self.bbhandler.cooker_data.pkg_pn:
best = bb.providers.findBestProvider(pn,
self.bbhandler.config_data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
self.check_cross_depends("DEPENDS", layername, f, best[3], options.show_filenames, ignore_layers)
# The RDEPENDS
all_rdeps = self.bbhandler.cooker_data.rundeps[f].values()
# Remove duplicate and empty entries.
sorted_rdeps = {}
# all_rdeps is a list of lists, so we need two loops
for k1 in all_rdeps:
for k2 in k1:
sorted_rdeps[k2] = 1
all_rdeps = sorted_rdeps.keys()
for rdep in all_rdeps:
all_p = bb.providers.getRuntimeProviders(self.bbhandler.cooker_data, rdep)
if all_p:
if f in all_p:
# The recipe provides this one itself, ignore
continue
best = bb.providers.filterProvidersRunTime(all_p, rdep,
self.bbhandler.config_data,
self.bbhandler.cooker_data)[0][0]
self.check_cross_depends("RDEPENDS", layername, f, best, options.show_filenames, ignore_layers)
# The RRECOMMENDS
all_rrecs = self.bbhandler.cooker_data.runrecs[f].values()
# Remove duplicate and empty entries.
sorted_rrecs = {}
# all_rrecs is a list of lists, so we need two loops
for k1 in all_rrecs:
for k2 in k1:
sorted_rrecs[k2] = 1
all_rrecs = sorted_rrecs.keys()
for rrec in all_rrecs:
all_p = bb.providers.getRuntimeProviders(self.bbhandler.cooker_data, rrec)
if all_p:
if f in all_p:
# The recipe provides this one itself, ignore
continue
best = bb.providers.filterProvidersRunTime(all_p, rrec,
self.bbhandler.config_data,
self.bbhandler.cooker_data)[0][0]
self.check_cross_depends("RRECOMMENDS", layername, f, best, options.show_filenames, ignore_layers)
# The inherit class
cls_re = re.compile('classes/')
if f in self.bbhandler.cooker_data.inherits:
inherits = self.bbhandler.cooker_data.inherits[f]
for cls in inherits:
# The inherits entries take the form [classes/cls, /path/to/classes/cls];
# ignore the bare classes/cls form.
if not cls_re.match(cls):
classname = os.path.splitext(os.path.basename(cls))[0]
if classname in global_inherit:
continue
inherit_layername = self.get_file_layer(cls)
if inherit_layername != layername and not inherit_layername in ignore_layers:
if not options.show_filenames:
f_short = self.remove_layer_prefix(f)
cls = self.remove_layer_prefix(cls)
else:
f_short = f
logger.plain("%s inherits %s" % (f_short, cls))
# The 'require/include xxx' in the bb file
pv_re = re.compile(r"\${PV}")
fnfile = open(f, 'r')
line = fnfile.readline()
while line:
m, keyword = self.match_require_include(line)
# Found the 'require/include xxxx'
if m:
needed_file = m.group(1)
# Replace the ${PV} with the real PV
if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
needed_file = re.sub(r"\${PV}", pv, needed_file)
self.print_cross_files(bbpath, keyword, layername, f, needed_file, options.show_filenames, ignore_layers)
line = fnfile.readline()
fnfile.close()
# The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
conf_re = re.compile(r".*/conf/machine/[^/]*\.conf$")
inc_re = re.compile(r".*\.inc$")
# The "inherit xxx" in .bbclass
bbclass_re = re.compile(r".*\.bbclass$")
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
for dirpath, dirnames, filenames in os.walk(layerdir):
for name in filenames:
f = os.path.join(dirpath, name)
s = conf_re.match(f) or inc_re.match(f) or bbclass_re.match(f)
if s:
ffile = open(f, 'r')
line = ffile.readline()
while line:
m, keyword = self.match_require_include(line)
# Only bbclass has the "inherit xxx" here.
bbclass=""
if not m and f.endswith(".bbclass"):
m, keyword = self.match_inherit(line)
bbclass=".bbclass"
# Find a 'require/include xxxx'
if m:
self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, options.show_filenames, ignore_layers)
line = ffile.readline()
ffile.close()
def print_cross_files(self, bbpath, keyword, layername, f, needed_filename, show_filenames, ignore_layers):
"""Print the depends that crosses a layer boundary"""
needed_file = bb.utils.which(bbpath, needed_filename)
if needed_file:
# Which layer is this file from
needed_layername = self.get_file_layer(needed_file)
if needed_layername != layername and not needed_layername in ignore_layers:
if not show_filenames:
f = self.remove_layer_prefix(f)
needed_file = self.remove_layer_prefix(needed_file)
logger.plain("%s %s %s" %(f, keyword, needed_file))
def match_inherit(self, line):
"""Match the inherit xxx line"""
return (self.inherit_re.match(line), "inherits")
def match_require_include(self, line):
"""Match the require/include xxx line"""
m = self.require_re.match(line)
keyword = "requires"
if not m:
m = self.include_re.match(line)
keyword = "includes"
return (m, keyword)
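For reference, the matcher pairs each directive with the keyword later used in the report, e.g.:

# match_require_include() examples (hypothetical input lines):
#   "require foo.inc"  -> (match on "foo.inc", "requires")
#   "include bar.conf" -> (match on "bar.conf", "includes")
#   "PN = 'x'"         -> (None, "includes")   # nothing matched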
def check_cross_depends(self, keyword, layername, f, needed_file, show_filenames, ignore_layers):
"""Print the DEPENDS/RDEPENDS file that crosses a layer boundary"""
best_realfn = bb.cache.Cache.virtualfn2realfn(needed_file)[0]
needed_layername = self.get_file_layer(best_realfn)
if needed_layername != layername and not needed_layername in ignore_layers:
if not show_filenames:
f = self.remove_layer_prefix(f)
best_realfn = self.remove_layer_prefix(best_realfn)
logger.plain("%s %s %s" % (f, keyword, best_realfn))
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys,logging
import optparse
@@ -50,6 +50,6 @@ if __name__ == "__main__":
except Exception:
ret = 1
import traceback
traceback.print_exc()
traceback.print_exc(5)
sys.exit(ret)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
#
# Copyright (C) 2012 Richard Purdie
#
@@ -25,48 +25,25 @@ try:
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
def usage():
print('usage: %s [testname1 [testname2]...]' % os.path.basename(sys.argv[0]))
if len(sys.argv) > 1:
if '--help' in sys.argv[1:]:
usage()
sys.exit(0)
tests = sys.argv[1:]
else:
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.utils"]
for t in tests:
t = '.'.join(t.split('.')[:3])
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests)
# Set-up logging
class StdoutStreamHandler(logging.StreamHandler):
"""Special handler so that unittest is able to capture stdout"""
def __init__(self):
# Override __init__() because we don't want to set self.stream here
logging.Handler.__init__(self)
@property
def stream(self):
# We want to dynamically write wherever sys.stdout is pointing to
return sys.stdout
handler = StdoutStreamHandler()
bb.logger.addHandler(handler)
bb.logger.setLevel(logging.DEBUG)
ENV_HELP = """\
Environment variables:
BB_SKIP_NETTESTS set to 'yes' in order to skip tests using network
connection
BB_TMPDIR_NOCLEAN set to 'yes' to preserve test tmp directories
"""
class main(unittest.main):
def _print_help(self, *args, **kwargs):
super(main, self)._print_help(*args, **kwargs)
print(ENV_HELP)
if __name__ == '__main__':
main(defaultTest=tests, buffer=True)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys
@@ -10,14 +10,6 @@ import bb
import select
import errno
import signal
import pickle
import traceback
import queue
from multiprocessing import Lock
from threading import Thread
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -25,33 +17,24 @@ if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
sys.exit(1)
profiling = False
if sys.argv[1].startswith("decafbadbad"):
if sys.argv[1] == "decafbadbad":
profiling = True
try:
import cProfile as profile
except:
import profile
# Unbuffer stdout to avoid log truncation in the event
# of a disorderly exit, as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
import fcntl
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
#sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
logger = logging.getLogger("BitBake")
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
worker_pipe = sys.stdout.fileno()
bb.utils.nonblockingfd(worker_pipe)
# Need to guard against multiprocessing being used in child processes
# and multiple processes trying to write to the parent at the same time
worker_pipe_lock = None
handler = bb.event.LogHandler()
logger.addHandler(handler)
@@ -66,61 +49,36 @@ if 0:
consolelog.setFormatter(conlogformat)
logger.addHandler(consolelog)
worker_queue = queue.Queue()
worker_queue = ""
def worker_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
data = "<event>" + pickle.dumps(event) + "</event>"
worker_fire_prepickled(data)
def worker_fire_prepickled(event):
global worker_queue
worker_queue.put(event)
worker_queue = worker_queue + event
worker_flush()
#
# We can end up with write contention with the cooker, it can be trying to send commands
# and we can be trying to send event data back. Therefore use a separate thread for writing
# back data to cooker.
#
worker_thread_exit = False
def worker_flush():
global worker_queue, worker_pipe
def worker_flush(worker_queue):
worker_queue_int = b""
global worker_pipe, worker_thread_exit
if not worker_queue:
return
while True:
try:
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
raise
if worker_thread_exit and worker_queue.empty() and not worker_queue_int:
return
worker_thread = Thread(target=worker_flush, args=(worker_queue,))
worker_thread.start()
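The arrangement above serialises all event traffic through a single queue.Queue drained by one writer thread, so task code never blocks on, or interleaves writes to, the cooker pipe. A condensed sketch of the same producer/writer pattern (hypothetical names, not the worker's actual code):

import os
import queue
import threading

def start_writer(fd, q, stop):
    # One writer thread drains the queue and pushes the bytes down the
    # pipe; producers only ever touch the thread-safe queue.
    def writer():
        pending = b""
        while not (stop.is_set() and q.empty() and not pending):
            try:
                pending += q.get(timeout=1)
            except queue.Empty:
                pass
            if pending:
                written = os.write(fd, pending)
                pending = pending[written:]
    t = threading.Thread(target=writer)
    t.start()
    return t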
try:
written = os.write(worker_pipe, worker_queue)
worker_queue = worker_queue[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN:
raise
def worker_child_fire(event, d):
global worker_pipe
global worker_pipe_lock
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
worker_pipe.write(data)
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
bb.event.worker_fire = worker_fire
@@ -136,7 +94,7 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdata, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -152,10 +110,8 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
except TypeError:
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not cfg.dry_run:
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -183,26 +139,22 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
pipeout = os.fdopen(pipeout, 'wb', 0)
pid = os.fork()
except OSError as e:
logger.critical("fork failed: %d (%s)" % (e.errno, e.strerror))
sys.exit(1)
bb.msg.fatal("RunQueue", "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
def child():
global worker_pipe
global worker_pipe_lock
pipein.close()
signal.signal(signal.SIGTERM, sigterm_handler)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
bb.utils.signal_on_parent_exit("SIGTERM")
# Save out the PID so that events can include it
bb.event.worker_pid = os.getpid()
bb.event.worker_fire = worker_child_fire
worker_pipe = pipeout
worker_pipe_lock = Lock()
# Make the child the process group leader and ensure no
# child process will be controlled by the current terminal
@@ -216,58 +168,37 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
if umask:
os.umask(umask)
data.setVar("BB_WORKERCONTEXT", "1")
data.setVar("BB_TASKDEPDATA", taskdepdata)
data.setVar("BUILDNAME", workerdata["buildname"])
data.setVar("DATE", workerdata["date"])
data.setVar("TIME", workerdata["time"])
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
the_data.setVar("DATE", workerdata["date"])
the_data.setVar("TIME", workerdata["time"])
for varname, value in extraconfigdata.items():
the_data.setVar(varname, value)
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
the_data = bb_cache.loadDataFull(fn, appends)
the_data = bb.cache.Cache.loadDataFull(fn, appends, data)
the_data.setVar('BB_TASKHASH', workerdata["runq_hash"][task])
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
bb.utils.empty_environment()
for e, v in exports:
os.environ[e] = v
for e in fakeenv:
os.environ[e] = fakeenv[e]
the_data.setVar(e, fakeenv[e])
the_data.setVarFlag(e, 'export', "1")
task_exports = the_data.getVarFlag(taskname, 'exports')
if task_exports:
for e in task_exports.split():
the_data.setVarFlag(e, 'export', '1')
v = the_data.getVar(e)
if v is not None:
os.environ[e] = v
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
except Exception:
except Exception as exc:
if not quieterrors:
logger.critical(traceback.format_exc())
logger.critical(str(exc))
os._exit(1)
try:
if dry_run:
if cfg.dry_run:
return 0
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
@@ -284,7 +215,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
bb.utils.process_profilelog(profname)
os._exit(ret)
else:
for key, value in iter(envbackup.items()):
for key, value in envbackup.iteritems():
if value is None:
del os.environ[key]
else:
@@ -301,22 +232,22 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
def read(self):
start = len(self.queue)
try:
self.queue = self.queue + (self.input.read(102400) or b"")
self.queue = self.queue + self.input.read(102400)
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
end = len(self.queue)
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
while index != -1:
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
return (end > start)
def close(self):
@@ -332,27 +263,22 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
self.cookercfg = None
self.databuilder = None
self.data = None
self.extraconfigdata = None
self.build_pids = {}
self.build_pipes = {}
signal.signal(signal.SIGTERM, self.sigterm_exception)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, self.sigterm_exception)
if "beef" in sys.argv[1]:
bb.utils.set_process_name("Worker (Fakeroot)")
else:
bb.utils.set_process_name("Worker")
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Worker received SIGTERM, shutting down...")
bb.warn("Worker recieved SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Worker received SIGHUP, shutting down...")
bb.warn("Worker recieved SIGHUP, shutting down...")
self.handle_finishnow(None)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.kill(os.getpid(), signal.SIGTERM)
@@ -360,39 +286,34 @@ class BitbakeWorker(object):
def serve(self):
while True:
(ready, _, _) = select.select([self.input] + [i.input for i in self.build_pipes.values()], [] , [], 1)
if self.input in ready:
if self.input in ready or len(self.queue):
start = len(self.queue)
try:
r = self.input.read()
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue = self.queue + r
self.queue = self.queue + self.input.read()
except (OSError, IOError):
pass
if len(self.queue):
self.handle_item(b"cookerconfig", self.handle_cookercfg)
self.handle_item(b"extraconfigdata", self.handle_extraconfigdata)
self.handle_item(b"workerdata", self.handle_workerdata)
self.handle_item(b"runtask", self.handle_runtask)
self.handle_item(b"finishnow", self.handle_finishnow)
self.handle_item(b"ping", self.handle_ping)
self.handle_item(b"quit", self.handle_quit)
end = len(self.queue)
self.handle_item("cookerconfig", self.handle_cookercfg)
self.handle_item("workerdata", self.handle_workerdata)
self.handle_item("runtask", self.handle_runtask)
self.handle_item("finishnow", self.handle_finishnow)
self.handle_item("ping", self.handle_ping)
self.handle_item("quit", self.handle_quit)
for pipe in self.build_pipes:
if self.build_pipes[pipe].input in ready:
self.build_pipes[pipe].read()
self.build_pipes[pipe].read()
if len(self.build_pids):
while self.process_waitpid():
continue
self.process_waitpid()
worker_flush()
def handle_item(self, item, func):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
if self.queue.startswith("<" + item + ">"):
index = self.queue.find("</" + item + ">")
while index != -1:
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
index = self.queue.find("</" + item + ">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
@@ -400,22 +321,18 @@ class BitbakeWorker(object):
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
self.extraconfigdata = pickle.loads(data)
def handle_workerdata(self, data):
self.workerdata = pickle.loads(data)
bb.msg.loggerDefaultDebugLevel = self.workerdata["logdefaultdebug"]
bb.msg.loggerDefaultVerbose = self.workerdata["logdefaultverbose"]
bb.msg.loggerVerboseLogs = self.workerdata["logdefaultverboselogs"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.data.setVar("PRSERV_HOST", self.workerdata["prhost"])
def handle_ping(self, _):
workerlog_write("Handling ping\n")
logger.warning("Pong from bitbake-worker!")
logger.warn("Pong from bitbake-worker!")
def handle_quit(self, data):
workerlog_write("Handling quit\n")
@@ -425,10 +342,10 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
fn, task, taskname, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
fn, task, taskname, quieterrors, appends, taskdepdata = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.workerdata, fn, task, taskname, appends, taskdepdata, quieterrors)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
@@ -441,9 +358,9 @@ class BitbakeWorker(object):
try:
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
return False
return None
except OSError:
return False
return None
workerlog_write("Exit code of %s for pid %s\n" % (status, pid))
@@ -460,14 +377,12 @@ class BitbakeWorker(object):
self.build_pipes[pid].close()
del self.build_pipes[pid]
worker_fire_prepickled(b"<exitcode>" + pickle.dumps((task, status)) + b"</exitcode>")
return True
worker_fire_prepickled("<exitcode>" + pickle.dumps((task, status)) + "</exitcode>")
def handle_finishnow(self, _):
if self.build_pids:
logger.info("Sending SIGTERM to remaining %s tasks", len(self.build_pids))
for k, v in iter(self.build_pids.items()):
for k, v in self.build_pids.iteritems():
try:
os.kill(-k, signal.SIGTERM)
os.waitpid(-1, 0)
@@ -477,7 +392,7 @@ class BitbakeWorker(object):
self.build_pipes[pipe].read()
try:
worker = BitbakeWorker(os.fdopen(sys.stdin.fileno(), 'rb'))
worker = BitbakeWorker(sys.stdin)
if not profiling:
worker.serve()
else:
@@ -493,10 +408,8 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
worker_thread_exit = True
worker_thread.join()
while len(worker_queue):
worker_flush()
workerlog_write("exitting")
sys.exit(0)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -462,7 +462,7 @@ def main():
state_group = 2
for key in bb.data.keys(documentation):
data = documentation.getVarFlag(key, "doc", False)
data = documentation.getVarFlag(key, "doc")
if not data:
continue

bitbake/bin/image-writer Executable file

@@ -0,0 +1,122 @@
#!/usr/bin/env python
# Copyright (c) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( \
os.path.abspath(__file__))), 'lib'))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
import gtk
import optparse
import pygtk
from bb.ui.crumbs.hobwidget import HobAltButton, HobButton
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog
from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog
# All filesystem types supported by BitBake are listed here. Needs more testing.
DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "btrfs", "squashfs", "ubi", "vmdk"]
Title = "USB Image Writer"
class DeployWindow(gtk.Window):
def __init__(self, image_path=''):
super(DeployWindow, self).__init__()
if len(image_path) > 0:
valid = True
if not os.path.exists(image_path):
valid = False
lbl = "<b>Invalid image file path: %s.</b>\nPress <b>Select Image</b> to select an image." % image_path
else:
image_path = os.path.abspath(image_path)
extend_name = os.path.splitext(image_path)[1][1:]
if extend_name not in DEPLOYABLE_IMAGE_TYPES:
valid = False
lbl = "<b>Undeployable imge type: %s</b>\nPress <b>Select Image</b> to select an image." % extend_name
if not valid:
image_path = ''
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
self.deploy_dialog = DeployImageDialog(Title, image_path, self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR, None, standalone=True)
close_button = self.deploy_dialog.add_button("Close", gtk.RESPONSE_NO)
HobAltButton.style_button(close_button)
close_button.connect('clicked', gtk.main_quit)
write_button = self.deploy_dialog.add_button("Write USB image", gtk.RESPONSE_YES)
HobAltButton.style_button(write_button)
self.deploy_dialog.connect('select_image_clicked', self.select_image_clicked_cb)
self.deploy_dialog.connect('destroy', gtk.main_quit)
response = self.deploy_dialog.show()
def select_image_clicked_cb(self, dialog):
cwd = os.getcwd()
dialog = ImageSelectionDialog(cwd, DEPLOYABLE_IMAGE_TYPES, Title, self, gtk.FILE_CHOOSER_ACTION_SAVE )
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobAltButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nClicked the radio button to select a image."
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
# get the full path of image
image_path = os.path.join(dialog.image_folder, dialog.image_names[0])
self.deploy_dialog.set_image_text_buffer(image_path)
self.deploy_dialog.set_image_path(image_path)
dialog.destroy()
def main():
parser = optparse.OptionParser(
usage = """%prog [-h] [image_file]
%prog writes bootable images to USB devices. You can
provide the image file on the command line or select it using the GUI.""")
options, args = parser.parse_args(sys.argv)
image_file = args[1] if len(args) > 1 else ''
dw = DeployWindow(image_file)
if __name__ == '__main__':
try:
main()
gtk.main()
except Exception:
import traceback
traceback.print_exc(3)


@@ -1,8 +1,5 @@
#!/bin/echo ERROR: This script needs to be sourced. Please run as .
# toaster - shell script to start Toaster
# Copyright (C) 2013-2015 Intel Corp.
#!/bin/bash
# (c) 2013 Intel Corp.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -15,247 +12,256 @@
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
HELP="
Usage: source toaster start|stop [webport=<address:port>] [noweb]
Optional arguments:
[noweb] Setup the environment for building with toaster but don't start the development server
[webport] Set the development server address:port (default: localhost:8000)
"
webserverKillAll()
# This script can be run in two modes.
# When used with "source", from a build directory,
# it enables toaster event logging and starts the bitbake resident server.
# use as: source toaster [start|stop] [noweb] [noui]
# When it is called as a stand-alone script, it starts just the
# web server, and builds are done through the web interface.
# As a script, it does not return to the command prompt. Stop with Ctrl-C.
# Helper function to kill a background toaster development server
function webserverKillAll()
{
local pidfile
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM -$pid 2>/dev/null
sleep 1
done
rm ${pidfile}
fi
done
local pidfile
for pidfile in ${BUILDDIR}/.toastermain.pid; do
if [ -f ${pidfile} ]; then
while kill -0 $(< ${pidfile}) 2>/dev/null; do
kill -SIGTERM -$(< ${pidfile}) 2>/dev/null
sleep 1;
# Kill processes if they are still running - may happen in interactive shells
ps fux | grep "python.*manage.py" | awk '{print $2}' | xargs kill
done;
rm ${pidfile}
fi
done
}
webserverStartAll()
function webserverStartAll()
{
# do not start if toastermain points to a valid process
if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
fi
# do not start if toastermain points to a valid process
if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
fi
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, aborting system start" 1>&2
retval=0
python $BBBASEDIR/lib/toaster/manage.py syncdb || retval=1
python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=2
if [ $retval -eq 1 ]; then
echo "Failed db sync, stopping system start" 1>&2
elif [ $retval -eq 2 ]; then
echo -e "\nError on migration, trying to recover... \n"
python $BBBASEDIR/lib/toaster/manage.py migrate orm 0001_initial --fake
retval=0
python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=1
fi
if [ "x$TOASTER_MANAGED" == "x1" ]; then
python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol || retval=1
python $BBBASEDIR/lib/toaster/manage.py checksettings --traceback || retval=1
fi
if [ $retval -eq 0 ]; then
echo "Starting webserver"
python $BBBASEDIR/lib/toaster/manage.py runserver 0.0.0.0:8000 </dev/null >${BUILDDIR}/toaster_web.log 2>&1 & echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
fi
fi
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
export TEMPLATECONF
$MANAGE checksettings --traceback || retval=1
}
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; aborting\n"
return $retval
fi
# Helper functions to add a special configuration file
echo "Starting webserver..."
$MANAGE runserver "$ADDR_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
fi
return $retval
}
# Helper functions to add a special configuration file
function addtoConfiguration()
{
echo "#Created by toaster start script" > ${BUILDDIR}/conf/$2
echo $1 >> ${BUILDDIR}/conf/$2
}
INSTOPSYSTEM=0
# define the stop command
function stop_system()
{
# prevent reentry
if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
INSTOPSYSTEM=1
if [ -f ${BUILDDIR}/.toasterui.pid ]; then
kill $(< ${BUILDDIR}/.toasterui.pid ) 2>/dev/null
rm ${BUILDDIR}/.toasterui.pid
fi
BBSERVER=0.0.0.0:8200 bitbake -m
unset BBSERVER
webserverKillAll
# unset exported variables
unset TOASTER_DIR
unset BITBAKE_UI
unset BBBASEDIR
# force stop any misbehaving bitbake server
lsof bitbake.lock | awk '{print $2}' | grep "[0-9]\+" | xargs -n1 -r kill
trap - SIGHUP
#trap - SIGCHLD
INSTOPSYSTEM=0
}
verify_prereq() {
# Verify Django version
reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
exp=$exp'import sys,django;version=django.get_version().split(".");'
exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
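# The sed expression above rewrites a requirement such as
# "Django>1.8,<1.9" into a one-line Python program that compares
# django.get_version() against both bounds and exits non-zero when
# either bound is violated; python3 then runs it from stdin below.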
if ! sed -n "$exp" $reqfile | python3 - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
echo "Please install with pip3 install -r $reqfile"
return 2
fi
return 0
}
function check_pidbyfile() {
[ -e $1 ] && kill -0 $(< $1) 2>/dev/null
}
# read command line parameters
if [ -n "$BASH_SOURCE" ] ; then
TOASTER=${BASH_SOURCE}
elif [ -n "$ZSH_NAME" ] ; then
TOASTER=${(%):-%x}
else
TOASTER=$0
fi
export BBBASEDIR=`dirname $TOASTER`/..
MANAGE="python3 $BBBASEDIR/lib/toaster/manage.py"
OE_ROOT=`dirname $TOASTER`/../..
# this is the configuration file we are using for toaster
# we are using the same logic that oe-setup-builddir uses
# (based on TEMPLATECONF and .templateconf) to determine
# which toasterconf.json to use.
# note: There are a number of relative path assumptions
# in the local layers that currently make using an arbitrary
# toasterconf.json difficult.
. $OE_ROOT/.templateconf
if [ -n "$TEMPLATECONF" ]; then
if [ ! -d "$TEMPLATECONF" ]; then
# Allow TEMPLATECONF=meta-xyz/conf as a shortcut
if [ -d "$OE_ROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OE_ROOT/$TEMPLATECONF"
fi
fi
fi
function notify_chldexit() {
if [ $NOTOASTERUI == 0 ]; then
check_pidbyfile ${BUILDDIR}/.toasterui.pid && return
stop_system
fi
}
RUNNING=0
if [ -z "$ZSH_NAME" ] && [ `basename \"$0\"` = `basename \"$BASH_SOURCE\"` ]; then
# We are called as standalone. We refuse to run in a build environment - we need the interactive mode for that.
# Start just the web server, point the web browser to the interface, and start any Django services.
if [ -n "$BUILDDIR" ]; then
echo -e "Error: build/ directory detected. Toaster will not start in managed mode if a build environment is detected.\nUse a clean terminal to start Toaster." 1>&2;
exit 1;
fi
# Define a fake builddir where only the pid files are actually created. No real builds will take place here.
BUILDDIR=/tmp
ADDR_PORT="0.0.0.0:8000"
RUNNING=1
function trap_ctrlc() {
echo "** Stopping system"
webserverKillAll
RUNNING=0
}
export TOASTER_MANAGED=1
if ! webserverStartAll; then
echo "Failed to start the web server, stopping" 1>&2;
exit 1;
fi
xdg-open http://0.0.0.0:8000/ >/dev/null 2>&1 &
trap trap_ctrlc SIGINT
echo "Running. Stop with Ctrl-C"
while [ $RUNNING -gt 0 ]; do
$MANAGE runbuilds
sleep 1
done
echo "**** Exit"
exit 0
fi
unset OE_ROOT
WEBSERVER=1
NOTOASTERUI=0
ADDR_PORT="localhost:8000"
unset CMD
for param in $*; do
case $param in
noweb )
WEBSERVER=0
;;
noui )
NOTOASTERUI=1
;;
start )
CMD=$param
;;
stop )
CMD=$param
;;
webport=*)
ADDR_PORT="${param#*=}"
# Split the addr:port string
ADDR=`echo $ADDR_PORT | cut -f 1 -d ':'`
PORT=`echo $ADDR_PORT | cut -f 2 -d ':'`
# If only a port has been specified then set the address to localhost.
if [ $ADDR = $PORT ] ; then
ADDR_PORT="localhost:$PORT"
fi
;;
--help)
echo "$HELP"
return 0
;;
*)
echo "$HELP"
return 1
;;
esac
done
if [ `basename \"$0\"` = `basename \"${TOASTER}\"` ]; then
echo "Error: This script needs to be sourced. Please run as . $TOASTER"
return 1
fi
verify_prereq || return 1
# We make sure we're running in the current shell and in a good environment
if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
echo "Error: Build environment is not setup or bitbake is not in path." 1>&2
if [ -z "$BUILDDIR" ] || [ -z `which bitbake` ]; then
echo "Error: Build environment is not setup or bitbake is not in path." 1>&2;
return 2
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=`dirname $BUILDDIR`
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
if [ -n "$BBSERVER" ]; then
echo " Toaster is already running. Exiting..."
return 1
fi
elif [ "$CMD" = "" ]; then
echo "No command specified"
echo "$HELP"
return 1
if [ "x$1" == "xstart" ] || [ "x$1" == "xstop" ]; then
CMD="$1"
else
if [ -z "$BBSERVER" ]; then
CMD="start"
else
CMD="stop"
fi;
fi
echo "The system will $CMD."
# Make sure it's safe to run by checking bitbake lock
lock=1
if [ -e $BUILDDIR/bitbake.lock ]; then
(flock -n 200 ) 200<$BUILDDIR/bitbake.lock || lock=0
fi
if [ ${CMD} == "start" ] && ( [ $lock -eq 0 ] || [ -e $BUILDDIR/.toastermain.pid ] ); then
echo "Error: bitbake lock state error. File locks show that the system is on." 2>&1
echo "If you see problems, stop and then start the system again." 2>&1
return 3
fi
# Execute the commands
case $CMD in
start )
# check if addr:port is not in use
if [ "$CMD" == 'start' ]; then
if [ $WEBSERVER -gt 0 ]; then
$MANAGE checksocket "$ADDR_PORT" || return 1
fi
fi
# Create configuration file
start_success=1
addtoConfiguration "INHERIT+=\"toaster buildhistory\"" toaster.conf
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
$MANAGE runbuilds \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
unset BBSERVER
bitbake --postread conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:8200
if [ $? -ne 0 ]; then
start_success=0
echo "Bitbake server start failed"
else
export BBSERVER=0.0.0.0:8200
if [ $NOTOASTERUI == 0 ]; then # we start the TOASTERUI only if not inhibited
bitbake --observe-only -u toasterui >${BUILDDIR}/toaster_ui.log 2>&1 & echo $! >${BUILDDIR}/.toasterui.pid
fi
fi
if [ $start_success -eq 1 ]; then
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
else
# failed start, do stop
stop_system
echo "Failed ${CMD}."
fi
# stop system on terminal exit
set -o monitor
trap stop_system SIGHUP
echo "Successful ${CMD}."
return 0
#trap notify_chldexit SIGCHLD
;;
stop )
stop_system
echo "Successful ${CMD}."
;;
esac


@@ -1,126 +0,0 @@
#!/usr/bin/env python3
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2014 Alex Damian
#
# This file re-uses code spread throughout other Bitbake source files.
# As such, all other copyrights belong to their own right holders.
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This command takes a filename as a single parameter. The filename is read
as a build eventlog, and the ToasterUI is used to process events in the file
and log data in the database
"""
import os
import sys
import json
import pickle
import codecs
from collections import namedtuple
# mangle syspath to allow easy import of modules
from os.path import join, dirname, abspath
sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
import bb.cooker
from bb.ui import toasterui
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = json.loads(eventfile.readline().strip())['allvariables']
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = EventPlayer(eventfile, variables)
return toasterui.main(player, player, params)
# run toaster ui on our mock bitbake class
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <event file>" % os.path.basename(sys.argv[0]))
sys.exit(1)
sys.exit(main(sys.argv))


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -29,14 +29,14 @@ import warnings
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import pickle as pickle
import cPickle as pickle
def main(argv=None):
"""
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print("Error, need one argument!", file=sys.stderr)
print >>sys.stderr, "Error, need one argument!"
return 2
cachefile = argv[0]
@@ -56,7 +56,7 @@ def main(argv=None):
continue
# 1.0 is the default version for a no PV recipe.
if "pv" in val.__dict__:
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"


@@ -11,7 +11,7 @@
# validate: validates
# clean: removes files
#
# The Makefile generates an HTML version of every document. The
# The Makefile generates an HTML and PDF version of every document. The
# variable DOC indicates the folder name for a given manual.
#
# To build a manual, you must invoke 'make' with the DOC argument.
@@ -21,8 +21,8 @@
# make DOC=bitbake-user-manual
# make pdf DOC=bitbake-user-manual
#
# The first example generates the HTML version of the User Manual.
# The second example generates the PDF version of the User Manual.
# The first example generates the HTML and PDF versions of the User Manual.
# The second example generates the HTML version only of the User Manual.
#
ifeq ($(DOC),bitbake-user-manual)
@@ -31,9 +31,9 @@ XSLTOPTS = --stringparam html.stylesheet bitbake-user-manual-style.css \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html tarball
TARFILES = bitbake-user-manual-style.css bitbake-user-manual.html figures/bitbake-title.png
MANUALS = $(DOC)/$(DOC).html
ALLPREQ = html pdf tarball
TARFILES = bitbake-user-manual-style.css bitbake-user-manual.html bitbake-user-manual.pdf figures/bitbake-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css


@@ -1,15 +1,7 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<!--
<xsl:import href="../template/1.76.1/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<xsl:import href="http://docbook.sourceforge.net/release/xsl/1.76.1/xhtml/docbook.xsl" />
-->
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
<xsl:include href="../template/permalinks.xsl"/>
<xsl:include href="../template/section.title.xsl"/>


@@ -21,7 +21,7 @@
The execution process is launched using the following command
form:
<literallayout class='monospaced'>
$ bitbake <replaceable>target</replaceable>
$ bitbake &lt;target&gt;
</literallayout>
For information on the BitBake command and its options,
see
@@ -37,16 +37,14 @@
</para>
<para>
A common method to determine this value for your build host is to run
the following:
A common way to determine this value for your build host is to run:
<literallayout class='monospaced'>
$ grep processor /proc/cpuinfo
</literallayout>
This command returns the number of processors, which takes into
account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely
shows eight processors, which is the value you would then assign to
<filename>BB_NUMBER_THREADS</filename>.
and count the number of processors displayed. Note that the number of
processors will take into account hyper-threading, so that a quad-core
build host with hyper-threading will most likely show eight processors,
which is the value you would then assign to that variable.
</para>
<para>
@@ -116,35 +114,16 @@
Prior to parsing configuration files, Bitbake looks
at certain variables, including:
<itemizedlist>
<listitem><para>
<link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ORIGENV'><filename>BB_ORIGENV</filename></link>
</para></listitem>
<listitem><para><link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link></para></listitem>
<listitem><para><link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link></para></listitem>
<listitem><para><link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link></para></listitem>
<listitem><para>
<link linkend='var-BITBAKE_UI'><filename>BITBAKE_UI</filename></link>
</para></listitem>
</itemizedlist>
The first four variables in this list relate to how BitBake treats shell
environment variables during task execution.
By default, BitBake cleans the environment variables and provides tight
control over the shell execution environment.
However, through the use of these first four variables, you can
apply your control regarding the
environment variables allowed to be used by BitBake in the shell
during execution of tasks.
See the
"<link linkend='passing-information-into-the-build-task-environment'>Passing Information Into the Build Task Environment</link>"
section and the information about these variables in the
variable glossary for more information on how they work and
on how to use them.
You can find information on how to pass environment variables into the BitBake
execution environment in the
"<link linkend='passing-information-into-the-build-task-environment'>Passing Information Into the Build Task Environment</link>" section.
</para>
<para>
@@ -306,8 +285,8 @@
<link linkend='var-PN'><filename>PN</filename></link> and
<link linkend='var-PV'><filename>PV</filename></link>:
<literallayout class='monospaced'>
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[1] or '1.0'}"
</literallayout>
In this example, a recipe called "something_1.2.3.bb" would set
<filename>PN</filename> to "something" and <filename>PV</filename> to "1.2.3".
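<para>
A minimal plain-Python sketch of that file-name convention (an
illustration only, not the actual vars_from_file() implementation):
<literallayout class='monospaced'>
import os

def pn_pv_from_file(path):
    # "name_version.bb" -> ("name", "version"); missing pieces fall
    # back to "defaultpkgname" and "1.0" as described above.
    base = os.path.basename(path)
    if base.endswith(".bb"):
        base = base[:-3]
    name, _, version = base.partition("_")
    return name or "defaultpkgname", version or "1.0"

print(pn_pv_from_file("something_1.2.3.bb"))   # ('something', '1.2.3')
</literallayout>
</para>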
@@ -596,7 +575,7 @@
"<link linkend='checksums'>Checksums (Signatures)</link>"
section for information).
It is also possible to append extra metadata to the stamp using
the <filename>[stamp-extra-info]</filename> task flag.
the "stamp-extra-info" task flag.
For example, OpenEmbedded uses this flag to make some tasks machine-specific.
</para>
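<para>
For example, a task stamp could be made machine-specific with a
varflag such as the following (a hypothetical task name, shown for
illustration only):
<literallayout class='monospaced'>
do_sometask[stamp-extra-info] = "${MACHINE}"
</literallayout>
</para>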
@@ -653,8 +632,7 @@
</itemizedlist>
It is possible to have functions run before and after a task's main
function.
This is done using the <filename>[prefuncs]</filename>
and <filename>[postfuncs]</filename> flags of the task
This is done using the "prefuncs" and "postfuncs" flags of the task
that lists the functions to run.
</para>
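<para>
For example (hypothetical function names, shown for illustration
only):
<literallayout class='monospaced'>
do_deploy[prefuncs] += "my_pre_check"
do_deploy[postfuncs] += "my_post_cleanup"
</literallayout>
</para>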
</section>
@@ -804,13 +782,13 @@
make some dependency and hash information available to the build.
This information includes:
<itemizedlist>
<listitem><para><filename>BB_BASEHASH_task-</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BB_BASEHASH_task-&lt;taskname&gt;</filename>:
The base hashes for each task in the recipe.
</para></listitem>
<listitem><para><filename>BB_BASEHASH_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BB_BASEHASH_&lt;filename:taskname&gt;</filename>:
The base hashes for each dependent task.
</para></listitem>
<listitem><para><filename>BBHASHDEPS_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BBHASHDEPS_&lt;filename:taskname&gt;</filename>:
The task dependencies for each task.
</para></listitem>
<listitem><para><filename>BB_TASKHASH</filename>:
@@ -828,7 +806,7 @@
itself.
The simplest parameter to pass is "none", which causes a
set of signature information to be written out into
<filename>STAMPS_DIR</filename>
<filename>STAMP_DIR</filename>
corresponding to the targets specified.
The other currently available parameter is "printdiff",
which causes BitBake to try to establish the closest
@@ -916,7 +894,7 @@
<para>
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
<link linkend='var-BB_SETSCENE_VERIFY_FUNCTION2'><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename></link>
<link linkend='var-BB_SETSCENE_VERIFY_FUNCTION'><filename>BB_SETSCENE_VERIFY_FUNCTION</filename></link>
with the list of tasks BitBake thinks has been "covered".
The metadata can then ensure that this list is correct and can
inform BitBake that it wants specific tasks to be run regardless


@@ -38,7 +38,7 @@
The code to execute the first part of this process, a fetch,
looks something like the following:
<literallayout class='monospaced'>
src_uri = (d.getVar('SRC_URI') or "").split()
src_uri = (d.getVar('SRC_URI', True) or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
</literallayout>
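<para>
Putting the download and unpack steps together (an illustrative
sketch only; "d" is assumed to be an already-populated datastore):
<literallayout class='monospaced'>
import bb.fetch2

src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()                    # fetch sources into DL_DIR
fetcher.unpack(d.getVar('WORKDIR'))   # then unpack them into WORKDIR
</literallayout>
</para>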
@@ -52,7 +52,7 @@
<para>
The instantiation of the fetch class is usually followed by:
<literallayout class='monospaced'>
rootdir = l.getVar('WORKDIR')
rootdir = l.getVar('WORKDIR', True)
fetcher.unpack(rootdir)
</literallayout>
This code unpacks the downloaded files to the
@@ -157,8 +157,8 @@
<filename>SRC_URI</filename> variable with the appropriate
varflags as follows:
<literallayout class='monospaced'>
SRC_URI[md5sum] = "<replaceable>value</replaceable>"
SRC_URI[sha256sum] = "<replaceable>value</replaceable>"
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
</literallayout>
You can also specify the checksums as parameters on the
<filename>SRC_URI</filename> as shown below:
@@ -268,6 +268,15 @@
<link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>
variable is used in the same way
<filename>PATH</filename> is used to find executables.
Failing that,
<link linkend='var-FILESDIR'><filename>FILESDIR</filename></link>
is used to find the appropriate relative file.
<note>
<filename>FILESDIR</filename> is deprecated and can
be replaced with <filename>FILESPATH</filename>.
Because <filename>FILESDIR</filename> is likely to be
removed, you should not use this variable in any new code.
</note>
If the file cannot be found, it is assumed that it is available in
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
by the time the <filename>download()</filename> method is called.
@@ -316,25 +325,6 @@
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
</literallayout>
</para>
<note>
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
</literallayout>
Such URLs should be modified by replacing semi-colons with '&amp;' characters:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47"
</literallayout>
In most cases this should work. Treating semi-colons and '&amp;' in queries
identically is recommended by the World Wide Web Consortium (W3C).
Note that due to the nature of the URL, you may have to specify the name
of the downloaded file as well:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47;downloadfilename=myfile.bz2"
</literallayout>
</note>
</section>
<section id='cvs-fetcher'>
@@ -355,7 +345,7 @@
A special value of "now" causes the checkout to
be updated on every build.
</para></listitem>
<listitem><para><emphasis><link linkend='var-CVSDIR'><filename>CVSDIR</filename></link>:</emphasis>
<listitem><para><emphasis><filename>CVSDIR</filename>:</emphasis>
Specifies where a temporary checkout is saved.
The location is often <filename>DL_DIR/cvs</filename>.
</para></listitem>
@@ -376,8 +366,7 @@
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"method":</emphasis>
The protocol over which to communicate with the CVS
server.
The protocol over which to communicate with the CVS server.
By default, this protocol is "pserver".
If "method" is set to "ext", BitBake examines the
"rsh" parameter and sets <filename>CVS_RSH</filename>.
@@ -405,8 +394,7 @@
Effectively, you are renaming the output directory
to which the module is unpacked.
You are forcing the module into a special
directory relative to
<link linkend='var-CVSDIR'><filename>CVSDIR</filename></link>.
directory relative to <filename>CVSDIR</filename>.
</para></listitem>
<listitem><para><emphasis>"rsh"</emphasis>
Used in conjunction with the "method" parameter.
@@ -447,9 +435,9 @@
The executable used is specified by
<filename>FETCHCMD_svn</filename>, which defaults
to "svn".
The fetcher's temporary working directory is set by
<link linkend='var-SVNDIR'><filename>SVNDIR</filename></link>,
which is usually <filename>DL_DIR/svn</filename>.
The fetcher's temporary working directory is set
by <filename>SVNDIR</filename>, which is usually
<filename>DL_DIR/svn</filename>.
</para>
<para>
@@ -461,29 +449,25 @@
You can think of this parameter as the top-level
directory of the repository data you want.
</para></listitem>
<listitem><para><emphasis>"path_spec":</emphasis>
A specific directory in which to checkout the
specified svn module.
</para></listitem>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol to use, which defaults to "svn".
If "protocol" is set to "svn+ssh", the "ssh"
parameter is also used.
Other options are "svn+ssh" and "rsh".
For "rsh", the "rsh" parameter is also used.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision of the source code to checkout.
</para></listitem>
<listitem><para><emphasis>"date":</emphasis>
The date of the source code to checkout.
Specific revisions are generally much safer to check out
than dates because they do not involve timezones
(i.e. they are more deterministic).
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the “.svn” directories to be available during
compile-time when set to "keep".
By default, these directories are removed.
</para></listitem>
<listitem><para><emphasis>"ssh":</emphasis>
An optional parameter used when "protocol" is set
to "svn+ssh".
You can use this parameter to specify the ssh
program used by svn.
</para></listitem>
<listitem><para><emphasis>"transportuser":</emphasis>
When required, sets the username for the transport.
By default, this parameter is empty.
@@ -492,11 +476,10 @@
command.
</para></listitem>
</itemizedlist>
Following are three examples using svn:
Following are two examples using svn:
<literallayout class='monospaced'>
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
SRC_URI = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
SRC_URI = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
</literallayout>
</para>
</section>
@@ -508,9 +491,8 @@
This fetcher submodule fetches code from the Git
source control system.
The fetcher works by creating a bare clone of the
remote into
<link linkend='var-GITDIR'><filename>GITDIR</filename></link>,
which is usually <filename>DL_DIR/git2</filename>.
remote into <filename>GITDIR</filename>, which is
usually <filename>DL_DIR/git2</filename>.
This bare clone is then cloned into the work directory during the
unpack stage when a specific tree is checked out.
This is done using alternates and by reference to
@@ -644,7 +626,7 @@
<literallayout class='monospaced'>
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
PV = "${@d.getVar("SRCREV").replace("/", "+")}"
</literallayout>
The fetcher uses the <filename>rcleartool</filename> or
<filename>cleartool</filename> remote client, depending on
@@ -662,19 +644,13 @@
</para></listitem>
<listitem><para><emphasis><filename>module</filename></emphasis>:
The module, which must include the
prepending "/" character, in the selected VOB.
<note>
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the <filename>load</filename> rule in
the view config spec.
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
<filename>SRC_URI</filename> statement at the start of this section.
Combining those values results in the following:
<literallayout class='monospaced'>
load /example_vob/example_module
</literallayout>
</note>
prepending "/" character, in the selected VOB
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the following load rule in
the view config spec:
<literallayout class='monospaced'>
load &lt;vob&gt;&lt;module&gt;
</literallayout>
</para></listitem>
<listitem><para><emphasis><filename>proto</filename></emphasis>:
The protocol, which can be either <filename>http</filename> or
@@ -713,68 +689,6 @@
</para>
</section>
<section id='perforce-fetcher'>
<title>Perforce Fetcher (<filename>p4://</filename>)</title>
<para>
This fetcher submodule fetches code from the
<ulink url='https://www.perforce.com/'>Perforce</ulink>
source control system.
The executable used is specified by
<filename>FETCHCMD_p4</filename>, which defaults
to "p4".
The fetcher's temporary working directory is set by
<link linkend='var-P4DIR'><filename>P4DIR</filename></link>,
which defaults to "DL_DIR/p4".
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-PV'><filename>PV</filename></link> values.
The p4 executable is able to use the config file defined by your
system's <filename>P4CONFIG</filename> environment variable in
order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe
itself.
If you choose not to use <filename>P4CONFIG</filename>,
or to explicitly set variables that <filename>P4CONFIG</filename>
can contain, you can specify the <filename>P4PORT</filename> value,
which is the server's URL and port number, and you can
specify a username and password directly in your recipe within
<filename>SRC_URI</filename>.
</para>
<para>
Here is an example that relies on <filename>P4CONFIG</filename>
to specify the server URL and port, username, and password, and
fetches the Head Revision:
<literallayout class='monospaced'>
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
</para>
<para>
Here is an example that specifies the server URL and port,
username, and password, and fetches a Revision based on a Label:
<literallayout class='monospaced'>
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
<note>
You should always set <filename>S</filename>
to <filename>"${WORKDIR}/p4"</filename> in your recipe.
</note>
</para>
</section>
<section id='other-fetchers'>
<title>Other Fetchers</title>
@@ -784,6 +698,9 @@
<listitem><para>
Bazaar (<filename>bzr://</filename>)
</para></listitem>
<listitem><para>
Perforce (<filename>p4://</filename>)
</para></listitem>
<listitem><para>
Trees using Git Annex (<filename>gitannex://</filename>)
</para></listitem>


@@ -47,6 +47,7 @@
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rwxrwxr-x. 1 wmat wmat 3195 Jan 31 11:57 setup.py
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
</literallayout>
</para>
@@ -134,7 +135,7 @@
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
</para></listitem>
<listitem><para>
<ulink url="http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
<ulink url="https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
</para></listitem>
</itemizedlist>
</note>
@@ -220,7 +221,7 @@
<para>From your shell, enter the following commands to set and
export the <filename>BBPATH</filename> variable:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>projectdirectory</replaceable>"
$ BBPATH="&lt;projectdirectory&gt;"
$ export BBPATH
</literallayout>
Use your actual project directory in the command.
@@ -269,7 +270,7 @@
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename>,
see
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
@@ -354,7 +355,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
supporting.
For more information on the <filename>base.bbclass</filename> file,
you can look at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
</para></listitem>
<listitem><para><emphasis>Run Bitbake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
@@ -376,7 +377,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
Thus, this example creates and uses a layer called "mylayer".
<note>
You can find additional information on adding a layer at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
</note>
</para>
<para>Minimally, you need a recipe file and a layer configuration
@@ -399,7 +400,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
<link linkend='var-BBFILES'>BBFILES</link> += "${LAYERDIR}/*.bb"
<link linkend='var-BBFILE_COLLECTIONS'>BBFILE_COLLECTIONS</link> += "mylayer"
<link linkend='var-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR_RE}/"
<link linkend='var-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR}/"
</literallayout>
For information on these variables, click the links
to go to the definitions in the glossary.</para>
@@ -470,7 +471,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
NOTE: Preparing RunQueue
NOTE: Preparing runqueue
NOTE: Executing RunQueue Tasks
********************
* *


@@ -471,7 +471,7 @@
Following is the usage and syntax for BitBake:
<literallayout class='monospaced'>
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Usage: bitbake [options] [recipename/target ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
@@ -504,19 +504,9 @@
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x').
Also print bb.note(...) messages to stdout (in
addition to writing them to ${T}/log.do_&lt;task&gt;).
-D, --debug Increase the debug level. You can specify this
more than once. -D sets the debug level to 1,
where only bb.debug(1, ...) messages are printed
to stdout; -DD sets the debug level to 2, where
both bb.debug(1, ...) and bb.debug(2, ...)
messages are printed; etc. Without -D, no debug
messages are printed. Note that -D only affects
output to stdout. All debug messages are written
to ${T}/log.do_taskname, regardless of the debug
level.
-v, --verbose Output more log message data to the terminal.
-D, --debug Increase the debug level. You can specify this more
than once.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
@@ -539,11 +529,9 @@
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (taskexp, knotty or
ncurses - default knotty).
-u UI, --ui=UI The user interface to use (e.g. knotty, hob, depexp).
-t SERVERTYPE, --servertype=SERVERTYPE
Choose which server type to use (process or xmlrpc -
default process).
Choose which server to use, process or xmlrpc.
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
@@ -553,16 +541,11 @@
-B BIND, --bind=BIND The name/address for the bitbake server to bind to.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate the remote server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
</literallayout>
</para>
</section>
@@ -645,25 +628,6 @@
</para>
</section>
<section id='executing-a-list-of-task-and-recipe-combinations'>
<title>Executing a List of Task and Recipe Combinations</title>
<para>
The BitBake command line supports specifying different
tasks for individual targets when you specify multiple
targets.
For example, suppose you had two targets (or recipes)
<filename>myfirstrecipe</filename> and
<filename>mysecondrecipe</filename> and you needed
BitBake to run <filename>taskA</filename> for the first
recipe and <filename>taskB</filename> for the second
recipe:
<literallayout class='monospaced'>
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
</literallayout>
</para>
</section>
<section id='generating-dependency-graphs'>
<title>Generating Dependency Graphs</title>
@@ -676,21 +640,21 @@
</para>
<para>
When you generate a dependency graph, BitBake writes three files
When you generate a dependency graph, BitBake writes four files
to the current working directory:
<itemizedlist>
<listitem><para>
<emphasis><filename>recipe-depends.dot</filename>:</emphasis>
Shows dependencies between recipes (i.e. a collapsed version of
<filename>task-depends.dot</filename>).
<listitem><para><emphasis><filename>package-depends.dot</filename>:</emphasis>
Shows BitBake's knowledge of dependencies between
runtime targets.
</para></listitem>
<listitem><para>
<emphasis><filename>task-depends.dot</filename>:</emphasis>
<listitem><para><emphasis><filename>pn-depends.dot</filename>:</emphasis>
Shows dependencies between build-time targets
(i.e. recipes).
</para></listitem>
<listitem><para><emphasis><filename>task-depends.dot</filename>:</emphasis>
Shows dependencies between tasks.
These dependencies match BitBake's internal task execution list.
</para></listitem>
<listitem><para>
<emphasis><filename>pn-buildlist</filename>:</emphasis>
<listitem><para><emphasis><filename>pn-buildlist</filename>:</emphasis>
Shows a simple list of targets that are to be built.
</para></listitem>
</itemizedlist>
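<para>
For example (an illustrative invocation):
<literallayout class='monospaced'>
$ bitbake -g mytarget
</literallayout>
This writes the files listed above, for the hypothetical target
"mytarget", into the current working directory.
</para>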


@@ -43,8 +43,8 @@
<link linkend='var-DEFAULT_PREFERENCE'>D</link>
<link linkend='var-EXCLUDE_FROM_WORLD'>E</link>
<link linkend='var-FAKEROOT'>F</link>
<link linkend='var-GITDIR'>G</link>
<link linkend='var-HGDIR'>H</link>
<!-- <link linkend='var-GROUPADD_PARAM'>G</link> -->
<link linkend='var-HOMEPAGE'>H</link>
<!-- <link linkend='var-ICECC_DISABLED'>I</link> -->
<!-- <link linkend='var-glossary-j'>J</link> -->
<!-- <link linkend='var-KARCH'>K</link> -->
@@ -52,7 +52,7 @@
<link linkend='var-MIRRORS'>M</link>
<!-- <link linkend='var-glossary-n'>N</link> -->
<link linkend='var-OVERRIDES'>O</link>
<link linkend='var-P4DIR'>P</link>
<link linkend='var-PACKAGES'>P</link>
<!-- <link linkend='var-QMAKE_PROFILES'>Q</link> -->
<link linkend='var-RDEPENDS'>R</link>
<link linkend='var-SECTION'>S</link>
@@ -102,56 +102,6 @@
</glossdef>
</glossentry>
<glossentry id='var-BB_ALLOWED_NETWORKS'><glossterm>BB_ALLOWED_NETWORKS</glossterm>
<glossdef>
<para>
Specifies a space-delimited list of hosts that the fetcher
is allowed to use to obtain the required source code.
Following are considerations surrounding this variable:
<itemizedlist>
<listitem><para>
This host list is only used if
<link linkend='var-BB_NO_NETWORK'><filename>BB_NO_NETWORK</filename></link>
is either not set or set to "0".
</para></listitem>
<listitem><para>
Limited support for wildcard matching against the
beginning of host names exists.
For example, the following setting matches
<filename>git.gnu.org</filename>,
<filename>ftp.gnu.org</filename>, and
<filename>foo.git.gnu.org</filename>.
<literallayout class='monospaced'>
BB_ALLOWED_NETWORKS = "*.gnu.org"
</literallayout>
</para></listitem>
<listitem><para>
Mirrors not in the host list are skipped and
logged in debug.
</para></listitem>
<listitem><para>
Attempts to access networks not in the host list
cause a failure.
</para></listitem>
</itemizedlist>
Using <filename>BB_ALLOWED_NETWORKS</filename> in
conjunction with
<link linkend='var-PREMIRRORS'><filename>PREMIRRORS</filename></link>
is very useful.
Adding the host you want to use to
<filename>PREMIRRORS</filename> results in the source code
being fetched from an allowed location and avoids raising
an error when a host that is not allowed is in a
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
statement.
This is because the fetcher does not attempt to use the
host listed in <filename>SRC_URI</filename> after a
successful fetch from the
<filename>PREMIRRORS</filename> occurs.
</para>
</glossdef>
</glossentry>
<glossentry id='var-BB_CONSOLELOG'><glossterm>BB_CONSOLELOG</glossterm>
<glossdef>
<para>
@@ -716,7 +666,7 @@
</glossdef>
</glossentry>
<glossentry id='var-BB_SETSCENE_VERIFY_FUNCTION2'><glossterm>BB_SETSCENE_VERIFY_FUNCTION2</glossterm>
<glossentry id='var-BB_SETSCENE_VERIFY_FUNCTION'><glossterm>BB_SETSCENE_VERIFY_FUNCTION</glossterm>
<glossdef>
<para>
Specifies a function to call that verifies the list of
@@ -856,56 +806,6 @@
</glossdef>
</glossentry>
<glossentry id='var-BB_TASK_IONICE_LEVEL'><glossterm>BB_TASK_IONICE_LEVEL</glossterm>
<glossdef>
<para>
Allows adjustment of a task's Input/Output priority.
During Autobuilder testing, random failures can occur
for tasks due to I/O starvation.
These failures occur during various QEMU runtime timeouts.
You can use the <filename>BB_TASK_IONICE_LEVEL</filename>
variable to adjust the I/O priority of these tasks.
<note>
This variable works similarly to the
<link linkend='var-BB_TASK_NICE_LEVEL'><filename>BB_TASK_NICE_LEVEL</filename></link>
variable except with a task's I/O priorities.
</note>
</para>
<para>
Set the variable as follows:
<literallayout class='monospaced'>
BB_TASK_IONICE_LEVEL = "<replaceable>class</replaceable>.<replaceable>prio</replaceable>"
</literallayout>
For <replaceable>class</replaceable>, the default value is
"2", which is a best effort.
You can use "1" for realtime and "3" for idle.
If you want to use realtime, you must have superuser
privileges.
</para>
<para>
For <replaceable>prio</replaceable>, you can use any
value from "0", which is the highest priority, to "7",
which is the lowest.
The default value is "4".
You do not need any special privileges to use this range
of priority values.
<note>
In order for your I/O priority settings to take effect,
you need the Completely Fair Queuing (CFQ) Scheduler
selected for the backing block device.
To select the scheduler, use the following command form
where <replaceable>device</replaceable> is the device
(e.g. sda, sdb, and so forth):
<literallayout class='monospaced'>
$ sudo sh -c "echo cfq > /sys/block/<replaceable>device</replaceable>/queue/scheduler"
</literallayout>
</note>
</para>
</glossdef>
</glossentry>
<glossentry id='var-BB_TASK_NICE_LEVEL'><glossterm>BB_TASK_NICE_LEVEL</glossterm>
<glossdef>
<para>
@@ -972,7 +872,7 @@
that run on the target <filename>MACHINE</filename>;
"nativesdk", which targets the SDK machine instead of
<filename>MACHINE</filename>; and "multilibs" in the form
"<filename>multilib:</filename><replaceable>multilib_name</replaceable>".
"<filename>multilib:&lt;multilib_name&gt;</filename>".
</para>
<para>
@@ -984,31 +884,8 @@
metadata:
<literallayout class='monospaced'>
BBCLASSEXTEND =+ "native nativesdk"
BBCLASSEXTEND =+ "multilib:<replaceable>multilib_name</replaceable>"
BBCLASSEXTEND =+ "multilib:&lt;multilib_name&gt;"
</literallayout>
<note>
<para>
Internally, the <filename>BBCLASSEXTEND</filename>
mechanism generates recipe variants by rewriting
variable values and applying overrides such as
<filename>_class-native</filename>.
For example, to generate a native version of a recipe,
a
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
on "foo" is rewritten to a <filename>DEPENDS</filename>
on "foo-native".
</para>
<para>
Even when using <filename>BBCLASSEXTEND</filename>, the
recipe is only parsed once.
Parsing once adds some limitations.
For example, it is not possible to
include a different file depending on the variant,
since <filename>include</filename> statements are
processed when the recipe is parsed.
</para>
</note>
</para>
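<para>
For example (an illustrative recipe fragment), a recipe containing:
<literallayout class='monospaced'>
BBCLASSEXTEND = "native"
DEPENDS = "foo"
</literallayout>
also produces a "native" variant in which the dependency is
rewritten to "foo-native", as described in the note above.
</para>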
</glossdef>
</glossentry>
@@ -1017,7 +894,7 @@
<glossdef>
<para>
Sets the BitBake debug output level to a specific value
as incremented by the <filename>-D</filename> command line
as incremented by the <filename>-d</filename> command line
option.
<note>
You must set this variable in the external environment
@@ -1139,20 +1016,6 @@
</glossdef>
</glossentry>
<glossentry id='var-BBLAYERS_FETCH_DIR'><glossterm>BBLAYERS_FETCH_DIR</glossterm>
<glossdef>
<para>
Sets the base location where layers are stored.
By default, this location is set to
<filename>${COREBASE}</filename>.
This setting is used in conjunction with
<filename>bitbake-layers layerindex-fetch</filename> and
tells <filename>bitbake-layers</filename> where to place
the fetched layers.
</para>
</glossdef>
</glossentry>
<glossentry id='var-BBMASK'><glossterm>BBMASK</glossterm>
<glossdef>
<para>
@@ -1165,14 +1028,14 @@
to "hide" these <filename>.bb</filename> and
<filename>.bbappend</filename> files.
BitBake ignores any recipe or recipe append files that
match any of the expressions.
match the expression.
It is as if BitBake does not see them at all.
Consequently, matching files are not parsed or otherwise
used by BitBake.</para>
<para>
The values you provide are passed to Python's regular
The value you provide is passed to Python's regular
expression compiler.
The expressions are compared against the full paths to
The expression is compared against the full paths to
the files.
For complete syntax information, see Python's
documentation at
@@ -1188,16 +1051,18 @@
BBMASK = "meta-ti/recipes-misc/"
</literallayout>
If you want to mask out multiple directories or recipes,
you can specify multiple regular expression fragments.
use the vertical bar to separate the regular expression
fragments.
This next example masks out multiple directories and
individual recipes:
<literallayout class='monospaced'>
BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
BBMASK += "/meta-oe/recipes-support/"
BBMASK += "/meta-foo/.*/openldap"
BBMASK += "opencv.*\.bbappend"
BBMASK += "lzma"
BBMASK = "meta-ti/recipes-misc/|meta-ti/recipes-ti/packagegroup/"
BBMASK .= "|.*meta-oe/recipes-support/"
BBMASK .= "|.*openldap"
BBMASK .= "|.*opencv"
BBMASK .= "|.*lzma"
</literallayout>
Notice how the vertical bar is used to append the fragments.
<note>
When specifying a directory name, use the trailing
slash character to ensure you match just that directory
@@ -1226,9 +1091,9 @@
Set the variable as you would any environment variable
and then run BitBake:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>build_directory</replaceable>"
$ BBPATH="&lt;build_directory&gt;"
$ export BBPATH
$ bitbake <replaceable>target</replaceable>
$ bitbake &lt;target&gt;
</literallayout>
</para>
</glossdef>
@@ -1244,15 +1109,6 @@
</glossdef>
</glossentry>
<glossentry id='var-BBTARGETS'><glossterm>BBTARGETS</glossterm>
<glossdef>
<para>
Allows you to use a configuration file to add to the list
of command-line target recipes you want to build.
</para>
</glossdef>
</glossentry>
<glossentry id='var-BBVERSIONS'><glossterm>BBVERSIONS</glossterm>
<glossdef>
<para>
@@ -1298,15 +1154,6 @@
</glossdef>
</glossentry>
<glossentry id='var-BZRDIR'><glossterm>BZRDIR</glossterm>
<glossdef>
<para>
The directory in which files checked out of a Bazaar
system are stored.
</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-c'><title>C</title>
@@ -1321,15 +1168,6 @@
</glossdef>
</glossentry>
<glossentry id='var-CVSDIR'><glossterm>CVSDIR</glossterm>
<glossdef>
<para>
The directory in which files checked out under the
CVS system are stored.
</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-d'><title>D</title>
@@ -1539,6 +1377,24 @@
</glossdef>
</glossentry>
<glossentry id='var-FILESDIR'><glossterm>FILESDIR</glossterm>
<glossdef>
<para>
Specifies directories BitBake uses when searching for
patches and files.
The "local" fetcher module uses these directories when
handling <filename>file://</filename> URLs if the file
was not found using
<link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>.
<note>
The <filename>FILESDIR</filename> variable is
deprecated and you should use
<filename>FILESPATH</filename> in all new code.
</note>
</para>
</glossdef>
</glossentry>
<glossentry id='var-FILESPATH'><glossterm>FILESPATH</glossterm>
<glossdef>
<para>
@@ -1556,32 +1412,13 @@
</glossdiv>
<!--
<glossdiv id='var-glossary-g'><title>G</title>
<glossentry id='var-GITDIR'><glossterm>GITDIR</glossterm>
<glossdef>
<para>
The directory in which a local copy of a Git repository
is stored when it is cloned.
</para>
</glossdef>
</glossentry>
</glossdiv>
-->
<glossdiv id='var-glossary-h'><title>H</title>
<glossentry id='var-HGDIR'><glossterm>HGDIR</glossterm>
<glossdef>
<para>
The directory in which files checked out of a Mercurial
system are stored.
</para>
</glossdef>
</glossentry>
<glossentry id='var-HOMEPAGE'><glossterm>HOMEPAGE</glossterm>
<glossdef>
<para>Website where more information about the software the recipe is building
@@ -1641,17 +1478,6 @@
</glossdef>
</glossentry>
<glossentry id='var-LAYERDIR_RE'><glossterm>LAYERDIR_RE</glossterm>
<glossdef>
<para>When used inside the <filename>layer.conf</filename> configuration
file, this variable provides the path of the current layer,
escaped for use in a regular expression
(<link linkend='var-BBFILE_PATTERN'><filename>BBFILE_PATTERN</filename></link>).
This variable is not available outside of <filename>layer.conf</filename>
and references are expanded immediately when parsing of the file completes.</para>
</glossdef>
</glossentry>
<glossentry id='var-LAYERVERSION'><glossterm>LAYERVERSION</glossterm>
<glossdef>
<para>Optionally specifies the version of a layer as a single number.
@@ -1753,15 +1579,6 @@
<glossdiv id='var-glossary-p'><title>P</title>
<glossentry id='var-P4DIR'><glossterm>P4DIR</glossterm>
<glossdef>
<para>
The directory in which a local copy of a Perforce depot
is stored when it is fetched.
</para>
</glossdef>
</glossentry>
<glossentry id='var-PACKAGES'><glossterm>PACKAGES</glossterm>
<glossdef>
<para>The list of packages the recipe creates.
@@ -1958,27 +1775,6 @@
The <filename>PROVIDES</filename> statement results in
the "libav" recipe also being known as "libpostproc".
</para>
<para>
In addition to providing recipes under alternate names,
the <filename>PROVIDES</filename> mechanism is also used
to implement virtual targets.
A virtual target is a name that corresponds to some
particular functionality (e.g. a Linux kernel).
Recipes that provide the functionality in question list the
virtual target in <filename>PROVIDES</filename>.
Recipes that depend on the functionality in question can
include the virtual target in
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
to leave the choice of provider open.
</para>
<para>
Conventionally, virtual targets have names on the form
"virtual/function" (e.g. "virtual/kernel").
The slash is simply part of the name and has no
syntactical significance.
</para>
</glossdef>
</glossentry>
@@ -2055,7 +1851,7 @@
Here is the general syntax to specify versions with
the <filename>RDEPENDS</filename> variable:
<literallayout class='monospaced'>
RDEPENDS_${PN} = "<replaceable>package</replaceable> (<replaceable>operator</replaceable> <replaceable>version</replaceable>)"
RDEPENDS_${PN} = "&lt;package&gt; (&lt;operator&gt; &lt;version&gt;)"
</literallayout>
For <filename>operator</filename>, you can specify the
following:
@@ -2121,7 +1917,7 @@
Here is the general syntax to specify versions with
the <filename>RRECOMMENDS</filename> variable:
<literallayout class='monospaced'>
RRECOMMENDS_${PN} = "<replaceable>package</replaceable> (<replaceable>operator</replaceable> <replaceable>version</replaceable>)"
RRECOMMENDS_${PN} = "&lt;package&gt; (&lt;operator&gt; &lt;version&gt;)"
</literallayout>
For <filename>operator</filename>, you can specify the
following:
@@ -2304,15 +2100,6 @@
</glossdef>
</glossentry>
<glossentry id='var-SVNDIR'><glossterm>SVNDIR</glossterm>
<glossdef>
<para>
The directory in which files checked out of a Subversion
system are stored.
</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-t'><title>T</title>


@@ -56,7 +56,7 @@
-->
<copyright>
<year>2004-2016</year>
<year>2004-2015</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>


@@ -105,7 +105,7 @@ Show debug logging for the specified logging domains
profile the command and print a report
.TP
.B \-uUI, \-\-ui=UI
User interface to use. Currently, knotty, taskexp or ncurses can be specified as UI.
User interface to use. Currently, hob, depexp, goggle or ncurses can be specified as UI.
.TP
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
Choose which server to use, none, process or xmlrpc.


@@ -23,17 +23,19 @@
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
str
basestring
)
MUTABLE = "__mutable__"
@@ -59,7 +61,7 @@ class COWDictMeta(COWMeta):
__call__ = cow
def __setitem__(cls, key, value):
if value is not None and not isinstance(value, ImmutableTypes):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
@@ -114,7 +116,7 @@ class COWDictMeta(COWMeta):
cls.__setitem__(key, cls.__marker__)
def __revertitem__(cls, key):
if key not in cls.__dict__:
if not cls.__dict__.has_key(key):
key += MUTABLE
delattr(cls, key)
@@ -181,7 +183,7 @@ class COWSetMeta(COWDictMeta):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return repr(hash(value)) in COWDictMeta
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
@@ -190,10 +192,12 @@ class COWSetMeta(COWDictMeta):
raise TypeError("sets don't have 'items'")
# These are the actual classes you use!
class COWDictBase(object, metaclass = COWDictMeta):
class COWDictBase(object):
__metaclass__ = COWDictMeta
__count__ = 0
class COWSetBase(object, metaclass = COWSetMeta):
class COWSetBase(object):
__metaclass__ = COWSetMeta
__count__ = 0
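# Illustrative copy-on-write behaviour (mirrors the self-test below):
#   a = COWDictBase.copy()   # fresh COW dictionary
#   a['x'] = 1
#   b = a.copy()             # cheap child copy sharing a's storage
#   b['x'] = 2               # the write is local to b...
#   a['x']                   # ...so a still sees 1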
if __name__ == "__main__":
@@ -283,7 +287,7 @@ if __name__ == "__main__":
except KeyError:
print("Yay! deleted key raises error")
if 'b' in b:
if b.has_key('b'):
print("Boo!")
else:
print("Yay - has_key with delete works!")

View File

@@ -21,11 +21,11 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.34.0"
__version__ = "1.24.0"
import sys
if sys.version_info < (3, 4, 0):
raise RuntimeError("Sorry, python 3.4.0 or later is required for this version of bitbake")
if sys.version_info < (2, 7, 3):
raise RuntimeError("Sorry, python 2.7.3 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -70,8 +70,6 @@ logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)
mainlogger = logging.getLogger("BitBake.Main")
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
@@ -81,26 +79,26 @@ sys.modules['bb.fetch'] = sys.modules['bb.fetch2']
# Messaging convenience functions
def plain(*args):
mainlogger.plain(''.join(args))
logger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, str):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
if isinstance(lvl, basestring):
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.debug(lvl, ''.join(args))
logger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))
logger.info(''.join(args))
def warn(*args):
mainlogger.warning(''.join(args))
logger.warn(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def error(*args):
logger.error(''.join(args))
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
def fatal(*args):
logger.critical(''.join(args))
raise BBHandledException()
def deprecated(func, name=None, advice=""):
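A short sketch of calling the messaging wrappers above from library or
metadata code (assuming logging is configured as in this module):

import bb

bb.plain("printed to the user verbatim")
bb.note("informational message")        # logged at INFO
bb.debug(2, "verbose detail")           # debug level 2
bb.warn("something looks suspicious")   # logged at WARNING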

View File

@@ -31,43 +31,23 @@ import logging
import shlex
import glob
import time
import stat
import bb
import bb.msg
import bb.process
import bb.progress
from bb import data, event, utils
from contextlib import nested
from bb import event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
NULL = open(os.devnull, 'r+')
__mtime_cache = {}
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]
def reset_cache():
global __mtime_cache
__mtime_cache = {}
# When we execute a Python function, we'd like certain things
# in all namespaces, hence we add them to __builtins__.
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
builtins['bb'] = bb
builtins['os'] = os
__builtins__['bb'] = bb
__builtins__['os'] = os
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
@@ -91,14 +71,13 @@ class TaskBase(event.Event):
def __init__(self, t, logfile, d):
self._task = t
self._package = d.getVar("PF")
self._mc = d.getVar("BB_CURRENT_MC")
self.taskfile = d.getVar("FILE")
self._package = d.getVar("PF", True)
self.taskfile = d.getVar("FILE", True)
self.taskname = self._task
self.logfile = logfile
self.time = time.time()
event.Event.__init__(self)
self._message = "recipe %s: task %s: %s" % (d.getVar("PF"), t, self.getDisplayName())
self._message = "recipe %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
def getTask(self):
return self._task
@@ -139,25 +118,6 @@ class TaskInvalid(TaskBase):
super(TaskInvalid, self).__init__(task, None, metadata)
self._message = "No such task '%s'" % task
class TaskProgress(event.Event):
"""
Task made some progress that could be reported to the user, usually in
the form of a progress bar or similar.
NOTE: this class does not inherit from TaskBase since it doesn't need
to - it's fired within the task context itself, so we don't have any of
the context information that you do in the case of the other events.
The event PID can be used to determine which task it came from.
The progress value is normally 0-100, but can also be negative
indicating that progress has been made but we aren't able to determine
how much.
The rate is optional, this is simply an extra string to display to the
user if specified.
"""
def __init__(self, progress, rate=None):
self.progress = progress
self.rate = rate
event.Event.__init__(self)
class LogTee(object):
def __init__(self, logger, outfile):
@@ -181,27 +141,23 @@ class LogTee(object):
def flush(self):
self.outfile.flush()
#
# pythonexception allows the python exceptions generated to be raised
# as the real exceptions (not FuncFailed) and without a backtrace at the
# origin of the failure.
#
def exec_func(func, d, dirs = None, pythonexception=False):
def exec_func(func, d, dirs = None):
"""Execute a BB 'function'"""
try:
oldcwd = os.getcwd()
except:
oldcwd = None
body = d.getVar(func)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
return
flags = d.getVarFlags(func)
cleandirs = flags.get('cleandirs') if flags else None
cleandirs = flags.get('cleandirs')
if cleandirs:
for cdir in d.expand(cleandirs).split():
bb.utils.remove(cdir, True)
bb.utils.mkdirhier(cdir)
if flags and dirs is None:
if dirs is None:
dirs = flags.get('dirs')
if dirs:
dirs = d.expand(dirs).split()
@@ -211,13 +167,8 @@ def exec_func(func, d, dirs = None, pythonexception=False):
bb.utils.mkdirhier(adir)
adir = dirs[-1]
else:
adir = None
body = d.getVar(func, False)
if not body:
if body is None:
logger.warning("Function %s doesn't exist", func)
return
adir = d.getVar('B', True)
bb.utils.mkdirhier(adir)
ispython = flags.get('python')
@@ -227,17 +178,17 @@ def exec_func(func, d, dirs = None, pythonexception=False):
else:
lockfiles = None
tempdir = d.getVar('T')
tempdir = d.getVar('T', True)
# or func allows items to be executed outside of the normal
# task set, such as buildhistory
task = d.getVar('BB_RUNTASK') or func
task = d.getVar('BB_RUNTASK', True) or func
if task == func:
taskfunc = task
else:
taskfunc = "%s.%s" % (task, func)
runfmt = d.getVar('BB_RUNFMT') or "run.{func}.{pid}"
runfmt = d.getVar('BB_RUNFMT', True) or "run.{func}.{pid}"
runfn = runfmt.format(taskfunc=taskfunc, task=task, func=func, pid=os.getpid())
runfile = os.path.join(tempdir, runfn)
bb.utils.mkdirhier(os.path.dirname(runfile))
@@ -258,30 +209,22 @@ def exec_func(func, d, dirs = None, pythonexception=False):
with bb.utils.fileslocked(lockfiles):
if ispython:
exec_func_python(func, d, runfile, cwd=adir, pythonexception=pythonexception)
exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)
try:
curcwd = os.getcwd()
except:
curcwd = None
if oldcwd and curcwd != oldcwd:
try:
bb.warn("Task %s changed cwd to %s" % (func, curcwd))
os.chdir(oldcwd)
except:
pass
_functionfmt = """
def {function}(d):
{body}
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
code = _functionfmt.format(function=func)
bbfile = d.getVar('FILE', True)
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
bb.data.emit_func_python(func, script, d)
@@ -289,26 +232,18 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
if cwd:
try:
olddir = os.getcwd()
except OSError as e:
bb.warn("%s: Cannot get cwd: %s" % (func, e))
except OSError:
olddir = None
os.chdir(cwd)
bb.debug(2, "Executing python function %s" % func)
try:
text = "def %s(d):\n%s" % (func, d.getVar(func, False))
fn = d.getVarFlag(func, "filename", False)
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated", pythonexception=pythonexception)
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
except (bb.parse.SkipRecipe, bb.build.FuncFailed):
raise
except:
if pythonexception:
raise
raise FuncFailed(func, None)
finally:
bb.debug(2, "Python function %s finished" % func)
@@ -316,8 +251,8 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
if cwd and olddir:
try:
os.chdir(olddir)
except OSError as e:
bb.warn("%s: Cannot restore cwd %s: %s" % (func, olddir, e))
except OSError:
pass
def shell_trap_code():
return '''#!/bin/sh\n
@@ -327,8 +262,9 @@ bb_exit_handler() {
case $ret in
0) ;;
*) case $BASH_VERSION in
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from '$BASH_COMMAND'";;
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from
\"$BASH_COMMAND\"";;
esac
exit $ret
esac
@@ -362,14 +298,14 @@ def exec_func_shell(func, d, runfile, cwd=None):
# cleanup
ret=$?
trap '' 0
exit $ret
exit $?
''')
os.chmod(runfile, 0o775)
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot', False):
fakerootcmd = d.getVar('FAKEROOT')
if d.getVarFlag(func, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -378,75 +314,14 @@ exit $ret
else:
logfile = sys.stdout
progress = d.getVarFlag(func, 'progress')
if progress:
if progress == 'percent':
# Use default regex
logfile = bb.progress.BasicProgressHandler(d, outfile=logfile)
elif progress.startswith('percent:'):
# Use specified regex
logfile = bb.progress.BasicProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
elif progress.startswith('outof:'):
# Use specified regex
logfile = bb.progress.OutOfProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
else:
bb.warn('%s: invalid task progress varflag value "%s", ignoring' % (func, progress))
bb.debug(2, "Executing shell function %s" % func)
fifobuffer = bytearray()
def readfifo(data):
nonlocal fifobuffer
fifobuffer.extend(data)
while fifobuffer:
message, token, nextmsg = fifobuffer.partition(b"\00")
if token:
splitval = message.split(b' ', 1)
cmd = splitval[0].decode("utf-8")
if len(splitval) > 1:
value = splitval[1].decode("utf-8")
else:
value = ''
if cmd == 'bbplain':
bb.plain(value)
elif cmd == 'bbnote':
bb.note(value)
elif cmd == 'bbwarn':
bb.warn(value)
elif cmd == 'bberror':
bb.error(value)
elif cmd == 'bbfatal':
# The caller will call exit themselves, so bb.error() is
# what we want here rather than bb.fatal()
bb.error(value)
elif cmd == 'bbfatal_log':
bb.error(value, forcelog=True)
elif cmd == 'bbdebug':
splitval = value.split(' ', 1)
level = int(splitval[0])
value = splitval[1]
bb.debug(level, value)
else:
bb.warn("Unrecognised command '%s' on FIFO" % cmd)
fifobuffer = nextmsg
else:
break
tempdir = d.getVar('T')
fifopath = os.path.join(tempdir, 'fifo.%s' % os.getpid())
if os.path.exists(fifopath):
os.unlink(fifopath)
os.mkfifo(fifopath)
with open(fifopath, 'r+b', buffering=0) as fifo:
try:
bb.debug(2, "Executing shell function %s" % func)
try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE')
raise FuncFailed(func, logfn)
finally:
os.unlink(fifopath)
try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
bb.debug(2, "Shell function %s finished" % func)
@@ -466,7 +341,7 @@ def _exec_task(fn, task, d, quieterr):
Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
if not d.getVarFlag(task, 'task'):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -474,29 +349,22 @@ def _exec_task(fn, task, d, quieterr):
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")
# Change nice level if we're asked to
nice = localdata.getVar("BB_TASK_NICE_LEVEL")
nice = localdata.getVar("BB_TASK_NICE_LEVEL", True)
if nice:
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
cls, prio = ionice.split(".", 1)
bb.utils.ioprio_set(os.getpid(), int(cls), int(prio))
except:
bb.warn("Invalid ionice level %s" % ionice)
bb.utils.mkdirhier(tempdir)
# Determine the logfile to generate
logfmt = localdata.getVar('BB_LOGFMT') or 'log.{task}.{pid}'
logfmt = localdata.getVar('BB_LOGFMT', True) or 'log.{task}.{pid}'
logbase = logfmt.format(task=task, pid=os.getpid())
# Document the order of the tasks...
@@ -527,10 +395,7 @@ def _exec_task(fn, task, d, quieterr):
self.triggered = False
logging.Handler.__init__(self, logging.ERROR)
def emit(self, record):
if getattr(record, 'forcelog', False):
self.triggered = False
else:
self.triggered = True
self.triggered = True
# Handle logfiles
si = open('/dev/null', 'r')
@@ -563,36 +428,24 @@ def _exec_task(fn, task, d, quieterr):
localdata.setVar('BB_LOGFILE', logfn)
localdata.setVar('BB_RUNTASK', task)
localdata.setVar('BB_TASK_LOGGER', bblogger)
flags = localdata.getVarFlags(task)
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
try:
try:
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
except FuncFailed as exc:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
return 1
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
except bb.BBHandledException:
event.fire(TaskFailed(task, logfn, localdata, True), localdata)
return 1
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
@@ -617,7 +470,7 @@ def _exec_task(fn, task, d, quieterr):
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
return 0
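A minimal sketch of invoking a single task through the entry point
above (the file and task names are illustrative, and d is assumed to be
a fully parsed recipe datastore):

import bb.build

rc = bb.build.exec_task("/work/foo_1.0.bb", "do_build", d, profile=False)
# rc is 0 on success and 1 on failure; events are fired either way.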
@@ -625,11 +478,11 @@ def _exec_task(fn, task, d, quieterr):
def exec_task(fn, task, d, profile = False):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors", False) is not None:
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
if profile:
profname = "profile-%s.log" % (d.getVar("PN") + "-" + task)
profname = "profile-%s.log" % (d.getVar("PN", True) + "-" + task)
try:
import cProfile as profile
except:
@@ -652,7 +505,7 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
def stamp_internal(taskname, d, file_name, baseonly=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
@@ -666,17 +519,15 @@ def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
@@ -684,7 +535,7 @@ def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
if bb.parse.cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
return stamp
@@ -702,12 +553,12 @@ def stamp_cleanmask_internal(taskname, d, file_name):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stampclean[file_name]
stamp = d.stamp_base_clean[file_name].get(taskflagname) or d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
stamp = d.getVarFlag(taskflagname, 'stamp-base-clean', True) or d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return []
@@ -725,7 +576,7 @@ def make_stamp(task, d, file_name = None):
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
if "sigdata" in name or "sigbasedata" in name:
if "sigdata" in name:
continue
# Preserve taint files in the stamps directory
if name.endswith('.taint'):
@@ -743,7 +594,7 @@ def make_stamp(task, d, file_name = None):
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
def del_stamp(task, d, file_name = None):
@@ -765,22 +616,22 @@ def write_taint(task, d, file_name = None):
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
taintfn = d.getVar('STAMP', True) + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important,
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def stampfile(taskname, d, file_name = None, noextra=False):
def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
return stamp_internal(taskname, d, file_name)
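A sketch of querying a task's stamp path via the helper above (d is
assumed to be a parsed recipe datastore; the task name is illustrative):

import bb.build

stamp = bb.build.stampfile("do_compile", d)   # path the stamp would use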
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
def add_tasks(tasklist, deltasklist, d):
task_deps = d.getVar('_task_deps')
if not task_deps:
task_deps = {}
if not 'tasks' in task_deps:
@@ -791,6 +642,9 @@ def add_tasks(tasklist, d):
for task in tasklist:
task = d.expand(task)
if task in deltasklist:
continue
d.setVarFlag(task, 'task', 1)
if not task in task_deps['tasks']:
@@ -827,12 +681,12 @@ def addtask(task, before, after, d):
task = "do_" + task
d.setVarFlag(task, "task", 1)
bbtasks = d.getVar('__BBTASKS', False) or []
if task not in bbtasks:
bbtasks = d.getVar('__BBTASKS') or []
if not task in bbtasks:
bbtasks.append(task)
d.setVar('__BBTASKS', bbtasks)
existing = d.getVarFlag(task, "deps", False) or []
existing = d.getVarFlag(task, "deps") or []
if after is not None:
# set up deps for function
for entry in after.split():
@@ -842,7 +696,7 @@ def addtask(task, before, after, d):
if before is not None:
# set up things that depend on this func
for entry in before.split():
existing = d.getVarFlag(entry, "deps", False) or []
existing = d.getVarFlag(entry, "deps") or []
if task not in existing:
d.setVarFlag(entry, "deps", [task] + existing)
@@ -850,58 +704,8 @@ def deltask(task, d):
if task[:3] != "do_":
task = "do_" + task
bbtasks = d.getVar('__BBTASKS', False) or []
if task in bbtasks:
bbtasks.remove(task)
d.delVarFlag(task, 'task')
d.setVar('__BBTASKS', bbtasks)
bbtasks = d.getVar('__BBDELTASKS') or []
if not task in bbtasks:
bbtasks.append(task)
d.setVar('__BBDELTASKS', bbtasks)
d.delVarFlag(task, 'deps')
for bbtask in d.getVar('__BBTASKS', False) or []:
deps = d.getVarFlag(bbtask, 'deps', False) or []
if task in deps:
deps.remove(task)
d.setVarFlag(bbtask, 'deps', deps)
def preceedtask(task, with_recrdeptasks, d):
"""
Returns a set of tasks in the current recipe which were specified as
precondition by the task itself ("after") or which listed themselves
as precondition ("before"). Preceeding tasks specified via the
"recrdeptask" are included in the result only if requested. Beware
that this may lead to the task itself being listed.
"""
preceed = set()
preceed.update(d.getVarFlag(task, 'deps') or [])
if with_recrdeptasks:
recrdeptask = d.getVarFlag(task, 'recrdeptask')
if recrdeptask:
preceed.update(recrdeptask.split())
return preceed
def tasksbetween(task_start, task_end, d):
"""
Return the list of tasks between two tasks in the current recipe,
where task_start is the task to start at and task_end is the task to end at
(and task_end has a dependency chain back to task_start).
"""
outtasks = []
tasks = list(filter(lambda k: d.getVarFlag(k, "task"), d.keys()))
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
chain.append(task)
for othertask in tasks:
if othertask == task:
continue
if task == endtask:
for ctask in chain:
if ctask not in outtasks:
outtasks.append(ctask)
else:
deps = d.getVarFlag(othertask, 'deps', False)
if task in deps:
follow_chain(othertask, endtask, chain)
chain.pop()
follow_chain(task_start, task_end)
return outtasks
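A sketch of the task-registration helpers above as they might be called
during parsing (task names are illustrative; d is an existing datastore):

import bb.build

# Register do_mytask to run before do_install and after do_compile.
bb.build.addtask('do_mytask', 'do_install', 'do_compile', d)

# Remove it again; this also drops it from other tasks' 'deps' lists.
bb.build.deltask('do_mytask', d)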

View File

@@ -28,16 +28,22 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import pickle
from collections import defaultdict
import bb.utils
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "151"
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "148"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -71,16 +77,16 @@ class RecipeInfoCommon(object):
@classmethod
def flaglist(cls, flag, varlist, metadata, squash=False):
out_dict = dict((var, metadata.getVarFlag(var, flag))
out_dict = dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
if squash:
return dict((k,v) for (k,v) in out_dict.items() if v)
return dict((k,v) for (k,v) in out_dict.iteritems() if v)
else:
return out_dict
@classmethod
def getvar(cls, var, metadata, expand = True):
return metadata.getVar(var, expand) or ''
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
class CoreRecipeInfo(RecipeInfoCommon):
@@ -93,7 +99,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('BB_DONT_CACHE', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -123,6 +129,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_base_clean = self.flaglist('stamp-base-clean', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -134,11 +142,10 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.inherits = self.getvar('__inherit_cache', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@classmethod
def init_cacheData(cls, cachedata):
@@ -151,6 +158,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.stamp = {}
cachedata.stampclean = {}
cachedata.stamp_base = {}
cachedata.stamp_base_clean = {}
cachedata.stamp_extrainfo = {}
cachedata.file_checksums = {}
cachedata.fn_provides = {}
@@ -174,7 +183,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
@@ -184,6 +192,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stampclean[fn] = self.stampclean
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_base_clean[fn] = self.stamp_base_clean
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
cachedata.file_checksums[fn] = self.file_checksums
@@ -210,8 +220,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
rprovides += self.rprovides_pkg[package]
for rprovide in rprovides:
if fn not in cachedata.rproviders[rprovide]:
cachedata.rproviders[rprovide].append(fn)
cachedata.rproviders[rprovide].append(fn)
for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)
@@ -234,7 +243,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.universe_target.append(self.pn)
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.items():
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
@@ -242,146 +251,24 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('multiconfig:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls, mc)
def realfn2virtual(realfn, cls, mc):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls:
realfn = "virtual:" + cls + ":" + realfn
if mc:
realfn = "multiconfig:" + mc + ":" + realfn
return realfn
def variant2virtual(realfn, variant):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
if variant.startswith("multiconfig:"):
elems = variant.split(":")
if elems[2]:
return "multiconfig:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
return "multiconfig:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first checks whether there is a TOPDIR and, if not,
# calls getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["multiconfig:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
class Cache(object):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, data_hash, caches_array):
super().__init__(databuilder)
data = databuilder.data
def __init__(self, data, data_hash, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE")
self.cachedir = data.getVar("CACHE", True)
self.clean = set()
self.checked = set()
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash
@@ -401,78 +288,72 @@ class Cache(NoCache):
cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
def load_cachefile(self):
# Firstly, using core cache file information for
# valid checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
cachesize = 0
previous_progress = 0
previous_percent = 0
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
# Load the rest of the cache file
current_progress = 0
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if not isinstance(key, str):
bb.warn("%s from extras cache is not a string?" % key)
break
if not isinstance(value, RecipeInfoCommon):
bb.warn("%s from extras cache is not a RecipeInfoCommon class?" % value)
break
if key in self.depends_cache:
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
if current_progress > cachesize:
# we might have calculated incorrect total size because a file
# might've been written out just after we checked its size
cachesize = current_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
previous_progress += current_progress
previous_progress += current_progress
# Note: the depends cache count corresponds to the number of parsed files.
# The same file may have several caches but is still regarded as one item
# in the cache.
@@ -480,33 +361,69 @@ class Cache(NoCache):
len(self.depends_cache)),
self.data)
def parse(self, filename, appends):
@staticmethod
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls)
@staticmethod
def realfn2virtual(realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
return realfn
return "virtual:" + cls + ":" + realfn
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
logger.debug(1, "Parsing %s (full)", fn)
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
"""Parse the specified filename, returning the recipe information"""
logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends)
datastores = cls.load_bbfile(filename, appends, configdata)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
for variant, data in sorted(datastores.items(),
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = variant2virtual(filename, variant)
variants.append(variant)
virtualfn = cls.realfn2virtual(filename, variant)
depends = depends + (data.getVar("__depends", False) or [])
if depends and not variant:
data.setVar("__depends", depends)
if virtualfn == filename:
data.setVar("__VARIANTS", " ".join(variants))
info_array = []
for cache_class in self.caches_array:
info = cache_class(filename, data)
info_array.append(info)
for cache_class in caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
return infos
def load(self, filename, appends):
def load(self, filename, appends, configdata):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
@@ -520,20 +437,21 @@ class Cache(NoCache):
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata, self.caches_array)
return cached, infos
def loadData(self, fn, appends, cacheData):
def loadData(self, fn, appends, cfgData, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
@@ -610,20 +528,7 @@ class Cache(NoCache):
if hasattr(info_array[0], 'file_checksums'):
for _, fl in info_array[0].file_checksums.items():
fl = fl.strip()
while fl:
# A .split() would be simpler but means spaces or colons in filenames would break
a = fl.find(":True")
b = fl.find(":False")
if ((a < 0) and b) or ((b > 0) and (b < a)):
f = fl[:b+6]
fl = fl[b+7:]
elif ((b < 0) and a) or ((a > 0) and (a < b)):
f = fl[:a+5]
fl = fl[a+6:]
else:
break
fl = fl.strip()
for f in fl.split():
if "*" in f:
continue
f, exist = f.split(":")
@@ -641,19 +546,16 @@ class Cache(NoCache):
invalid = False
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
logger.debug(2, "Cache: Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
if invalid:
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
@@ -690,19 +592,30 @@ class Cache(NoCache):
logger.debug(2, "Cache is clean, not saving.")
return
file_dict = {}
pickler_dict = {}
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
p.dump(bb.__version__)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
file_dict[cache_class_name] = open(cachefile, "wb")
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
for key, info_array in self.depends_cache.items():
for info in info_array:
if isinstance(info, RecipeInfoCommon) and info.__class__.__name__ == cache_class_name:
p.dump(key)
p.dump(info)
try:
for key, info_array in self.depends_cache.iteritems():
for info in info_array:
if isinstance(info, RecipeInfoCommon):
cache_class_name = info.__class__.__name__
pickler_dict[cache_class_name].dump(key)
pickler_dict[cache_class_name].dump(info)
finally:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
file_dict[cache_class_name].close()
del self.depends_cache
@@ -730,13 +643,50 @@ class Cache(NoCache):
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
realfn = self.virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
@staticmethod
def load_bbfile(bbfile, appends, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import data, parse
# expand tmpdir to include this topdir
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
bb_data = data.init_db(config)
# The ConfHandler first checks whether there is a TOPDIR and, if not,
# calls getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not data.getVar('TOPDIR', bb_data):
chdir_back = True
data.setVar('TOPDIR', bbfile_loc, bb_data)
try:
if appends:
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
def init(cooker):
"""
@@ -766,9 +716,8 @@ class CacheData(object):
def __init__(self, caches_array):
self.caches_array = caches_array
for cache_class in self.caches_array:
if not issubclass(cache_class, RecipeInfoCommon):
bb.error("Extra cache data class %s should subclass RecipeInfoCommon class" % cache_class)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class.init_cacheData(self)
# Direct cache variables
self.task_queues = {}
@@ -795,14 +744,13 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
def init_cache(self, d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
self.cachefile = os.path.join(cachedir, self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -826,7 +774,7 @@ class MultiProcessCache(object):
data = [{}]
return data
def save_extras(self):
def save_extras(self, d):
if not self.cachefile:
return
@@ -856,7 +804,7 @@ class MultiProcessCache(object):
if h not in dest[j]:
dest[j][h] = source[j][h]
def save_merge(self):
def save_merge(self, d):
if not self.cachefile:
return

View File

@@ -15,17 +15,22 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glob
import operator
import os
import stat
import pickle
import bb.utils
import logging
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -83,52 +88,3 @@ class FileChecksumCache(MultiProcessCache):
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]
def get_checksums(self, filelist, pn):
"""Get checksums for a list of files"""
def checksum_file(f):
try:
checksum = self.get_checksum(f)
except OSError as e:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
def checksum_dir(pth):
# Handle directories recursively
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue
pth = pth.split(":")[0]
if '*' in pth:
# Handle globs
for f in glob.glob(pth):
if os.path.isdir(f):
if not os.path.islink(f):
checksums.extend(checksum_dir(f))
else:
checksum = checksum_file(f)
if checksum:
checksums.append((f, checksum))
elif os.path.isdir(pth):
if not os.path.islink(pth):
checksums.extend(checksum_dir(pth))
else:
checksum = checksum_file(pth)
if checksum:
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums
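A sketch of driving the checksum cache on the side of the diff that
still carries get_checksums (the path and recipe name are illustrative,
and d is assumed to be a configured datastore):

from bb.checksum import FileChecksumCache

cache = FileChecksumCache()
cache.init_cache(d)

# Entries are "path:exists" pairs, as produced by the file-checksums
# varflag; globs and directories are handled as shown above.
for path, csum in cache.get_checksums("/work/files/defconfig:True", "foo"):
    print(path, csum)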

View File

@@ -1,39 +1,21 @@
"""
BitBake code parser
Parses actual code (i.e. python and shell) for functions and in-line
expressions. Used mainly to determine dependencies on other functions
and variables within the BitBake metadata. Also provides a cache for
this information in order to speed up processing.
(Not to be confused with the code that parses the metadata itself,
see lib/bb/parse/ for that).
NOTE: if you change how the parsers gather information you will almost
certainly need to increment CodeParserCache.CACHE_VERSION below so that
any existing codeparser cache gets invalidated. Additionally you'll need
to increment __cache_version__ in cache.py in order to ensure that old
recipe caches don't trigger "Taskhash mismatch" errors.
"""
import ast
import sys
import codegen
import logging
import pickle
import bb.pysh as pysh
import os.path
import bb.utils, bb.data
import hashlib
from itertools import chain
from bb.pysh import pyshyacc, pyshlex, sherrors
from pysh import pyshyacc, pyshlex, sherrors
from bb.cache import MultiProcessCache
logger = logging.getLogger('BitBake.CodeParser')
def bbhash(s):
return hashlib.md5(s.encode("utf-8")).hexdigest()
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
@@ -46,10 +28,6 @@ def check_indent(codestr):
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
if codestr[0] == "\n":
# Since we're adding a line, we need to remove one line of any empty padding
# to ensure line numbers are correct
codestr = codestr[1:]
return "if 1:\n" + codestr
return codestr
@@ -86,12 +64,11 @@ class SetCache(object):
new = []
for i in items:
new.append(sys.intern(i))
new.append(intern(i))
s = frozenset(new)
h = hash(s)
if h in self.setcache:
return self.setcache[h]
self.setcache[h] = s
if hash(s) in self.setcache:
return self.setcache[hash(s)]
self.setcache[hash(s)] = s
return s
codecache = SetCache()
@@ -115,9 +92,6 @@ class pythonCacheLine(object):
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
def __repr__(self):
return " ".join([str(self.refs), str(self.execs), str(self.contains)])
class shellCacheLine(object):
def __init__(self, execs):
@@ -131,16 +105,10 @@ class shellCacheLine(object):
self.__init__(execs)
def __hash__(self):
return hash(self.execs)
def __repr__(self):
return str(self.execs)
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
# NOTE: you must increment this if you change how the parsers gather information,
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 9
CACHE_VERSION = 7
def __init__(self):
MultiProcessCache.__init__(self)
@@ -171,10 +139,6 @@ class CodeParserCache(MultiProcessCache):
return cacheline
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
@@ -190,11 +154,11 @@ codeparsercache = CodeParserCache()
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
def parser_cache_save(d):
codeparsercache.save_extras(d)
def parser_cache_savemerge():
codeparsercache.save_merge()
def parser_cache_savemerge(d):
codeparsercache.save_merge(d)
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -209,15 +173,12 @@ class BufferedLogger(Logger):
def flush(self):
for record in self.buffer:
if self.target.isEnabledFor(record.levelno):
self.target.handle(record)
self.target.handle(record)
self.buffer = []
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
containsfuncs = ("bb.utils.contains", "base_contains")
containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
containsfuncs = ("bb.utils.contains", "base_contains", "oe.utils.contains", "bb.utils.contains_any")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
@@ -236,37 +197,17 @@ class PythonParser():
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if name and name.endswith(self.getvars) or name in self.containsfuncs:
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
self.references.add(varname)
else:
self.references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
self.execs |= parser.execs
for varname in parser.contains:
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
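A minimal sketch of what the parser above extracts from a fragment of
metadata Python (newer-API form; the code string is illustrative):

import logging
from bb.codeparser import PythonParser

fragment = 'd.getVar("FOO")\nbb.build.exec_func("do_patch", d)'
parser = PythonParser("example", logging.getLogger("BitBake"))
parser.parse_python(fragment)
print(parser.references)   # -> {'FOO'}  (variables the code reads)
print(parser.var_execs)    # -> {'do_patch'}  (functions run via exec_func)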
@@ -289,7 +230,6 @@ class PythonParser():
break
def __init__(self, name, log):
self.name = name
self.var_execs = set()
self.contains = {}
self.execs = set()
@@ -299,11 +239,8 @@ class PythonParser():
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
h = bbhash(str(node))
def parse_python(self, node):
h = hash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
@@ -321,9 +258,7 @@ class PythonParser():
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
for n in ast.walk(code):
@@ -348,7 +283,7 @@ class ShellParser():
commands it executes.
"""
h = bbhash(str(value))
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = set(codeparsercache.shellcache[h].execs)
@@ -371,7 +306,8 @@ class ShellParser():
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
self.process_tokens(tokens)
for token in tokens:
self.process_tokens(token)
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
@@ -417,24 +353,18 @@ class ShellParser():
"case_clause": case_clause,
}
def process_token_list(tokens):
for token in tokens:
if isinstance(token, list):
process_token_list(token)
continue
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
process_token_list(tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes

View File

@@ -28,15 +28,8 @@ and must not trigger events, directly or indirectly.
Commands are queued in a CommandQueue
"""
from collections import OrderedDict, defaultdict
import bb.event
import bb.cooker
import bb.remotedata
class DataStoreConnectionHandle(object):
def __init__(self, dsindex=0):
self.dsindex = dsindex
class CommandCompleted(bb.event.Event):
pass
@@ -62,7 +55,6 @@ class Command:
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = bb.remotedata.RemoteDatastores(cooker)
# FIXME Add lock for this
self.currentAsyncCommand = None
@@ -76,12 +68,10 @@ class Command:
if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
if getattr(command_method, 'needconfig', False):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except (Exception, SystemExit):
except Exception:
import traceback
return None, traceback.format_exc()
else:
@@ -118,7 +108,7 @@ class Command:
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
@@ -133,20 +123,14 @@ class Command:
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
bb.event.fire(CommandFailed(msg), self.cooker.data)
bb.event.fire(CommandFailed(msg), self.cooker.event_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.data)
bb.event.fire(CommandExit(code), self.cooker.event_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
bb.event.fire(CommandCompleted(), self.cooker.event_data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
def split_mc_pn(pn):
if pn.startswith("multiconfig:"):
_, mc, pn = pn.split(":", 2)
return (mc, pn)
return ('', pn)
class CommandsSync:
"""
A class of synchronous commands
@@ -193,19 +177,8 @@ class CommandsSync:
"""
varname = params[0]
value = str(params[1])
command.cooker.extraconfigdata[varname] = value
command.cooker.data.setVar(varname, value)
def getSetVariable(self, command, params):
"""
Read the value of a variable from data and set it into the datastore
which effectively expands and locks the value.
"""
varname = params[0]
result = self.getVariable(command, params)
command.cooker.data.setVar(varname, result)
return result
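A sketch of calling these synchronous commands from a connected client
(illustrative; the exact return shape differs between the two versions
shown in this diff):

# 'server' is assumed to be an established BitBake server connection.
value = server.runCommand(["getVariable", "MACHINE"])

# getSetVariable also writes the expanded value back into the
# datastore, effectively locking it (useful for e.g. DATETIME):
value = server.runCommand(["getSetVariable", "DATETIME"])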
def setConfig(self, command, params):
"""
Set the value of variable in configuration
@@ -231,7 +204,6 @@ class CommandsSync:
postfiles = params[1].split()
command.cooker.configuration.prefile = prefiles
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False
def getCpuCount(self, command, params):
"""
@@ -239,12 +211,10 @@ class CommandsSync:
"""
return bb.utils.cpu_count()
getCpuCount.readonly = True
getCpuCount.needconfig = False
def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False
def generateNewImage(self, command, params):
image = params[0]
@@ -258,7 +228,6 @@ class CommandsSync:
def ensureDir(self, command, params):
directory = params[0]
bb.utils.mkdirhier(directory)
ensureDir.needconfig = False
def setVarFile(self, command, params):
"""
@@ -269,7 +238,6 @@ class CommandsSync:
default_file = params[2]
op = params[3]
command.cooker.modifyConfigurationVar(var, val, default_file, op)
setVarFile.needconfig = False
def removeVarFile(self, command, params):
"""
@@ -277,7 +245,6 @@ class CommandsSync:
"""
var = params[0]
command.cooker.removeConfigurationVar(var)
removeVarFile.needconfig = False
def createConfigFile(self, command, params):
"""
@@ -285,7 +252,6 @@ class CommandsSync:
"""
name = params[0]
command.cooker.createConfigFile(name)
createConfigFile.needconfig = False
def setEventMask(self, command, params):
handlerNum = params[0]
@@ -293,8 +259,6 @@ class CommandsSync:
debug_domains = params[2]
mask = params[3]
return bb.event.set_UIHmask(handlerNum, llevel, debug_domains, mask)
setEventMask.needconfig = False
setEventMask.readonly = True
def setFeatures(self, command, params):
"""
@@ -302,280 +266,14 @@ class CommandsSync:
"""
features = params[0]
command.cooker.setFeatures(features)
setFeatures.needconfig = False
# although we change the internal state of the cooker, this is transparent since
# we always take and leave the cooker in state.initial
setFeatures.readonly = True
def updateConfig(self, command, params):
options = params[0]
environment = params[1]
command.cooker.updateConfigOpts(options, environment)
updateConfig.needconfig = False
def parseConfiguration(self, command, params):
"""Instruct bitbake to parse its configuration
NOTE: it is only necessary to call this if you aren't calling any normal action
(otherwise parsing is taken care of automatically)
"""
command.cooker.parseConfiguration()
parseConfiguration.needconfig = False
def getLayerPriorities(self, command, params):
ret = []
# regex objects cannot be marshalled by xmlrpc
for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
ret.append((collection, pattern, regex.pattern, pri))
return ret
getLayerPriorities.readonly = True
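The comment above is the whole point of the loop: compiled pattern objects cannot be marshalled over XML-RPC, so only regex.pattern (a plain string) crosses the wire. A tiny illustration with a made-up collection tuple:

    import re

    regex = re.compile(r'^/work/meta/.*\.bb$')          # not XML-RPC serializable
    entry = ('core', '^/work/meta/', regex.pattern, 5)  # regex.pattern is a str, so send that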
def getRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].pkg_pn.items())
getRecipes.readonly = True
def getRecipeDepends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].deps.items())
getRecipeDepends.readonly = True
def getRecipeVersions(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_pepvpr
getRecipeVersions.readonly = True
def getRuntimeDepends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
rundeps = command.cooker.recipecaches[mc].rundeps
for key, value in rundeps.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeDepends.readonly = True
def getRuntimeRecommends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
runrecs = command.cooker.recipecaches[mc].runrecs
for key, value in runrecs.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeRecommends.readonly = True
def getRecipeInherits(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].inherits
getRecipeInherits.readonly = True
def getBbFilePriority(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].bbfile_priority
getBbFilePriority.readonly = True
def getDefaultPreference(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True
def getSkippedRecipes(self, command, params):
# Return list sorted by reverse priority order
import bb.cache
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(),
key=lambda x: (-command.cooker.collection.calc_bbfile_priority(bb.cache.virtualfn2realfn(x[0])[0]), x[0])))
return list(skipdict.items())
getSkippedRecipes.readonly = True
def getOverlayedRecipes(self, command, params):
return list(command.cooker.collection.overlayed.items())
getOverlayedRecipes.readonly = True
def getFileAppends(self, command, params):
fn = params[0]
return command.cooker.collection.get_file_appends(fn)
getFileAppends.readonly = True
def getAllAppends(self, command, params):
return command.cooker.collection.bbappends
getAllAppends.readonly = True
def findProviders(self, command, params):
return command.cooker.findProviders()
findProviders.readonly = True
def findBestProvider(self, command, params):
(mc, pn) = split_mc_pn(params[0])
return command.cooker.findBestProvider(pn, mc)
findBestProvider.readonly = True
def allProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(bb.providers.allProviders(command.cooker.recipecaches[mc]).items())
allProviders.readonly = True
def getRuntimeProviders(self, command, params):
rprovide = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
all_p = bb.providers.getRuntimeProviders(command.cooker.recipecaches[mc], rprovide)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rprovide,
command.cooker.data,
command.cooker.recipecaches[mc])[0][0]
else:
best = None
return all_p, best
getRuntimeProviders.readonly = True
def dataStoreConnectorFindVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
value, overridedata = datastore._findVar(name)
if value:
content = value.get('_content', None)
if isinstance(content, bb.data_smart.DataSmart):
# Value is a datastore (e.g. BB_ORIGENV) - need to handle this carefully
idx = command.remotedatastores.check_store(content, True)
return {'_content': DataStoreConnectionHandle(idx),
'_connector_origtype': 'DataStoreConnectionHandle',
'_connector_overrides': overridedata}
elif isinstance(content, set):
return {'_content': list(content),
'_connector_origtype': 'set',
'_connector_overrides': overridedata}
else:
value['_connector_overrides'] = overridedata
else:
value = {}
value['_connector_overrides'] = overridedata
return value
dataStoreConnectorFindVar.readonly = True
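dataStoreConnectorFindVar normalizes three cases into something XML-RPC can carry: nested datastores become a DataStoreConnectionHandle index, sets become lists, and plain values travel with their override data attached. The first case, schematically (index hypothetical):

    # {'_content': DataStoreConnectionHandle(3),
    #  '_connector_origtype': 'DataStoreConnectionHandle',
    #  '_connector_overrides': overridedata}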
def dataStoreConnectorGetKeys(self, command, params):
dsindex = params[0]
datastore = command.remotedatastores[dsindex]
return list(datastore.keys())
dataStoreConnectorGetKeys.readonly = True
def dataStoreConnectorGetVarHistory(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
return datastore.varhistory.variable(name)
dataStoreConnectorGetVarHistory.readonly = True
def dataStoreConnectorExpandPythonRef(self, command, params):
config_data_dict = params[0]
varname = params[1]
expr = params[2]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
varparse = bb.data_smart.VariableParse(varname, config_data)
return varparse.python_sub(expr)
def dataStoreConnectorRelease(self, command, params):
dsindex = params[0]
if dsindex <= 0:
raise CommandError('dataStoreConnectorRelease: invalid index %d' % dsindex)
command.remotedatastores.release(dsindex)
def dataStoreConnectorSetVarFlag(self, command, params):
dsindex = params[0]
name = params[1]
flag = params[2]
value = params[3]
datastore = command.remotedatastores[dsindex]
datastore.setVarFlag(name, flag, value)
def dataStoreConnectorDelVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
if len(params) > 2:
flag = params[2]
datastore.delVarFlag(name, flag)
else:
datastore.delVar(name)
def dataStoreConnectorRenameVar(self, command, params):
dsindex = params[0]
name = params[1]
newname = params[2]
datastore = command.remotedatastores[dsindex]
datastore.renameVar(name, newname)
def parseRecipeFile(self, command, params):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
"""
fn = params[0]
appends = params[1]
appendlist = params[2]
if len(params) > 3:
config_data_dict = params[3]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
else:
config_data = None
if appends:
if appendlist is not None:
appendfiles = appendlist
else:
appendfiles = command.cooker.collection.get_file_appends(fn)
else:
appendfiles = []
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
# version of)
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles)['']
else:
# Use the standard path
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
command.cooker.updateConfigOpts(options)
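For illustration, a UI client would drive parseRecipeFile through the same runCommand channel used elsewhere in this diff; the server object and recipe path below are assumptions:

    # Hypothetical client-side call; params are (fn, appends, appendlist).
    handle, error = server.runCommand(["parseRecipeFile", "/work/meta/recipes/zlib.bb", True, None])
    if error:
        raise Exception("parseRecipeFile failed: %s" % error)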
class CommandsAsync:
"""
@@ -590,12 +288,8 @@ class CommandsAsync:
"""
bfile = params[0]
task = params[1]
if len(params) > 2:
hidewarning = params[2]
else:
hidewarning = False
command.cooker.buildFile(bfile, task, hidewarning)
command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
@@ -755,11 +449,3 @@ class CommandsAsync:
command.finishAsyncCommand()
resetCooker.needcache = False
def clientComplete(self, command, params):
"""
Do the right thing when the controlling client exits
"""
command.cooker.clientComplete()
command.finishAsyncCommand()
clientComplete.needcache = False

File diff suppressed because it is too large


@@ -22,11 +22,9 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import os
import re
import sys
import os, sys
from functools import wraps
import logging
import bb
from bb import data
import bb.parse
@@ -35,8 +33,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=sys.argv):
self.options, targets = self.parseCommandLine(argv)
def __init__(self):
self.options, targets = self.parseCommandLine()
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
@@ -48,7 +46,7 @@ class ConfigParameters(object):
for key, val in self.options.__dict__.items():
setattr(self, key, val)
def parseCommandLine(self, argv=sys.argv):
def parseCommandLine(self):
raise Exception("Caller must implement commandline option parsing")
def parseEnvironment(self):
@@ -65,21 +63,20 @@ class ConfigParameters(object):
raise Exception("Unable to set configuration option 'cmd' on the server: %s" % error)
if not self.options.pkgs_to_build:
bbpkgs, error = server.runCommand(["getVariable", "BBTARGETS"])
bbpkgs, error = server.runCommand(["getVariable", "BBPKGS"])
if error:
raise Exception("Unable to get the value of BBTARGETS from the server: %s" % error)
raise Exception("Unable to get the value of BBPKGS from the server: %s" % error)
if bbpkgs:
self.options.pkgs_to_build.extend(bbpkgs.split())
def updateToServer(self, server, environment):
def updateToServer(self, server):
options = {}
for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp",
"verbose", "debug", "dry_run", "dump_signatures",
"debug_domains", "extra_assume_provided", "profile",
"prefile", "postfile"]:
"debug_domains", "extra_assume_provided", "profile"]:
options[o] = getattr(self.options, o)
ret, error = server.runCommand(["updateConfig", options, environment])
ret, error = server.runCommand(["updateConfig", options])
if error:
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
@@ -131,24 +128,17 @@ class CookerConfiguration(object):
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.prefile_server = []
self.postfile_server = []
self.debug = 0
self.cmd = None
self.abort = True
self.force = False
self.profile = False
self.nosetscene = False
self.setsceneonly = False
self.invalidate_stamp = False
self.dump_signatures = []
self.dry_run = False
self.tracking = False
self.interface = []
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
self.runall = None
self.env = {}
@@ -182,26 +172,11 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except IOError as exc:
except (IOError, bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical( traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
sys.exit(1)
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
exc_class, exc, tb = sys.exc_info()
for tb in iter(lambda: tb.tb_next, None):
# Skip frames in bitbake itself, we only want the metadata
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
sys.exit(1)
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
sys.exit(1)
return wrapped
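As used immediately below on _inherit, catch_parse_error only has to turn parse-time exceptions into a critical log line plus a clean exit; the general decorator shape, sketched with print standing in for parselog.critical:

    from functools import wraps

    def catch_parse_error(func):
        @wraps(func)
        def wrapped(fn, *args):
            try:
                return func(fn, *args)
            except IOError as exc:
                print("Unable to parse %s: %s" % (fn, exc))
                raise SystemExit(1)
        return wrapped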
@catch_parse_error
@@ -215,7 +190,7 @@ def _inherit(bbclass, data):
def findConfigFile(configfile, data):
search = []
bbpath = data.getVar("BBPATH")
bbpath = data.getVar("BBPATH", True)
if bbpath:
for i in bbpath.split(":"):
search.append(os.path.join(i, "conf", configfile))
@@ -240,9 +215,9 @@ class CookerDataBuilder(object):
bb.utils.set_context(bb.utils.clean_context())
bb.event.set_class_handlers(bb.event.clean_class_handlers())
self.basedata = bb.data.init()
self.data = bb.data.init()
if self.tracking:
self.basedata.enableTracking()
self.data.enableTracking()
# Keep a datastore of the initial environment variables and their
# values from when BitBake was launched to enable child processes
@@ -253,49 +228,16 @@ class CookerDataBuilder(object):
self.savedenv.setVar(k, cookercfg.env[k])
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
bb.data.inheritFromOS(self.data, self.savedenv, filtered_keys)
self.data.setVar("BB_ORIGENV", self.savedenv)
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
self.data = self.basedata
self.mcdata = {}
self.data.setVar("BB_WORKERCONTEXT", "1")
def parseBaseConfiguration(self):
try:
bb.parse.init_parser(self.basedata)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
reparse_cnt = 0
while self.data.getVar("BB_INVALIDCONF", False) is True:
if reparse_cnt > 20:
logger.error("Configuration has been re-parsed over 20 times, "
"breaking out of the loop...")
raise Exception("Too deep config re-parse loop. Check locations where "
"BB_INVALIDCONF is being set (ConfigParsed event handlers)")
self.data.setVar("BB_INVALIDCONF", False)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
reparse_cnt += 1
bb.event.fire(bb.event.ConfigParsed(), self.data)
bb.parse.init_parser(self.data)
self.data_hash = self.data.get_hash()
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
except (SyntaxError, bb.BBHandledException):
self.parseConfigurationFiles(self.prefiles, self.postfiles)
except SyntaxError:
raise bb.BBHandledException
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
@@ -307,9 +249,9 @@ class CookerDataBuilder(object):
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
def parseConfigurationFiles(self, prefiles, postfiles, mc = "default"):
data = bb.data.createCopy(self.basedata)
data.setVar("BB_CURRENT_MC", mc)
def parseConfigurationFiles(self, prefiles, postfiles):
data = self.data
bb.parse.init_parser(data)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
@@ -323,30 +265,18 @@ class CookerDataBuilder(object):
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
layers = (data.getVar('BBLAYERS') or "").split()
layers = (data.getVar('BBLAYERS', True) or "").split()
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
for layer in layers:
if not os.path.isdir(layer):
parselog.critical("Layer directory '%s' does not exist! "
"Please check BBLAYERS in %s" % (layer, layerconf))
sys.exit(1)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
layer = layer.rstrip('/')
data.setVar('LAYERDIR', layer)
data.setVar('LAYERDIR_RE', re.escape(layer))
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
if not data.getVar("BBPATH"):
if not data.getVar("BBPATH", True):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
@@ -361,21 +291,29 @@ class CookerDataBuilder(object):
data = parse_config_file(p, data)
# Handle any INHERITs and inherit the base class
bbclasses = ["base"] + (data.getVar('INHERIT') or "").split()
bbclasses = ["base"] + (data.getVar('INHERIT', True) or "").split()
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
sys.exit(1)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
for var in data.getVar('__BBHANDLERS') or []:
bb.event.register(var, data.getVar(var), (data.getVarFlag(var, "eventmask", True) or "").split())
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
bb.event.fire(bb.event.ConfigParsed(), data)
if data.getVar("BB_INVALIDCONF") is True:
data.setVar("BB_INVALIDCONF", False)
self.parseConfigurationFiles(self.prefiles, self.postfiles)
return
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.data = data
self.data_hash = data.get_hash()
return data


@@ -178,8 +178,8 @@ def createDaemon(function, logfile):
# os.dup2(0, 2) # standard error (2)
si = open('/dev/null', 'r')
so = open(logfile, 'w')
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so


@@ -78,6 +78,59 @@ def initVar(var, d):
"""Non-destructive var init for data structure"""
d.initVar(var)
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
def getVar(var, d, exp = 0):
"""Gets the value of a variable"""
return d.getVar(var, exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
d.delVar(var)
def appendVar(var, value, d):
"""Append additional value to a variable"""
d.appendVar(var, value)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
Note:
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
"""
d.setVarFlags(var, flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
return d.keys()
@@ -106,12 +159,13 @@ def expandKeys(alterdata, readdata = None):
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for key in sorted(todolist):
for key in todolist:
ekey = todolist[key]
newval = alterdata.getVar(ekey, False)
if newval is not None:
val = alterdata.getVar(key, False)
if val is not None:
newval = alterdata.getVar(ekey, 0)
if newval:
val = alterdata.getVar(key, 0)
if val is not None and newval is not None:
bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
alterdata.renameVar(key, ekey)
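expandKeys exists because a variable's name can itself contain a reference; the warning above fires when the expanded name collides with a key that was set directly. A hedged example against a DataSmart datastore d:

    d.setVar("PN", "zlib")
    d.setVar("FILES_${PN}", "/usr/lib")   # key is stored verbatim until expandKeys runs
    bb.data.expandKeys(d)                 # renames the key to FILES_zlib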
@@ -121,7 +175,7 @@ def inheritFromOS(d, savedenv, permitted):
for s in savedenv.keys():
if s in permitted:
try:
d.setVar(s, savedenv.getVar(s), op = 'from env')
d.setVar(s, getVar(s, savedenv, True), op = 'from env')
if s in exportlist:
d.setVarFlag(s, "export", True, op = 'auto env export')
except TypeError:
@@ -129,39 +183,39 @@ def inheritFromOS(d, savedenv, permitted):
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
func = d.getVarFlag(var, "func", False)
if d.getVarFlag(var, 'python', False) and func:
return False
if getVarFlag(var, "python", d):
return 0
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
export = getVarFlag(var, "export", d)
unexport = getVarFlag(var, "unexport", d)
func = getVarFlag(var, "func", d)
if not all and not export and not unexport and not func:
return False
return 0
try:
if all:
oval = d.getVar(var, False)
val = d.getVar(var)
oval = getVar(var, d, 0)
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
return False
return 0
if all:
d.varhistory.emit(var, oval, val, o, d)
d.varhistory.emit(var, oval, val, o)
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return False
return 0
varExpanded = d.expand(var)
varExpanded = expand(var, d)
if unexport:
o.write('unset %s\n' % varExpanded)
return False
return 0
if val is None:
return False
return 0
val = str(val)
@@ -170,11 +224,10 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
val = val[3:] # Strip off "() "
o.write("%s() %s\n" % (varExpanded, val))
o.write("export -f %s\n" % (varExpanded))
return True
return 1
if func:
# NOTE: should probably check for unbalanced {} within the var
val = val.rstrip('\n')
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
return 1
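Concretely, that branch serializes a shell function verbatim; for a function variable named do_compile with a one-line body, the emitted text would look like (body hypothetical):

    # do_compile() {
    #     oe_runmake
    # }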
@@ -187,31 +240,29 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
alter = re.sub('\n', ' \\\n', alter)
alter = re.sub('\\$', '\\\\$', alter)
o.write('%s="%s"\n' % (varExpanded, alter))
return False
return 0
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func", False))
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in sorted(keys):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
def exported_vars(d):
k = list(exported_keys(d))
for key in k:
for key in exported_keys(d):
try:
value = d.getVar(key)
except Exception as err:
bb.warn("%s: Unable to export ${%s}: %s" % (d.getVar("FILE"), key, err))
continue
value = d.getVar(key, True)
except Exception:
pass
if value is not None:
yield key, str(value)
@@ -219,24 +270,23 @@ def exported_vars(d):
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func", False))
for key in sorted(keys):
emit_var(key, o, d, False)
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func))
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
if d.getVarFlag(dep, "func") and not d.getVarFlag(dep, "python"):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
_functionfmt = """
@@ -247,7 +297,7 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
def write_func(func, o, call = False):
body = d.getVar(func, False)
body = d.getVar(func, True)
if not body.startswith("def"):
body = _functionfmt.format(function=func, body=body)
@@ -257,21 +307,21 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
write_func(func, o, True)
pp = bb.codeparser.PythonParser(func, logger)
pp.parse_python(d.getVar(func, False))
pp.parse_python(d.getVar(func, True))
newdeps = pp.execs
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and d.getVarFlag(dep, "python", False):
if d.getVarFlag(dep, "func") and d.getVarFlag(dep, "python"):
write_func(dep, o)
pp = bb.codeparser.PythonParser(dep, logger)
pp.parse_python(d.getVar(dep, False))
pp.parse_python(d.getVar(dep, True))
newdeps |= pp.execs
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
def update_data(d):
@@ -288,21 +338,19 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "vardepvalueexclude", "postfuncs", "prefuncs"]) or {}
vardeps = varflags.get("vardeps")
value = d.getVar(key, False)
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
l = (d.getVar(k, True) or "").split()
for word in sorted(contains[k]):
if word in l:
newvalue += "\n%s{%s} = Set" % (k, word)
else:
newvalue += "\n%s{%s} = Set" % (k, item)
newvalue += "\n%s{%s} = Unset" % (k, word)
if not newvalue:
return value
if not value:
@@ -313,29 +361,27 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
if value and "\t" in value:
logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE")))
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
if parsedvar.value and "\t" in parsedvar.value:
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
else:
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
if vardeps is None:
parser.log.flush()
if "prefuncs" in varflags:
deps = deps | set(varflags["prefuncs"].split())
if "postfuncs" in varflags:
deps = deps | set(varflags["postfuncs"].split())
if "exports" in varflags:
deps = deps | set(varflags["exports"].split())
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
else:
parser = d.expandWithRefs(value, key)
deps |= parser.references
@@ -359,11 +405,8 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= set((vardeps or "").split())
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
raise bb.data_smart.ExpansionError(key, None, e)
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
@@ -371,13 +414,13 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
def generate_dependencies(d):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
tasklist = d.getVar('__BBTASKS') or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
newdeps = deps[task]
@@ -395,7 +438,7 @@ def generate_dependencies(d):
return tasklist, deps, values
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
val = getVar('__inherit_cache', d) or []
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):


@@ -40,7 +40,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t ]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def infer_caller_details(loginfo, parent = False, varval = True):
@@ -54,36 +54,27 @@ def infer_caller_details(loginfo, parent = False, varval = True):
return
# Infer caller's likely values for variable (var) and value (value),
# to reduce clutter in the rest of the code.
above = None
def set_above():
if varval and ('variable' not in loginfo or 'detail' not in loginfo):
try:
raise Exception
except Exception:
tb = sys.exc_info()[2]
if parent:
return tb.tb_frame.f_back.f_back.f_back
above = tb.tb_frame.f_back.f_back
else:
return tb.tb_frame.f_back.f_back
if varval and ('variable' not in loginfo or 'detail' not in loginfo):
if not above:
above = set_above()
lcls = above.f_locals.items()
above = tb.tb_frame.f_back
lcls = above.f_locals.items()
for k, v in lcls:
if k == 'value' and 'detail' not in loginfo:
loginfo['detail'] = v
if k == 'var' and 'variable' not in loginfo:
loginfo['variable'] = v
# Infer file/line/function from traceback
# Don't use traceback.extract_stack() since it fills the line contents which
# we don't need and that hits stat syscalls
if 'file' not in loginfo:
if not above:
above = set_above()
f = above.f_back
line = f.f_lineno
file = f.f_code.co_filename
func = f.f_code.co_name
depth = 3
if parent:
depth = 4
file, line, func, text = traceback.extract_stack(limit = depth)[0]
loginfo['file'] = file
loginfo['line'] = line
if func not in loginfo:
@@ -108,7 +99,7 @@ class VariableParse:
varparse = self.d.expand_cache[key]
var = varparse.value
else:
var = self.d.getVarFlag(key, "_content")
var = self.d.getVarFlag(key, "_content", True)
self.references.add(key)
if var is not None:
return var
@@ -116,21 +107,13 @@ class VariableParse:
return match.group()
def python_sub(self, match):
if isinstance(match, str):
code = match
else:
code = match.group()[3:-1]
if "_remote_data" in self.d:
connector = self.d["_remote_data"]
return connector.expandPythonRef(self.varname, code, self.d)
code = match.group()[3:-1]
codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")
parser = bb.codeparser.PythonParser(self.varname, logger)
parser.parse_python(code)
if self.varname:
vardeps = self.d.getVarFlag(self.varname, "vardeps")
vardeps = self.d.getVarFlag(self.varname, "vardeps", True)
if vardeps is None:
parser.log.flush()
else:
@@ -143,7 +126,7 @@ class VariableParse:
self.contains[k] = parser.contains[k].copy()
else:
self.contains[k].update(parser.contains[k])
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
value = utils.better_eval(codeobj, DataContext(self.d))
return str(value)
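This is the machinery behind ${@...} inline-Python expansion: the expression is compiled, scanned for references by the PythonParser, then evaluated with the datastore in scope via better_eval. A hedged example, assuming a datastore d:

    d.setVar("BITS", "${@ 2 ** 5 }")
    d.getVar("BITS", True)   # -> "32"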
@@ -154,8 +137,8 @@ class DataContext(dict):
self['d'] = metadata
def __missing__(self, key):
value = self.metadata.getVar(key)
if value is None or self.metadata.getVarFlag(key, 'func', False):
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func'):
raise KeyError(key)
else:
return value
@@ -230,19 +213,6 @@ class VariableHistory(object):
new.variables = self.variables.copy()
return new
def __getstate__(self):
vardict = {}
for k, v in self.variables.iteritems():
vardict[k] = v
return {'dataroot': self.dataroot,
'variables': vardict}
def __setstate__(self, state):
self.dataroot = state['dataroot']
self.variables = COWDictBase.copy()
for k, v in state['variables'].items():
self.variables[k] = v
def record(self, *kwonly, **loginfo):
if not self.dataroot._tracking:
return
@@ -261,37 +231,16 @@ class VariableHistory(object):
if var not in self.variables:
self.variables[var] = []
if not isinstance(self.variables[var], list):
return
if 'nodups' in loginfo and loginfo in self.variables[var]:
return
self.variables[var].append(loginfo.copy())
def variable(self, var):
remote_connector = self.dataroot.getVar('_remote_data', False)
if remote_connector:
varhistory = remote_connector.getVarHistory(var)
else:
varhistory = []
if var in self.variables:
varhistory.extend(self.variables[var])
return varhistory
return self.variables[var]
else:
return []
def emit(self, var, oval, val, o, d):
def emit(self, var, oval, val, o):
history = self.variable(var)
# Append override history
if var in d.overridedata:
for (r, override) in d.overridedata[var]:
for event in self.variable(r):
loginfo = event.copy()
if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
continue
loginfo['variable'] = var
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
history.append(loginfo)
commentVal = re.sub('\n', '\n#', str(oval))
if history:
if len(history) == 1:
@@ -338,31 +287,6 @@ class VariableHistory(object):
lines.append(line)
return lines
def get_variable_items_files(self, var, d):
"""
Use variable history to map items added to a list variable and
the files in which they were added.
"""
history = self.variable(var)
finalitems = (d.getVar(var) or '').split()
filemap = {}
isset = False
for event in history:
if 'flag' in event:
continue
if event['op'] == '_remove':
continue
if isset and event['op'] == 'set?':
continue
isset = True
items = d.expand(event['detail']).split()
for item in items:
# This is a little crude but is belt-and-braces to avoid us
# having to handle every possible operation type specifically
if item in finalitems and not item in filemap:
filemap[item] = event['file']
return filemap
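The return value is a plain item-to-file map, e.g. (paths hypothetical):

    # {'pkg-a': 'conf/local.conf', 'pkg-b': 'meta/conf/distro/defaultsetup.conf'}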
def del_var_history(self, var, f=None, line=None):
"""If file f and line are not given, the entire history of var is deleted"""
if var in self.variables:
@@ -372,23 +296,18 @@ class VariableHistory(object):
self.variables[var] = []
class DataSmart(MutableMapping):
def __init__(self):
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
self.dict = {}
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
self.expand_cache = {}
# cookie monster tribute
# Need to be careful about writes to overridedata as
# it's only a shallow copy, could influence other data store
# copies!
self.overridedata = {}
self.overrides = None
self.overridevars = set(["OVERRIDES", "FILE"])
self.inoverride = False
self._special_values = special
self._seen_overrides = seen
self.expand_cache = {}
def enableTracking(self):
self._tracking = True
@@ -398,7 +317,7 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
if not isinstance(s, basestring): # sanity check
return VariableParse(varname, self, s)
if varname and varname in self.expand_cache:
@@ -410,12 +329,7 @@ class DataSmart(MutableMapping):
olds = s
try:
s = __expand_var_regexp__.sub(varparse.var_sub, s)
try:
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal":
raise
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
except ExpansionError:
@@ -423,7 +337,7 @@ class DataSmart(MutableMapping):
except bb.parse.SkipRecipe:
raise
except Exception as exc:
raise ExpansionError(varname, s, exc) from exc
raise ExpansionError(varname, s, exc)
varparse.value = s
@@ -435,34 +349,97 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
newoverrides = (self.getVar("OVERRIDES") or "").split(":") or []
if newoverrides == self.overrides:
break
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
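The loop above is a bounded fixed-point iteration: OVERRIDES may be defined in terms of variables that are themselves overridden, so it is re-expanded until the split list stops changing, giving up after five rounds. Schematically, with expand_overrides standing in for the getVar call:

    overrides = []
    for _ in range(5):
        new = expand_overrides(overrides)   # re-expand OVERRIDES with the current list applied
        if new == overrides:
            break                           # fixed point reached
        overrides = new
    else:
        raise RuntimeError("OVERRIDES did not stabilize in 5 iterations")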
overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
finalize_caller = {
'op': 'finalize',
}
infer_caller_details(finalize_caller, parent = parent, varval = False)
#
# Well let us see what breaks here. We used to iterate
# over each variable and apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# were in some order that was important for this
# method, which we don't have anymore.
# Anyway we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend and store the _remove
# information for later.
#
# We only want to report finalization once per variable overridden.
finalizes_reported = {}
for o in overrides:
# calculate '_'+override
l = len(o) + 1
# see if one should even try
if o not in self._seen_overrides:
continue
vars = self._seen_overrides[o].copy()
for var in vars:
name = var[:-l]
try:
# Report only once, even if multiple changes.
if name not in finalizes_reported:
finalizes_reported[name] = True
finalize_caller['variable'] = name
finalize_caller['detail'] = 'was: ' + str(self.getVar(name, False))
self.varhistory.record(**finalize_caller)
# Copy history of the override over.
for event in self.varhistory.variable(var):
loginfo = event.copy()
loginfo['variable'] = name
loginfo['op'] = 'override[%s]:%s' % (o, loginfo['op'])
self.varhistory.record(**loginfo)
self.setVar(name, self.getVar(var, False), op = 'finalize', file = 'override[%s]' % o, line = '')
self.delVar(var)
except Exception:
logger.info("Untracked delVar")
# now on to the appends and prepends, and stashing the removes
for op in __setvar_keyword__:
if op in self._special_values:
appends = self._special_values[op] or []
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
match = True
if o:
for o2 in o.split("_"):
if not o2 in overrides:
match = False
if not match:
keep.append((a ,o))
continue
if op == "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op == "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)
elif op == "_remove":
removes = self.getVarFlag(append, "_removeactive", False) or []
removes.extend(a.split())
self.setVarFlag(append, "_removeactive", removes, ignore=True)
# We save overrides that may be applied at some later stage
if keep:
self.setVarFlag(append, op, keep, ignore=True)
else:
self.delVarFlag(append, op, ignore=True)
def initVar(self, var):
self.expand_cache = {}
@@ -473,22 +450,17 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var], self.overridedata.get(var, None)
if "_remote_data" in dest:
connector = dest["_remote_data"]["_content"]
return connector.getVar(var)
return dest[var]
if "_data" not in dest:
break
dest = dest["_data"]
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
@@ -498,16 +470,6 @@ class DataSmart(MutableMapping):
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
parsing=False
if 'parsing' in loginfo:
parsing=True
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVar(var, value)
if not res:
return
if 'op' not in loginfo:
loginfo['op'] = "set"
self.expand_cache = {}
@@ -516,7 +478,7 @@ class DataSmart(MutableMapping):
base = match.group('base')
keyword = match.group("keyword")
override = match.group('add')
l = self.getVarFlag(base, keyword, False) or []
l = self.getVarFlag(base, keyword) or []
l.append([value, override])
self.setVarFlag(base, keyword, l, ignore=True)
# And cause that to be recorded:
@@ -529,119 +491,65 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
# todo make sure keyword is not __doc__ or __module__
# pay the cookie monster
try:
self._special_values[keyword].add(base)
except KeyError:
self._special_values[keyword] = set()
self._special_values[keyword].add(base)
# more cookies for the cookie monster
if '_' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
self._setvar_update_overridevars(var, value)
return
if not var in self.dict:
self._makeShadowCopy(var)
if not parsing:
if "_append" in self.dict[var]:
del self.dict[var]["_append"]
if "_prepend" in self.dict[var]:
del self.dict[var]["_prepend"]
if "_remove" in self.dict[var]:
del self.dict[var]["_remove"]
if var in self.overridedata:
active = []
self.need_overrides()
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
active.append(r)
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active.append(r)
for a in active:
self.delVar(a)
del self.overridedata[var]
# more cookies for the cookie monster
if '_' in var:
self._setvar_update_overrides(var, **loginfo)
self._setvar_update_overrides(var)
# setting var
self.dict[var]["_content"] = value
self.varhistory.record(**loginfo)
if var in self.overridevars:
self._setvar_update_overridevars(var, value)
def _setvar_update_overridevars(self, var, value):
vardata = self.expandWithRefs(value, var)
new = vardata.references
new.update(vardata.contains.keys())
while not new.issubset(self.overridevars):
nextnew = set()
self.overridevars.update(new)
for i in new:
vardata = self.expandWithRefs(self.getVar(i), i)
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
def _setvar_update_overrides(self, var):
# aka pay the cookie monster
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
if [var, override] not in self.overridedata[shortvar]:
# Force CoW by recreating the list first
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].append([var, override])
while override:
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
override = None
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
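Both versions of _setvar_update_overrides walk the variable name right-to-left, peeling one _override suffix per pass; a trace for a doubly-overridden name:

    # var = "RDEPENDS_linux_arm"
    #   pass 1: override = "arm",   shortvar = "RDEPENDS_linux"
    #   pass 2: override = "linux", shortvar = "RDEPENDS"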
def getVar(self, var, expand=True, noweakdefault=False, parsing=False):
return self.getVarFlag(var, "_content", expand, noweakdefault, parsing)
def getVar(self, var, expand=False, noweakdefault=False):
return self.getVarFlag(var, "_content", expand, noweakdefault)
def renameVar(self, key, newkey, **loginfo):
"""
Rename the variable key to newkey
"""
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.renameVar(key, newkey)
if not res:
return
val = self.getVar(key, 0, parsing=True)
val = self.getVar(key, 0)
if val is not None:
loginfo['variable'] = newkey
loginfo['op'] = 'rename from %s' % key
loginfo['detail'] = val
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
self.setVar(newkey, val, ignore=True)
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
src = self.getVarFlag(key, i)
if src is None:
continue
dest = self.getVarFlag(newkey, i, False) or []
dest = self.getVarFlag(newkey, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest, ignore=True)
if key in self.overridedata:
self.overridedata[newkey] = []
for (v, o) in self.overridedata[key]:
self.overridedata[newkey].append([v.replace(key, newkey), o])
self.renameVar(v, v.replace(key, newkey))
if '_' in newkey and val is None:
self._setvar_update_overrides(newkey, **loginfo)
if i in self._special_values and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
loginfo['variable'] = key
loginfo['op'] = 'rename (to)'
@@ -652,53 +560,27 @@ class DataSmart(MutableMapping):
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
self.setVar(var + "_append", value, ignore=True, parsing=True)
newvalue = (self.getVar(var, False) or "") + value
self.setVar(var, newvalue, ignore=True)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
newvalue = value + (self.getVar(var, False) or "")
self.setVar(var, newvalue, ignore=True)
def delVar(self, var, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVar(var)
if not res:
return
loginfo['detail'] = ""
loginfo['op'] = 'del'
self.varhistory.record(**loginfo)
self.expand_cache = {}
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
if '_' in var:
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].remove([var, override])
except ValueError as e:
pass
override = None
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)
def setVarFlag(self, var, flag, value, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVarFlag(var, flag, value)
if not res:
return
self.expand_cache = {}
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -707,10 +589,8 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = value
if flag == "_defaultval" and '_' in var:
self._setvar_update_overrides(var, **loginfo)
if flag == "_defaultval" and var in self.overridevars:
self._setvar_update_overridevars(var, value)
if flag == "defaultval" and '_' in var:
self._setvar_update_overrides(var)
if flag == "unexport" or flag == "export":
if not "__exportlist" in self.dict:
@@ -719,71 +599,14 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"] = set()
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False):
local_var, overridedata = self._findVar(var)
def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
local_var = self._findVar(var)
value = None
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
for (r, o) in overridedata:
# What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active[o] = r
mod = True
while mod:
mod = False
for o in self.overrides:
for a in active.copy():
if a.endswith("_" + o):
t = active[a]
del active[a]
active[a.replace("_" + o, "")] = t
mod = True
elif a == o:
match = active[a]
del active[a]
if match:
value = self.getVar(match, False)
if local_var is not None and value is None:
if local_var is not None:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "_content" and "_defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["_defaultval"])
if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var["_append"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
value = value + r
if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var["_prepend"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
value = r + value
elif flag == "_content" and "defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["defaultval"])
if expand and value:
# Only getvar (flag == _content) hits the expand cache
cachename = None
@@ -792,38 +615,20 @@ class DataSmart(MutableMapping):
else:
cachename = var + "[" + flag + "]"
value = self.expand(value, cachename)
if value and flag == "_content" and local_var is not None and "_remove" in local_var:
removes = []
self.need_overrides()
for (r, o) in local_var["_remove"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
removes.extend(self.expand(r).split())
if removes:
filtered = filter(lambda v: v not in removes,
value.split())
value = " ".join(filtered)
if expand and var in self.expand_cache:
# We need to ensure the expand cache has the correct value
# flag == "_content" here
self.expand_cache[var].value = value
if value and flag == "_content" and local_var is not None and "_removeactive" in local_var:
removes = [self.expand(r).split() for r in local_var["_removeactive"]]
removes = reduce(lambda a, b: a+b, removes, [])
filtered = filter(lambda v: v not in removes,
value.split())
value = " ".join(filtered)
if expand:
# We need to ensure the expand cache has the correct value
# flag == "_content" here
self.expand_cache[var].value = value
return value
def delVarFlag(self, var, flag, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVarFlag(var, flag)
if not res:
return
self.expand_cache = {}
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -852,7 +657,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def setVarFlags(self, var, flags, **loginfo):
self.expand_cache = {}
infer_caller_details(loginfo)
if not var in self.dict:
self._makeShadowCopy(var)
@@ -866,7 +670,7 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
flags = {}
if local_var:
@@ -882,7 +686,6 @@ class DataSmart(MutableMapping):
def delVarFlags(self, var, **loginfo):
self.expand_cache = {}
if not var in self.dict:
self._makeShadowCopy(var)
@@ -900,25 +703,20 @@ class DataSmart(MutableMapping):
else:
del self.dict[var]
def createCopy(self):
"""
Create a copy of self by setting _data to self
"""
# we really want this to be a DataSmart...
data = DataSmart()
data = DataSmart(seen=self._seen_overrides.copy(), special=self._special_values.copy())
data.dict["_data"] = self.dict
data.varhistory = self.varhistory.copy()
data.varhistory.dataroot = data
data.varhistory.datasmart = data
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
# Should really be a deepcopy but has heavy overhead.
# Instead, we're careful with writes.
data.overridedata = copy.copy(self.overridedata)
return data
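createCopy deliberately avoids a deep copy: the child datastore points back at the parent through dict["_data"], and _findVar (earlier in this file) resolves lookups by walking that chain. A hedged sketch:

    parent = DataSmart()
    parent.setVar("A", "1")
    child = parent.createCopy()
    child.getVar("A", False)   # -> "1", found by walking _data up to the parent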
def expandVarref(self, variable, parents=False):
@@ -939,55 +737,29 @@ class DataSmart(MutableMapping):
def localkeys(self):
for key in self.dict:
if key not in ['_data', '_remote_data']:
if key != '_data':
yield key
def __iter__(self):
deleted = set()
overrides = set()
def keylist(d):
klist = set()
for key in d:
if key in ["_data", "_remote_data"]:
continue
if key in deleted:
continue
if key in overrides:
if key == "_data":
continue
if not d[key]:
deleted.add(key)
continue
klist.add(key)
if "_data" in d:
klist |= keylist(d["_data"])
if "_remote_data" in d:
connector = d["_remote_data"]["_content"]
for key in connector.getKeys():
if key in deleted:
continue
klist.add(key)
return klist
self.need_overrides()
for var in self.overridedata:
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
overrides.add(var)
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
overrides.add(var)
for k in keylist(self.dict):
yield k
for k in overrides:
yield k
def __len__(self):
return len(frozenset(iter(self)))
return len(frozenset(self))
def __getitem__(self, item):
value = self.getVar(item, False)
@@ -1006,8 +778,9 @@ class DataSmart(MutableMapping):
data = {}
d = self.createCopy()
bb.data.expandKeys(d)
bb.data.update_data(d)
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_whitelist:
@@ -1026,12 +799,13 @@ class DataSmart(MutableMapping):
for key in ["__BBTASKS", "__BBANONFUNCS", "__BBHANDLERS"]:
bb_list = d.getVar(key, False) or []
bb_list.sort()
data.update({key:str(bb_list)})
if key == "__BBANONFUNCS":
for i in bb_list:
value = d.getVar(i, False) or ""
value = d.getVar(i, True) or ""
data.update({i:value})
data_str = str([(k, data[k]) for k in sorted(data.keys())])
return hashlib.md5(data_str.encode("utf-8")).hexdigest()
return hashlib.md5(data_str).hexdigest()
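# A reduced sketch (hypothetical configuration values) of the hashing
# scheme above: drop whitelisted keys, sort what remains, and md5 the
# repr so the digest is stable for identical configurations.
import hashlib

config = {"PATH": "/usr/bin", "BB_NUMBER_THREADS": "4", "DATE": "20180117"}
whitelist = {"DATE"}   # volatile keys excluded from the signature

data = {k: v for k, v in config.items() if k not in whitelist}
data_str = str([(k, data[k]) for k in sorted(data)])
print(hashlib.md5(data_str.encode("utf-8")).hexdigest())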


@@ -24,13 +24,13 @@ BitBake build tools.
import os, sys
import warnings
import pickle
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import atexit
import traceback
import ast
import threading
import bb.utils
import bb.compat
import bb.exceptions
@@ -48,16 +48,6 @@ class Event(object):
def __init__(self):
self.pid = worker_pid
class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
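# Hedged usage sketch (assumes a running BitBake environment; the handler
# name is invented): registering a callable for just the heartbeat keeps
# periodic housekeeping decoupled from task events.
import bb.event

def log_heartbeat(e):
    print("heartbeat at %s" % e.time)

bb.event.register("log_heartbeat", log_heartbeat,
                  mask=["bb.event.HeartbeatEvent"])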
Registered = 10
AlreadyRegistered = 14
@@ -78,30 +68,9 @@ _ui_logfilters = {}
_ui_handler_seq = 0
_event_handler_map = {}
_catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
@@ -117,8 +86,6 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -126,11 +93,8 @@ def fire_class_handlers(event, d):
eid = str(event.__class__)[8:-2]
evt_hmap = _event_handler_map.get(eid, {})
for name, handler in list(_handlers.items()):
for name, handler in _handlers.iteritems():
if name in _catchall_handlers or name in evt_hmap:
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
execute_handler(name, handler, event, d)
ui_queue = []
@@ -139,46 +103,33 @@ def print_ui_queue():
"""If we're exiting before a UI has been spawned, display any queued
LogRecords to the console."""
logger = logging.getLogger("BitBake")
if not _uiready:
if not _ui_handlers:
from bb.msg import BBLogFormatter
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
stdout.setFormatter(formatter)
stderr.setFormatter(formatter)
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
# First check to see if we have any proper messages
msgprint = False
for event in ui_queue[:]:
for event in ui_queue:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
if event.levelno >= logging.WARNING:
logger.addHandler(stderr)
else:
logger.addHandler(stdout)
logger.handle(event)
msgprint = True
if msgprint:
return
# Nope, so just print all of the messages we have (including debug messages)
logger.addHandler(stdout)
for event in ui_queue[:]:
for event in ui_queue:
if isinstance(event, logging.LogRecord):
logger.handle(event)
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
if not _ui_handlers:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -197,9 +148,6 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -218,7 +166,7 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None):
def register(name, handler, mask=[]):
"""Register an Event handler"""
# already registered
@@ -227,18 +175,10 @@ def register(name, handler, mask=None, filename=None, lineno=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
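# Minimal standalone sketch (toy handler body) of the string-to-callable
# path above: wrap the body in a def, compile, exec into a namespace,
# then look the function up by name.
name, handler = "on_event", "    print('saw', e)"
tmp = "def %s(e):\n%s" % (name, handler)
code = compile(tmp, "%s(e)" % name, "exec")
env = {}
exec(code, env)
env[name]("BuildStarted")   # -> saw BuildStarted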
@@ -265,21 +205,7 @@ def remove(name, handler):
"""Remove an Event handler"""
_handlers.pop(name)
def get_handlers():
return _handlers
def set_handlers(handlers):
global _handlers
_handlers = handlers
def set_eventfilter(func):
global _eventfilter
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
if mainui:
global _uiready
_uiready = True
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
@@ -361,17 +287,6 @@ class RecipeEvent(Event):
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finialised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
Recipe Tasks about to be finalised
The list of tasks should be final at this point and handlers
are only able to change interdependencies
"""
def __init__(self, fn, tasklist):
self.fn = fn
self.tasklist = tasklist
Event.__init__(self)
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
@@ -393,7 +308,7 @@ class StampUpdate(Event):
targets = property(getTargets)
class BuildBase(Event):
"""Base class for bitbake build events"""
"""Base class for bbmake run events"""
def __init__(self, n, p, failures = 0):
self._name = n
@@ -431,26 +346,21 @@ class BuildBase(Event):
class BuildInit(BuildBase):
"""buildFile or buildTargets was invoked"""
def __init__(self, p=[]):
name = None
BuildBase.__init__(self, name, p)
class BuildStarted(BuildBase, OperationStarted):
"""Event when builds start"""
"""bbmake build run started"""
def __init__(self, n, p, failures = 0):
OperationStarted.__init__(self, "Building Started")
BuildBase.__init__(self, n, p, failures)
class BuildCompleted(BuildBase, OperationCompleted):
"""Event when builds have completed"""
def __init__(self, total, n, p, failures=0, interrupted=0):
"""bbmake build run completed"""
def __init__(self, total, n, p, failures = 0):
if not failures:
OperationCompleted.__init__(self, total, "Building Succeeded")
else:
OperationCompleted.__init__(self, total, "Building Failed")
self._interrupted = interrupted
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
@@ -462,27 +372,10 @@ class DiskFull(Event):
self._free = freespace
self._mountpoint = mountpoint
class DiskUsageSample:
def __init__(self, available_bytes, free_bytes, total_bytes):
# Number of bytes available to non-root processes.
self.available_bytes = available_bytes
# Number of bytes available to root processes.
self.free_bytes = free_bytes
# Total capacity of the volume.
self.total_bytes = total_bytes
class MonitorDiskEvent(Event):
"""If BB_DISKMON_DIRS is set, then this event gets triggered each time disk space is checked.
Provides information about devices that are getting monitored."""
def __init__(self, disk_usage):
Event.__init__(self)
# hash of device root path -> DiskUsageSample
self.disk_usage = disk_usage
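# Hedged handler sketch (the function name is invented; the attributes
# are as documented above): walking the per-device samples carried by a
# MonitorDiskEvent.
def on_disk_monitor(e):
    for path, sample in e.disk_usage.items():
        pct = 100.0 * sample.available_bytes / sample.total_bytes
        print("%s: %.1f%% available to non-root processes" % (path, pct))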
class NoProvider(Event):
"""No Provider for an Event"""
def __init__(self, item, runtime=False, dependees=None, reasons=None, close_matches=None):
def __init__(self, item, runtime=False, dependees=None, reasons=[], close_matches=[]):
Event.__init__(self)
self._item = item
self._runtime = runtime
@@ -596,16 +489,6 @@ class TargetsTreeGenerated(Event):
Event.__init__(self)
self._model = model
class ReachableStamps(Event):
"""
An event listing all stamps reachable after parsing
which the metadata may use to clean up stale data
"""
def __init__(self, stamps):
Event.__init__(self)
self.stamps = stamps
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
@@ -683,10 +566,7 @@ class LogHandler(logging.Handler):
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.exc_info = None
fire(record, None)
@@ -717,33 +597,6 @@ class MetadataEvent(Event):
self.type = eventtype
self._localdata = eventdata
class ProcessStarted(Event):
"""
Generic process started event (usually part of the initial startup)
where further progress events will be delivered
"""
def __init__(self, processname, total):
Event.__init__(self)
self.processname = processname
self.total = total
class ProcessProgress(Event):
"""
Generic process progress event (usually part of the initial startup)
"""
def __init__(self, processname, progress):
Event.__init__(self)
self.processname = processname
self.progress = progress
class ProcessFinished(Event):
"""
Generic process finished event (usually part of the initial startup)
"""
def __init__(self, processname):
Event.__init__(self)
self.processname = processname
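# Illustrative sequence (hedged; assumes a live datastore "d" and an
# invented process name): a long-running setup phase brackets its work
# with these events so a UI can render a progress bar.
bb.event.fire(bb.event.ProcessStarted("Scanning cache", 100), d)
for done in (20, 40, 60, 80, 100):
    bb.event.fire(bb.event.ProcessProgress("Scanning cache", done), d)
bb.event.fire(bb.event.ProcessFinished("Scanning cache"), d)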
class SanityCheck(Event):
"""
Event to run sanity checks, either raise errors or generate events as return status.


@@ -1,4 +1,4 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
@@ -86,6 +86,6 @@ def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, str):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)

File diff suppressed because it is too large


@@ -27,6 +27,7 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
@@ -42,14 +43,14 @@ class Bzr(FetchMethod):
"""
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.expand('${BZRDIR}'), ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
ud.setup_revisions(d)
ud.setup_revisons(d)
if not ud.revision:
ud.revision = self.latest_revision(ud, d)
ud.localfile = d.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildbzrcommand(self, ud, d, command):
"""
@@ -57,7 +58,7 @@ class Bzr(FetchMethod):
command is "fetch", "update", "revno"
"""
basecmd = d.expand('${FETCHCMD_bzr}')
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('protocol', 'http')
@@ -87,25 +88,28 @@ class Bzr(FetchMethod):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.bzr' --exclude='.bzrtags'"
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)),
d, cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True


@@ -9,7 +9,7 @@ Usage in the recipe:
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
PV = "${@d.getVar("SRCREV").replace("/", "+")}"
The fetcher uses the rcleartool or cleartool remote client, depending on which one is available.
@@ -65,6 +65,7 @@ import os
import sys
import shutil
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
@@ -107,13 +108,13 @@ class ClearCase(FetchMethod):
else:
ud.module = ""
ud.basecmd = d.getVar("FETCHCMD_ccrc") or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
ud.basecmd = d.getVar("FETCHCMD_ccrc", True) or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
if d.getVar("SRCREV") == "INVALID":
if data.getVar("SRCREV", d, True) == "INVALID":
raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")
ud.label = d.getVar("SRCREV", False)
ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC")
ud.label = d.getVar("SRCREV")
ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC", True)
ud.server = "%s://%s%s" % (ud.proto, ud.host, ud.path)
@@ -123,7 +124,7 @@ class ClearCase(FetchMethod):
ud.viewname = "%s-view%s" % (ud.identifier, d.getVar("DATETIME", d, True))
ud.csname = "%s-config-spec" % (ud.identifier)
ud.ccasedir = os.path.join(d.getVar("DL_DIR"), ud.type)
ud.ccasedir = os.path.join(data.getVar("DL_DIR", d, True), ud.type)
ud.viewdir = os.path.join(ud.ccasedir, ud.viewname)
ud.configspecfile = os.path.join(ud.ccasedir, ud.csname)
ud.localfile = "%s.tar.gz" % (ud.identifier)
@@ -143,7 +144,7 @@ class ClearCase(FetchMethod):
self.debug("configspecfile = %s" % ud.configspecfile)
self.debug("localfile = %s" % ud.localfile)
ud.localfile = os.path.join(d.getVar("DL_DIR"), ud.localfile)
ud.localfile = os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _build_ccase_command(self, ud, command):
"""
@@ -201,10 +202,11 @@ class ClearCase(FetchMethod):
def _remove_view(self, ud, d):
if os.path.exists(ud.viewdir):
os.chdir(ud.ccasedir)
cmd = self._build_ccase_command(ud, 'rmview');
logger.info("cleaning up [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.ccasedir)
output = runfetchcmd(cmd, d)
logger.info("rmview output: %s", output)
def need_update(self, ud, d):
@@ -239,10 +241,11 @@ class ClearCase(FetchMethod):
raise e
# Set configspec: Setting the configspec effectively fetches the files as defined in the configspec
os.chdir(ud.viewdir)
cmd = self._build_ccase_command(ud, 'setcs');
logger.info("fetching data [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.viewdir)
output = runfetchcmd(cmd, d)
logger.info("%s", output)
# Copy the configspec to the viewdir so we have it in our source tarball later


@@ -63,7 +63,7 @@ class Cvs(FetchMethod):
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = d.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath))
ud.localfile = bb.data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, ud, d):
if (ud.date == "now"):
@@ -87,10 +87,10 @@ class Cvs(FetchMethod):
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = d.getVar('CVS_PROXY_HOST')
cvsproxyhost = d.getVar('CVS_PROXY_HOST', True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = d.getVar('CVS_PROXY_PORT')
cvsproxyport = d.getVar('CVS_PROXY_PORT', True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
@@ -110,7 +110,7 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
cvsbasecmd = d.getVar("FETCHCMD_cvs")
cvsbasecmd = d.getVar("FETCHCMD_cvs", True)
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
@@ -120,26 +120,25 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar('CVSDIR'), pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + ud.url)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
workdir = moddir
os.chdir(moddir)
cmd = cvsupdatecmd
else:
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(pkgdir)
workdir = pkgdir
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup=[moddir], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [moddir])
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite sucessful fetch?!" % moddir, ud.url)
@@ -148,24 +147,24 @@ class Cvs(FetchMethod):
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='CVS'"
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
workdir = None
if 'fullpath' in ud.parm:
workdir = pkgdir
os.chdir(pkgdir)
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
workdir = os.path.dirname(os.path.realpath(moddir))
os.chdir(moddir)
os.chdir('..')
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup=[ud.localpath], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar("CVSDIR"), pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar("CVSDIR", True), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)


@@ -49,10 +49,6 @@ Supported SRC_URI options are:
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
"""
#Copyright (C) 2005 Richard Purdie
@@ -70,57 +66,13 @@ Supported SRC_URI options are:
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import errno
import os
import re
import bb
import errno
import bb.progress
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
self._buffer = ''
self._count = 0
super(GitProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(-1)
def write(self, string):
self._buffer += string
stages = ['Counting objects', 'Compressing objects', 'Receiving objects', 'Resolving deltas']
stage_weights = [0.2, 0.05, 0.5, 0.25]
stagenum = 0
for i, stage in reversed(list(enumerate(stages))):
if stage in self._buffer:
stagenum = i
self._buffer = ''
break
self._status = stages[stagenum]
percs = re.findall(r'(\d+)%', string)
if percs:
progress = int(round((int(percs[-1]) * stage_weights[stagenum]) + (sum(stage_weights[:stagenum]) * 100)))
rates = re.findall(r'([\d.]+ [a-zA-Z]*/s+)', string)
if rates:
rate = rates[-1]
else:
rate = None
self.update(progress, rate)
else:
if stagenum == 0:
percs = re.findall(r': (\d+)', string)
if percs:
count = int(percs[-1])
if count > self._count:
self._count = count
self._fire_progress(-count)
super(GitProgressHandler, self).write(string)
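# Standalone arithmetic check (sample numbers) of the stage weighting
# above: overall progress is the completed stages' weights plus the
# current stage's local percentage scaled by its own weight.
stage_weights = [0.2, 0.05, 0.5, 0.25]   # counting, compressing, receiving, resolving
stagenum = 2                              # e.g. "Receiving objects"
local_pct = 40                            # git printed "40%" for this stage
progress = int(round(local_pct * stage_weights[stagenum]
                     + sum(stage_weights[:stagenum]) * 100))
assert progress == 45   # 25 points already done, plus 40% of the 50-point stage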
class Git(FetchMethod):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
@@ -156,13 +108,6 @@ class Git(FetchMethod):
ud.nobranch = ud.parm.get("nobranch","0") == "1"
# usehead implies nobranch
ud.usehead = ud.parm.get("usehead","0") == "1"
if ud.usehead:
if ud.proto != "file":
raise bb.fetch2.ParameterError("The usehead option is only for use with local ('protocol=file') git repositories", ud.url)
ud.nobranch = 1
# bareclone implies nocheckout
ud.bareclone = ud.parm.get("bareclone","0") == "1"
if ud.bareclone:
@@ -173,19 +118,16 @@ class Git(FetchMethod):
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.branches = {}
for pos, name in enumerate(ud.names):
branch = branches[pos]
for name in ud.names:
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
ud.unresolvedrev[name] = branch
if ud.usehead:
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git -c core.fsyncobjectfiles=0"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0") or ud.rebaseable
ud.setup_revisions(d)
ud.setup_revisons(d)
for name in ud.names:
# Ensure anything that doesn't look like a SHA-1 checksum/revision is translated into one
@@ -194,10 +136,7 @@ class Git(FetchMethod):
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.').replace('*', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
@@ -205,9 +144,9 @@ class Git(FetchMethod):
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % gitsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
gitdir = d.getVar("GITDIR") or (d.getVar("DL_DIR") + "/git2/")
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
gitdir = d.getVar("GITDIR", True) or (d.getVar("DL_DIR", True) + "/git2/")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.localfile = ud.clonedir
@@ -218,8 +157,9 @@ class Git(FetchMethod):
def need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
@@ -228,7 +168,7 @@ class Git(FetchMethod):
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.clonedir):
return False
@@ -237,69 +177,74 @@ class Git(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
if ud.user:
username = ud.user + '@'
else:
username = ""
ud.repochanged = not os.path.exists(ud.fullmirror)
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
repourl = self._get_repo_url(ud)
repourl = "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, repourl, ud.clonedir)
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(clone_cmd, d, log=progresshandler)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(fetch_cmd, d, log=progresshandler, workdir=ud.clonedir)
runfetchcmd("%s prune-packed" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d, workdir=ud.clonedir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud, d, name):
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
# it's possible that this symlink points to a read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
readpathspec = ":%s" % (subdir)
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
@@ -314,23 +259,30 @@ class Git(FetchMethod):
if ud.bareclone:
cloneflags += " --mirror"
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
if not ud.nocheckout:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
elif not ud.nobranch:
branchname = ud.branches[ud.names[0]]
runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
ud.revisions[ud.names[0]]), d, workdir=destdir)
runfetchcmd("%s branch --set-upstream %s origin/%s" % (ud.basecmd, branchname, \
branchname), d, workdir=destdir)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=destdir)
# Versions of git prior to 1.7.9.2 have issues where foo.git and foo get confused
# and you end up with some horrible union of the two when you attempt to clone it
# The least invasive workaround seems to be a symlink to the real directory to
# fool git into ignoring any .git version that may also be present.
#
# The issue is fixed in more recent versions of git so we can drop this hack in future
# when that version becomes common enough.
clonedir = ud.clonedir
if not ud.path.endswith(".git"):
indirectiondir = destdir[:-1] + ".indirectionsymlink"
if os.path.exists(indirectiondir):
os.remove(indirectiondir)
bb.utils.mkdirhier(os.path.dirname(indirectiondir))
os.symlink(ud.clonedir, indirectiondir)
clonedir = indirectiondir
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
return True
def clean(self, ud, d):
@@ -343,7 +295,7 @@ class Git(FetchMethod):
def supports_srcrev(self):
return True
def _contains_ref(self, ud, d, name, wd):
def _contains_ref(self, ud, d, name):
cmd = ""
if ud.nobranch:
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (
@@ -352,23 +304,13 @@ class Git(FetchMethod):
cmd = "%s branch --contains %s --list %s 2> /dev/null | wc -l" % (
ud.basecmd, ud.revisions[name], ud.branches[name])
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
output = runfetchcmd(cmd, d, quiet=True)
except bb.fetch2.FetchError:
return False
if len(output.split()) > 1:
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _get_repo_url(self, ud):
"""
Return the repository URL
"""
if ud.user:
username = ud.user + '@'
else:
username = ""
return "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)
def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
@@ -379,122 +321,38 @@ class Git(FetchMethod):
"""
Run git ls-remote with the specified search string
"""
# Prevent recursion e.g. in OE if SRCPV is in PV, PV is in WORKDIR,
# and WORKDIR is in PATH (as a result of RSS), our call to
# runfetchcmd() exports PATH so this function will get called again (!)
# In this scenario the return call of the function isn't actually
# important - WORKDIR isn't needed in PATH to call git ls-remote
# anyway.
if d.getVar('_BB_GIT_IN_LSREMOTE', False):
return ''
d.setVar('_BB_GIT_IN_LSREMOTE', '1')
try:
repourl = self._get_repo_url(ud)
cmd = "%s ls-remote %s %s" % \
(ud.basecmd, repourl, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd, repourl)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, ud.url)
finally:
d.delVar('_BB_GIT_IN_LSREMOTE')
if ud.user:
username = ud.user + '@'
else:
username = ""
cmd = "%s ls-remote %s://%s%s%s %s" % \
(ud.basecmd, ud.proto, username, ud.host, ud.path, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, ud.url)
return output
def _latest_revision(self, ud, d, name):
"""
Compute the HEAD revision for the url
"""
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
head = ud.unresolvedrev[name]
tag = ud.unresolvedrev[name]
if ud.unresolvedrev[name][:5] == "refs/":
search = "%s %s^{}" % (ud.unresolvedrev[name], ud.unresolvedrev[name])
else:
head = "refs/heads/%s" % ud.unresolvedrev[name]
tag = "refs/tags/%s" % ud.unresolvedrev[name]
for s in [head, tag + "^{}", tag]:
for l in output.strip().split('\n'):
sha1, ref = l.split()
if s == ref:
return sha1
raise bb.fetch2.FetchError("Unable to resolve '%s' in upstream git repository in git ls-remote output for %s" % \
(ud.unresolvedrev[name], ud.host+ud.path))
def latest_versionstring(self, ud, d):
"""
Compute the latest release name like "x.y.x" in "x.y.x+gitHASH"
by searching through the tags output of ls-remote, comparing
versions and returning the highest match.
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess):
return pupver
verstring = ""
revision = ""
for line in output.split("\n"):
if not line:
break
tag_head = line.split("/")[-1]
# Ignore non-release tags
m = re.search("(alpha|beta|rc|final)+", tag_head)
if m:
continue
# search for version in the line
tag = tagregex.search(tag_head)
if tag == None:
continue
tag = tag.group('pver')
tag = tag.replace("_", ".")
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
continue
verstring = tag
revision = line.split()[0]
pupver = (verstring, revision)
return pupver
search = "refs/heads/%s refs/tags/%s^{}" % (ud.unresolvedrev[name], ud.unresolvedrev[name])
output = self._lsremote(ud, d, search)
return output.split()[0]
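# Standalone sketch (sample tag names) of the tag-to-version matching in
# latest_versionstring above: the default regex pulls a dotted or
# underscored version out of each tag and underscores become dots.
import re

tagregex = re.compile(r"(?P<pver>([0-9][\.|_]?)+)")
for tag_head in ["v1.2.3", "release_1_4", "2.0-rc1"]:
    if re.search("(alpha|beta|rc|final)+", tag_head):
        continue                     # skip non-release tags
    m = tagregex.search(tag_head)
    if m:
        print(tag_head, "->", m.group('pver').replace("_", "."))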
def _build_revision(self, ud, d, name):
return ud.revisions[name]
def gitpkgv_revision(self, ud, d, name):
"""
Return a sortable revision number by counting commits in the history
Based on gitpkgv.bbclass in meta-openembedded
"""
rev = self._build_revision(ud, d, name)
localpath = ud.localpath
rev_file = os.path.join(localpath, "oe-gitpkgv_" + rev)
if not os.path.exists(localpath):
commits = None
else:
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
from pipes import quote
commits = bb.fetch2.runfetchcmd(
"git rev-list %s -- | wc -l" % quote(rev),
d, quiet=True).strip().lstrip('0')
if commits:
open(rev_file, "w").write("%d\n" % int(commits))
else:
commits = open(rev_file, "r").readline(128).strip()
if commits:
return False, "%s+%s" % (commits, rev[:7])
else:
return True, str(rev)
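# Worked example (made-up values) of the sortable form gitpkgv_revision
# returns: the commit count gives a monotonically growing prefix and the
# short revision keeps it unambiguous.
commits = "1234"
rev = "deadbeef0123456789abcdef0123456789abcdef"
print("%s+%s" % (commits, rev[:7]))   # -> 1234+deadbee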
def checkstatus(self, fetch, ud, d):
def checkstatus(self, ud, d):
fetchcmd = "%s ls-remote %s" % (ud.basecmd, ud.url)
try:
self._lsremote(ud, d, "")
runfetchcmd(fetchcmd, d, quiet=True)
return True
except bb.fetch2.FetchError:
except FetchError:
return False


@@ -22,6 +22,7 @@ BitBake 'Fetch' git annex implementation
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -33,42 +34,43 @@ class GitANNEX(Git):
"""
return ud.type in ['gitannex']
def uses_annex(self, ud, d, wd):
def uses_annex(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True, workdir=wd)
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
return False
def update_annex(self, ud, d, wd):
def update_annex(self, ud, d):
try:
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True, workdir=wd)
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True)
except bb.fetch.FetchError:
return False
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True, workdir=wd)
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True)
return True
def download(self, ud, d):
Git.download(self, ud, d)
annex = self.uses_annex(ud, d, ud.clonedir)
os.chdir(ud.clonedir)
annex = self.uses_annex(ud, d)
if annex:
self.update_annex(ud, d, ud.clonedir)
self.update_annex(ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
try:
runfetchcmd("%s annex init" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("%s annex sync" % (ud.basecmd), d)
except bb.fetch.FetchError:
pass
annex = self.uses_annex(ud, d, ud.destdir)
annex = self.uses_annex(ud, d)
if annex:
runfetchcmd("%s annex get" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True, workdir=ud.destdir)
runfetchcmd("%s annex get" % (ud.basecmd), d)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True)


@@ -31,6 +31,7 @@ NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your recipe
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
@@ -42,10 +43,10 @@ class GitSM(Git):
"""
return ud.type in ['gitsm']
def uses_submodules(self, ud, d, wd):
def uses_submodules(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd)
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
@@ -106,25 +107,30 @@ class GitSM(Git):
os.mkdir(tmpclonedir)
os.rename(ud.clonedir, gitdir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " checkout -f " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=tmpclonedir)
os.chdir(tmpclonedir)
runfetchcmd(ud.basecmd + " reset --hard", d)
runfetchcmd(ud.basecmd + " submodule init", d)
runfetchcmd(ud.basecmd + " submodule update", d)
self._set_relative_paths(tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
def download(self, ud, d):
Git.download(self, ud, d)
submodules = self.uses_submodules(ud, d, ud.clonedir)
os.chdir(ud.clonedir)
submodules = self.uses_submodules(ud, d)
if submodules:
self.update_submodules(ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
submodules = self.uses_submodules(ud, d, ud.destdir)
os.chdir(ud.destdir)
submodules = self.uses_submodules(ud, d)
if submodules:
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)
runfetchcmd("cp -r " + ud.clonedir + "/modules " + ud.destdir + "/.git/", d)
runfetchcmd(ud.basecmd + " submodule init", d)
runfetchcmd(ud.basecmd + " submodule update", d)


@@ -28,7 +28,7 @@ import os
import sys
import logging
import bb
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -43,13 +43,6 @@ class Hg(FetchMethod):
"""
return ud.type in ['hg']
def supports_checksum(self, urldata):
"""
Don't require checksums for local archives created from
repository checkouts.
"""
return False
def urldata_init(self, ud, d):
"""
init hg specific variable within url data
@@ -59,33 +52,19 @@ class Hg(FetchMethod):
ud.module = ud.parm["module"]
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "hg"
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
elif not ud.revision:
ud.revision = self.latest_revision(ud, d)
# Create paths to mercurial checkouts
hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'hg_%s.tar.gz' % hgsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.localfile = ud.moddir
ud.basecmd = d.getVar("FETCHCMD_hg") or "/usr/bin/env hg"
ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS")
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def need_update(self, ud, d):
revTag = ud.parm.get('rev', 'tip')
@@ -95,21 +74,14 @@ class Hg(FetchMethod):
return True
return False
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
return True
if os.path.exists(ud.moddir):
return False
return True
def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('protocol', 'http')
host = ud.host
@@ -126,7 +98,7 @@ class Hg(FetchMethod):
hgroot = ud.user + "@" + host + ud.path
if command == "info":
return "%s identify -i %s://%s/%s" % (ud.basecmd, proto, hgroot, ud.module)
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = [];
@@ -139,22 +111,22 @@ class Hg(FetchMethod):
if command == "fetch":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" clone %s %s://%s/%s %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options), proto, hgroot, ud.module, ud.module)
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" clone %s %s://%s/%s %s" % (basecmd, ud.user, ud.pswd, proto, " ".join(options), proto, hgroot, ud.module, ud.module)
else:
cmd = "%s clone %s %s://%s/%s %s" % (ud.basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
# do not pass the options list; limiting pull to rev causes the local
# repo not to contain it, and the immediately following "update" command
# will crash
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (ud.basecmd, ud.user, ud.pswd, proto)
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (basecmd, ud.user, ud.pswd, proto)
else:
cmd = "%s pull" % (ud.basecmd)
cmd = "%s pull" % (basecmd)
elif command == "update":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" update -C %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options))
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" update -C %s" % (basecmd, ud.user, ud.pswd, proto, " ".join(options))
else:
cmd = "%s update -C %s" % (ud.basecmd, " ".join(options))
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)
@@ -165,53 +137,40 @@ class Hg(FetchMethod):
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d, workdir=ud.pkgdir)
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
# Running pull in the repo
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
bb.fetch2.check_network_access(d, updatecmd, ud.url)
runfetchcmd(updatecmd, d)
# No source found, clone it.
if not os.path.exists(ud.moddir):
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't check out the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
runfetchcmd(updatecmd, d)
def clean(self, ud, d):
""" Clean the hg dir """
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
bb.utils.remove(ud.fullmirror + ".done")
os.chdir(ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
@@ -220,7 +179,7 @@ class Hg(FetchMethod):
"""
Compute tip revision for the url
"""
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"), ud.url)
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
@@ -232,38 +191,3 @@ class Hg(FetchMethod):
Return a unique key for the url
"""
return "hg:" + ud.moddir
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs == "1" and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to a read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of hg repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d, workdir=ud.pkgdir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=ud.pkgdir)
def localpath(self, ud, d):
return ud.pkgdir
def unpack(self, ud, destdir, d):
"""
Make a local clone or export for the url
"""
revflag = "-r %s" % ud.revision
subdir = ud.parm.get("destsuffix", ud.module)
codir = "%s/%s" % (destdir, subdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata != "nokeep":
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d, workdir=codir)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)


@@ -26,9 +26,10 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import urllib.request, urllib.parse, urllib.error
import urllib
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
@@ -41,10 +42,9 @@ class Local(FetchMethod):
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.decodedurl = urllib.parse.unquote(ud.url.split("://")[1].split(";")[0])
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.needdonestamp = False
return
def localpath(self, urldata, d):
@@ -62,11 +62,17 @@ class Local(FetchMethod):
newpath = path
if path[0] == "/":
return [path]
filespath = d.getVar('FILESPATH')
filespath = data.getVar('FILESPATH', d, True)
if filespath:
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
logger.debug(2, "Searching for %s in path: %s" % (path, filesdir))
newpath = os.path.join(filesdir, path)
searched.append(newpath)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath, hist = bb.utils.which(filespath, ".", history=True)
@@ -74,7 +80,7 @@ class Local(FetchMethod):
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return searched
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
dldirfile = os.path.join(d.getVar("DL_DIR", True), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
@@ -93,17 +99,20 @@ class Local(FetchMethod):
# no need to fetch local files, we'll deal with them in place.
if self.supports_checksum(urldata) and not os.path.exists(urldata.localpath):
locations = []
filespath = d.getVar('FILESPATH')
filespath = data.getVar('FILESPATH', d, True)
if filespath:
locations = filespath.split(":")
locations.append(d.getVar("DL_DIR"))
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
locations.append(filesdir)
locations.append(d.getVar("DL_DIR", True))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True
def checkstatus(self, fetch, urldata, d):
def checkstatus(self, urldata, d):
"""
Check the status of the url
"""


@@ -1,305 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' NPM implementation
The NPM fetcher is used to retrieve files from the npmjs repository
Usage in the recipe:
SRC_URI = "npm://registry.npmjs.org/;name=${PN};version=${PV}"
Supported SRC_URI options are:
- name
- version
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};version=${PV}
The fetcher triggers off the existence of ud.localpath. If that exists and has the ".done" stamp, it is assumed the fetch is good/done
"""
import os
import sys
import urllib.request, urllib.parse, urllib.error
import json
import subprocess
import signal
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import ChecksumError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import UnpackError
from bb.fetch2 import ParameterError
from distutils import spawn
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
class Npm(FetchMethod):
"""Class to fetch urls via 'npm'"""
def init(self, d):
pass
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with npm
"""
return ud.type in ['npm']
def debug(self, msg):
logger.debug(1, "NpmFetch: %s", msg)
def clean(self, ud, d):
logger.debug(2, "Calling cleanup %s" % ud.pkgname)
bb.utils.remove(ud.localpath, False)
bb.utils.remove(ud.pkgdatadir, True)
bb.utils.remove(ud.fullmirror, False)
def urldata_init(self, ud, d):
"""
init NPM specific variable within url data
"""
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
# can't call it ud.name otherwise the fetcher base class will start doing sha1 stuff
# TODO: find a way to get an sha1/sha256 manifest of pkg & all deps
ud.pkgname = ud.parm.get("name", None)
if not ud.pkgname:
raise ParameterError("NPM fetcher requires a name parameter", ud.url)
ud.version = ud.parm.get("version", None)
if not ud.version:
raise ParameterError("NPM fetcher requires a version parameter", ud.url)
ud.bbnpmmanifest = "%s-%s.deps.json" % (ud.pkgname, ud.version)
ud.bbnpmmanifest = ud.bbnpmmanifest.replace('/', '-')
ud.registry = "http://%s" % (ud.url.replace('npm://', '', 1).split(';'))[0]
prefixdir = "npm/%s" % ud.pkgname
ud.pkgdatadir = d.expand("${DL_DIR}/%s" % prefixdir)
if not os.path.exists(ud.pkgdatadir):
bb.utils.mkdirhier(ud.pkgdatadir)
ud.localpath = d.expand("${DL_DIR}/npm/%s" % ud.bbnpmmanifest)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
ud.prefixdir = prefixdir
ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0")
ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
ud.mirrortarball = ud.mirrortarball.replace('/', '-')
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
def need_update(self, ud, d):
if os.path.exists(ud.localpath):
return False
return True
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
dldir = d.getVar("DL_DIR")
runfetchcmd(command, d, quiet, workdir=dldir)
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
logger.debug(2, "file to extract is %s" % file)
if file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = 'tar xz --strip 1 --no-same-owner --warning=no-unknown-keyword -f %s/%s' % (dldir, file)
else:
bb.fatal("NPM package %s downloaded not a tarball!" % file)
# Change to subdir before executing command
if not os.path.exists(destdir):
os.makedirs(destdir)
path = d.getVar('PATH')
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, destdir))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True, cwd=destdir)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
if 'deps' not in data[pkg]:
return
for dep in data[pkg]['deps']:
self._unpackdep(ud, dep, data[pkg]['deps'], "%s/node_modules/%s" % (destdir, dep), dldir, d)
def unpack(self, ud, destdir, d):
dldir = d.getVar("DL_DIR")
with open("%s/npm/%s" % (dldir, ud.bbnpmmanifest)) as datafile:
workobj = json.load(datafile)
dldir = "%s/%s" % (os.path.dirname(ud.localpath), ud.pkgname)
if 'subdir' in ud.parm:
unpackdir = '%s/%s' % (destdir, ud.parm.get('subdir'))
else:
unpackdir = '%s/npmpkg' % destdir
self._unpackdep(ud, ud.pkgname, workobj, unpackdir, dldir, d)
def _parse_view(self, output):
'''
Parse the output of npm view --json; the last JSON result
is assumed to be the one that we're interested in.
'''
pdata = None
outdeps = {}
datalines = []
bracelevel = 0
for line in output.splitlines():
if bracelevel:
datalines.append(line)
elif '{' in line:
datalines = []
datalines.append(line)
bracelevel = bracelevel + line.count('{') - line.count('}')
if datalines:
pdata = json.loads('\n'.join(datalines))
return pdata
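# Standalone demonstration (synthetic npm output) of the brace-counting
# parse above: datalines is reset whenever a new top-level "{" starts,
# so after the loop only the last complete JSON object remains.
import json

output = '{"old": true}\n{"name": "left-pad",\n "version": "1.3.0"}'
datalines, bracelevel = [], 0
for line in output.splitlines():
    if bracelevel:
        datalines.append(line)
    elif '{' in line:
        datalines = [line]
    bracelevel += line.count('{') - line.count('}')
pdata = json.loads('\n'.join(datalines)) if datalines else None
print(pdata)   # -> {'name': 'left-pad', 'version': '1.3.0'}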
def _getdependencies(self, pkg, data, version, d, ud, optional=False, fetchedlist=None):
if fetchedlist is None:
fetchedlist = []
pkgfullname = pkg
if version != '*' and not '/' in version:
pkgfullname += "@'%s'" % version
logger.debug(2, "Calling getdeps on %s" % pkg)
fetchcmd = "npm view %s --json --registry %s" % (pkgfullname, ud.registry)
output = runfetchcmd(fetchcmd, d, True)
pdata = self._parse_view(output)
if not pdata:
raise FetchError("The command '%s' returned no output" % fetchcmd)
if optional:
pkg_os = pdata.get('os', None)
if pkg_os:
if not isinstance(pkg_os, list):
pkg_os = [pkg_os]
blacklist = False
for item in pkg_os:
if item.startswith('!'):
blacklist = True
break
if (not blacklist and 'linux' not in pkg_os) or '!linux' in pkg_os:
logger.debug(2, "Skipping %s since it's incompatible with Linux" % pkg)
return
#logger.debug(2, "Output URL is %s - %s - %s" % (ud.basepath, ud.basename, ud.localfile))
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
if not outputurl in fetchedlist:
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
fetchedlist.append(outputurl)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
dependencies.update(optionalDependencies)
depsfound = {}
optdepsfound = {}
data[pkg]['deps'] = {}
for dep in dependencies:
if dep in optionalDependencies:
optdepsfound[dep] = dependencies[dep]
else:
depsfound[dep] = dependencies[dep]
for dep, version in optdepsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, optional=True, fetchedlist=fetchedlist)
for dep, version in depsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, fetchedlist=fetchedlist)
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest, toplevel=True):
logger.debug(2, "NPM shrinkwrap file is %s" % data)
if toplevel:
name = data.get('name', None)
if name and name != pkg:
for obj in data.get('dependencies', []):
if obj == pkg:
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest, False)
return
outputurl = "invalid"
if ('resolved' not in data) or (not data['resolved'].startswith('http')):
# will be the case for ${PN}
fetchcmd = "npm view %s@%s dist.tarball --registry %s" % (pkg, version, ud.registry)
logger.debug(2, "Found this matching URL: %s" % str(fetchcmd))
outputurl = runfetchcmd(fetchcmd, d, True)
else:
outputurl = data['resolved']
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
manifest[pkg] = {}
manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
manifest[pkg]['deps'] = {}
if pkg in lockdown:
sha1_expected = lockdown[pkg][version]
sha1_data = bb.utils.sha1_file("npm/%s/%s" % (ud.pkgname, manifest[pkg]['tgz']))
if sha1_expected != sha1_data:
msg = "\nFile: '%s' has %s checksum %s when %s was expected" % (manifest[pkg]['tgz'], 'sha1', sha1_data, sha1_expected)
raise ChecksumError('Checksum mismatch!%s' % msg)
else:
logger.debug(2, "No lockdown data for %s@%s" % (pkg, version))
if 'dependencies' in data:
for obj in data['dependencies']:
logger.debug(2, "Found dep is %s" % str(obj))
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'], False)
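A hypothetical sketch of the NPM_LOCKDOWN data consulted above; it maps a package name to a version-to-sha1 table, which lockdown[pkg][version] reads to verify the downloaded tarball.
lockdown = {
    'left-pad': {                       # hypothetical package
        '1.3.0': 'da39a3ee5e6b...',     # expected sha1 of the tarball (truncated placeholder)
    },
}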
def download(self, ud, d):
"""Fetch url"""
jsondepobj = {}
shrinkobj = {}
lockdown = {}
if not os.listdir(ud.pkgdatadir) and os.path.exists(ud.fullmirror):
dest = d.getVar("DL_DIR")
bb.utils.mkdirhier(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
return
shwrf = d.getVar('NPM_SHRINKWRAP')
logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
if shwrf:
try:
with open(shwrf) as datafile:
shrinkobj = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
lckdf = d.getVar('NPM_LOCKDOWN')
logger.debug(2, "NPM lockdown file is %s" % lckdf)
if lckdf:
try:
with open(lckdf) as datafile:
lockdown = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
if ('name' not in shrinkobj):
self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
else:
self._getshrinkeddependencies(ud.pkgname, shrinkobj, ud.version, d, ud, lockdown, jsondepobj)
with open(ud.localpath, 'w') as outfile:
json.dump(jsondepobj, outfile)
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to a read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
dldir = d.getVar("DL_DIR")
logger.info("Creating tarball of npm data")
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d,
workdir=dldir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=dldir)


@@ -10,6 +10,7 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -33,20 +34,20 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.getVar('OSCDIR'), ud.host)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
pv = data.getVar("PV", d, 0)
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
def _buildosccommand(self, ud, d, command):
"""
@@ -54,7 +55,7 @@ class Osc(FetchMethod):
command is "fetch", "update", "info"
"""
basecmd = d.expand('${FETCHCMD_osc}')
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('protocol', 'osc')
@@ -83,25 +84,27 @@ class Osc(FetchMethod):
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d,
cleanup=[ud.localpath], workdir=os.path.join(ud.pkgdir + ud.path))
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return False
@@ -111,7 +114,7 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
@@ -120,8 +123,8 @@ class Osc(FetchMethod):
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)


@@ -1,12 +1,14 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for perforce
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2016 Kodak Alaris, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,187 +25,163 @@ BitBake 'Fetch' implementation for perforce
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import subprocess
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Perforce(FetchMethod):
""" Class to fetch from perforce repositories """
def supports(self, ud, d):
""" Check to see if a given url can be fetched with perforce. """
return ud.type in ['p4']
def urldata_init(self, ud, d):
"""
Initialize perforce specific variables within url data. If P4CONFIG is
provided by the env, use it. If P4PORT is specified by the recipe, use
its value, which may override the settings in P4CONFIG.
"""
ud.basecmd = d.getVar('FETCHCMD_p4')
if not ud.basecmd:
ud.basecmd = "/usr/bin/env p4"
ud.dldir = d.getVar('P4DIR')
if not ud.dldir:
ud.dldir = '%s/%s' % (d.getVar('DL_DIR'), 'p4')
path = ud.url.split('://')[1]
path = path.split(';')[0]
delim = path.find('@');
def doparse(url, d):
parm = {}
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
(ud.user, ud.pswd) = path.split('@')[0].split(':')
ud.path = path.split('@')[1]
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
ud.path = path
(host, port) = d.getVar('P4PORT').split(':')
user = ""
pswd = ""
ud.usingp4config = False
p4port = d.getVar('P4PORT')
if path.find(";") != -1:
keys=[]
values=[]
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
keys.append(key)
values.append(value)
if p4port:
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
if ud.path.find('/...') >= 0:
ud.pathisdir = True
else:
ud.pathisdir = False
parm = dict(zip(keys, values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
ud.pkgdir = os.path.join(ud.dldir, cleanedhost, cleanedpath)
return host, path, user, pswd, parm
doparse = staticmethod(doparse)
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
Build a p4 commandline. Valid commands are "changes", "print", and
"files". depot_filename is the full path to the file in the depot
including the trailing '#rev' value.
"""
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
if "cset" in parm:
return parm["cset"];
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
if ud.user:
p4opt += ' -u "%s"' % (ud.user)
p4date = d.getVar("P4DATE", True)
if "revision" in parm:
depot += "#%s" % (parm["revision"])
elif "label" in parm:
depot += "@%s" % (parm["label"])
elif p4date:
depot += "@%s" % (p4date)
if ud.pswd:
p4opt += ' -P "%s"' % (ud.pswd)
p4cmd = d.getVar('FETCHCMD_p4', True) or "p4"
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
if ud.host and not ud.usingp4config:
p4opt += ' -p %s' % (ud.host)
return cset.split(' ')[1]
getcset = staticmethod(getcset)
if hasattr(ud, 'revision') and ud.revision:
pathnrev = '%s@%s' % (ud.path, ud.revision)
def urldata_init(self, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)
base_path = path.replace('/...', '')
base_path = self._strip_leading_slashes(base_path)
if "label" in parm:
version = parm["label"]
else:
pathnrev = '%s' % (ud.path)
version = Perforce.getcset(d, path, host, user, pswd, parm)
if depot_filename:
if ud.pathisdir: # Remove leading path to obtain filename
filename = depot_filename[len(ud.path)-1:]
else:
filename = depot_filename[depot_filename.rfind('/'):]
filename = filename[:filename.find('#')] # Remove trailing '#rev'
if command == 'changes':
p4cmd = '%s%s changes -m 1 //%s' % (ud.basecmd, p4opt, pathnrev)
elif command == 'print':
if depot_filename != None:
p4cmd = '%s%s print -o "p4/%s" "%s"' % (ud.basecmd, p4opt, filename, depot_filename)
else:
raise FetchError('No depot file name provided to p4 %s' % command, ud.url)
elif command == 'files':
p4cmd = '%s%s files //%s' % (ud.basecmd, p4opt, pathnrev)
else:
raise FetchError('Invalid p4 command %s' % command, ud.url)
return p4cmd
def _p4listfiles(self, ud, d):
"""
Return a list of the file names which are present in the depot using the
'p4 files' command, including trailing '#rev' file revision indicator
"""
p4cmd = self._buildp4command(ud, d, 'files')
bb.fetch2.check_network_access(d, p4cmd, ud.url)
p4fileslist = runfetchcmd(p4cmd, d, True)
p4fileslist = [f.rstrip() for f in p4fileslist.splitlines()]
if not p4fileslist:
raise FetchError('Unable to fetch listing of p4 files from %s@%s' % (ud.host, ud.path))
count = 0
filelist = []
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
return filelist
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base_path.replace('/', '.'), version), d)
def download(self, ud, d):
""" Get the list of files, fetch each one """
filelist = self._p4listfiles(ud, d)
if not filelist:
raise FetchError('No files found in depot %s@%s' % (ud.host, ud.path))
"""
Fetch urls
"""
bb.utils.remove(ud.pkgdir, True)
bb.utils.mkdirhier(ud.pkgdir)
(host, depot, user, pswd, parm) = Perforce.doparse(ud.url, d)
for afile in filelist:
p4fetchcmd = self._buildp4command(ud, d, 'print', afile)
bb.fetch2.check_network_access(d, p4fetchcmd, ud.url)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup=[ud.localpath], workdir=ud.pkgdir)
module = parm.get('module', os.path.basename(path))
def clean(self, ud, d):
""" Cleanup p4 specific files and dirs"""
bb.utils.remove(ud.localpath)
bb.utils.remove(ud.pkgdir, True)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
def supports_srcrev(self):
return True
if pswd:
p4opt += " -P %s" % (pswd)
def _revision_key(self, ud, d, name):
""" Return a unique key for the url """
return 'p4:%s' % ud.pkgdir
if host:
p4opt += " -p %s" % (host)
def _latest_revision(self, ud, d, name):
""" Return the latest upstream scm revision number """
p4cmd = self._buildp4command(ud, d, "changes")
bb.fetch2.check_network_access(d, p4cmd, ud.url)
tip = runfetchcmd(p4cmd, d, True)
p4cmd = d.getVar('FETCHCMD_p4', True) or "p4"
if not tip:
raise FetchError('Could not determine the latest perforce changelist')
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(d.expand('${WORKDIR}'))
mktemp = d.getVar("FETCHCMD_p4mktemp", True) or d.expand("mktemp -d -q '${WORKDIR}/oep4.XXXXXX'")
tmpfile, errors = bb.process.run(mktemp)
tmpfile = tmpfile.strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", ud.url)
tipcset = tip.split(' ')[1]
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
def sortable_revision(self, ud, d, name):
""" Return a sortable revision number """
return False, self._build_revision(ud, d)
os.chdir(tmpfile)
logger.info("Fetch " + ud.url)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
p4file = [f.rstrip() for f in p4file.splitlines()]
def _build_revision(self, ud, d):
return ud.revision
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, ud.url)
count = 0
for file in p4file:
list = file.split()
if list[2] == "delete":
continue
dest = list[0][len(path)+1:]
where = dest.find("#")
subprocess.call("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]), shell=True)
count = count + 1
if count == 0:
logger.error("Fetch: No files gathered from the P4 fetch")
raise FetchError("Fetch: No files gathered from the P4 fetch", ud.url)
runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)
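For orientation, hedged examples of the URL shapes the two implementations above parse (all values hypothetical). The newer code takes only user, password and depot path from the URL and derives the host from P4PORT or P4CONFIG; the older code embedded host and port in the URL itself.
# newer style:
SRC_URI = "p4://user:passwd@depot/main/..."
P4PORT = "perforce.example.com:1666"
# older style, with host:port before the '@' and optional ;key=value parameters:
SRC_URI = "p4://user:passwd:perforce.example.com:1666@depot/main;label=release-1.0"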


@@ -25,6 +25,7 @@ BitBake "Fetch" repo (git) implementation
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
@@ -50,17 +51,17 @@ class Repo(FetchMethod):
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = d.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch))
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
def download(self, ud, d):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = d.getVar("REPODIR") or os.path.join(d.getVar("DL_DIR"), "repo")
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
@@ -68,23 +69,24 @@ class Repo(FetchMethod):
else:
username = ""
repodir = os.path.join(codir, "repo")
bb.utils.mkdirhier(repodir)
if not os.path.exists(os.path.join(repodir, ".repo")):
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d, workdir=repodir)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.repo' --exclude='.git'"
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d, workdir=codir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
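A hedged recipe-level sketch of the URL this fetcher consumes (host and path hypothetical); manifest names without an '.xml' suffix get one appended, as handled above:
SRC_URI = "repo://android.googlesource.com/platform/manifest;protocol=https;branch=master;manifest=default.xml"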


@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for Amazon AWS S3.
Class for fetching files from Amazon S3 using the AWS Command Line Interface.
The aws tool must be correctly installed and configured prior to use.
"""
# Copyright (C) 2017, Andre McCurdy <armccurdy@gmail.com>
#
# Based in part on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.request, urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with s3.
"""
return ud.type in ['s3']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.basecmd = d.getVar("FETCHCMD_s3") or "/usr/bin/env aws s3"
def download(self, ud, d):
"""
Fetch urls
Assumes localpath was called first
"""
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
runfetchcmd(cmd, d)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError("The aws cp command returned success for s3://%s%s but %s doesn't exist?!" % (ud.host, ud.path, ud.localpath))
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The aws cp command for s3://%s%s resulted in a zero size file?! Deleting and failing since this isn't right." % (ud.host, ud.path))
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL
"""
cmd = '%s ls s3://%s%s' % (ud.basecmd, ud.host, ud.path)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d)
# "aws s3 ls s3://mybucket/foo" will exit with success even if the file
# is not found, so check output of the command to confirm success.
if not output:
raise FetchError("The aws ls command for s3://%s%s gave empty output" % (ud.host, ud.path))
return True
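A brief usage sketch (bucket and key hypothetical): the class maps s3://<host><path> onto 'aws s3 cp', so a recipe might carry:
SRC_URI = "s3://my-bucket/path/to/archive.tar.gz"
SRC_URI[sha256sum] = "<checksum>"   # recommended, per recommends_checksum() above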


@@ -61,11 +61,14 @@ SRC_URI = "sftp://user@host.example.com/dir/path.file.txt"
import os
import bb
import urllib.request, urllib.parse, urllib.error
import urllib
import commands
from bb import data
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
class SFTP(FetchMethod):
"""Class to fetch urls via 'sftp'"""
@@ -90,19 +93,19 @@ class SFTP(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, ud, d):
"""Fetch urls"""
urlo = URI(ud.url)
basecmd = 'sftp -oBatchMode=yes'
basecmd = 'sftp -oPasswordAuthentication=no'
port = ''
if urlo.port:
port = '-P %d' % urlo.port
urlo.port = None
dldir = d.getVar('DL_DIR')
dldir = data.getVar('DL_DIR', d, True)
lpath = os.path.join(dldir, ud.localfile)
user = ''
@@ -118,7 +121,8 @@ class SFTP(FetchMethod):
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)
cmd = '%s %s %s %s' % (basecmd, port, commands.mkarg(remote),
commands.mkarg(lpath))
bb.fetch2.check_network_access(d, cmd, ud.url)
runfetchcmd(cmd, d)


@@ -43,6 +43,7 @@ IETF secsh internet draft:
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
@@ -86,11 +87,10 @@ class SSH(FetchMethod):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
urldata.localpath = os.path.join(d.getVar('DL_DIR', True), os.path.basename(path))
def download(self, urldata, d):
dldir = d.getVar('DL_DIR')
dldir = d.getVar('DL_DIR', True)
m = __pattern__.match(urldata.url)
path = m.group('path')
@@ -113,10 +113,12 @@ class SSH(FetchMethod):
fr = host
fr += ':%s' % path
import commands
cmd = 'scp -B -r %s %s %s/' % (
portarg,
fr,
dldir
commands.mkarg(fr),
commands.mkarg(dldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)


@@ -28,6 +28,7 @@ import sys
import logging
import bb
import re
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -49,26 +50,21 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.basecmd = d.getVar('FETCHCMD_svn')
ud.basecmd = d.getVar('FETCHCMD_svn', True)
ud.module = ud.parm["module"]
if not "path_spec" in ud.parm:
ud.path_spec = ud.module
else:
ud.path_spec = ud.parm["path_spec"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.expand('${SVNDIR}'), ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildsvncommand(self, ud, d, command):
"""
@@ -78,9 +74,9 @@ class Svn(FetchMethod):
proto = ud.parm.get('protocol', 'svn')
svn_ssh = None
if proto == "svn+ssh" and "ssh" in ud.parm:
svn_ssh = ud.parm["ssh"]
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
svn_rsh = ud.parm["rsh"]
svnroot = ud.host + ud.path
@@ -106,14 +102,14 @@ class Svn(FetchMethod):
if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
svncmd = "%s co %s %s://%s%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, transportuser, svnroot, ud.module, suffix, ud.path_spec)
svncmd = "%s co %s %s://%s%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, transportuser, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (ud.basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
if svn_ssh:
svncmd = "SVN_SSH=\"%s\" %s" % (svn_ssh, svncmd)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
return svncmd
@@ -125,32 +121,35 @@ class Svn(FetchMethod):
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + ud.url)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case it's an older working copy format
try:
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
runfetchcmd(ud.basecmd + " upgrade", d)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d, workdir=ud.moddir)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.svn'"
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d,
cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean SVN specific files and dirs """
@@ -172,7 +171,7 @@ class Svn(FetchMethod):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"), ud.url)
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "log1"), d, True)


@@ -25,42 +25,15 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import re
import tempfile
import subprocess
import os
import logging
import bb
import bb.progress
import urllib.request, urllib.parse, urllib.error
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from wget output.
Note: relies on --progress=dot (with -v or without -q/-nv) being
specified on the wget command line.
"""
def __init__(self, d):
super(WgetProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def writeline(self, line):
percs = re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line)
if percs:
progress = int(percs[-1][0])
rate = percs[-1][1] + '/s'
self.update(progress, rate)
return False
return True
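As a rough illustration (sample line shaped like wget's --progress=dot output; figures hypothetical) of what writeline() extracts:
#   '  1500K .......... .......... 47% 1.21M 2s'
# re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line) -> [('47', '1.21M')]
# so update() is called with progress=47 and rate='1.21M/s'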
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
@@ -83,19 +56,15 @@ class Wget(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
def _runwget(self, ud, d, command, quiet):
progresshandler = WgetProgressHandler(d)
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler)
bb.fetch2.check_network_access(d, command)
runfetchcmd(command, d, quiet)
def download(self, ud, d):
"""Fetch urls"""
@@ -103,13 +72,10 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
if 'downloadfilename' in ud.parm:
dldir = d.getVar("DL_DIR")
dldir = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
fetchcmd += " -O " + dldir + os.sep + ud.localfile
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again
@@ -130,485 +96,11 @@ class Wget(FetchMethod):
return True
def checkstatus(self, fetch, ud, d, try_again=True):
import urllib.request, urllib.error, urllib.parse, socket, http.client
from urllib.response import addinfourl
from bb.fetch2 import FetchConnectionCache
def checkstatus(self, ud, d):
class HTTPConnectionCache(http.client.HTTPConnection):
if fetch.connection_cache:
def connect(self):
"""Connect to the host and port specified in __init__."""
uri = ud.url.split(";")[0]
fetchcmd = self.basecmd + " --spider '%s'" % uri
sock = fetch.connection_cache.get_connection(self.host, self.port)
if sock:
self.sock = sock
else:
self.sock = socket.create_connection((self.host, self.port),
self.timeout, self.source_address)
fetch.connection_cache.add_connection(self.host, self.port, self.sock)
self._runwget(ud, d, fetchcmd, True)
if self._tunnel_host:
self._tunnel()
class CacheHTTPHandler(urllib.request.HTTPHandler):
def http_open(self, req):
return self.do_open(HTTPConnectionCache, req)
def do_open(self, http_class, req):
"""Return an addinfourl object for the request, using http_class.
http_class must implement the HTTPConnection API from httplib.
The addinfourl return value is a file-like object. It also
has methods and attributes including:
- info(): return a mimetools.Message object for the headers
- geturl(): return the original request URL
- code: HTTP status code
"""
host = req.host
if not host:
raise urllib.error.URLError('no host given')
h = http_class(host, timeout=req.timeout) # will parse host:port
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update(dict((k, v) for k, v in list(req.headers.items())
if k not in headers))
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
# Don't close connection when connection_cache is enabled,
if fetch.connection_cache is None:
headers["Connection"] = "close"
else:
headers["Connection"] = "Keep-Alive" # Works for HTTP/1.0
headers = dict(
(name.title(), val) for name, val in list(headers.items()))
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
h.request(req.get_method(), req.selector, req.data, headers)
except socket.error as err: # XXX what error?
# Don't close connection when cache is enabled.
if fetch.connection_cache is None:
h.close()
raise urllib.error.URLError(err)
else:
try:
r = h.getresponse(buffering=True)
except TypeError: # buffering kw not supported
r = h.getresponse()
# Pick apart the HTTPResponse object to get the addinfourl
# object initialized properly.
# Wrap the HTTPResponse object in socket's file object adapter
# for Windows. That adapter calls recv(), so delegate recv()
# to read(). This weird wrapping allows the returned object to
# have readline() and readlines() methods.
# XXX It might be better to extract the read buffering code
# out of socket._fileobject() and into a base class.
r.recv = r.read
# no data, just have to read
r.read()
class fp_dummy(object):
def read(self):
return ""
def readline(self):
return ""
def close(self):
pass
resp = addinfourl(fp_dummy(), r.msg, req.get_full_url())
resp.code = r.status
resp.msg = r.reason
# Close connection when server request it.
if fetch.connection_cache is not None:
if 'Connection' in r.msg and r.msg['Connection'] == 'close':
fetch.connection_cache.remove_connection(h.host, h.port)
return resp
class HTTPMethodFallback(urllib.request.BaseHandler):
"""
Fallback to GET if HEAD is not allowed (405 HTTP error)
"""
def http_error_405(self, req, fp, code, msg, headers):
fp.read()
fp.close()
newheaders = dict((k,v) for k,v in list(req.headers.items())
if k.lower() not in ("content-length", "content-type"))
return self.parent.open(urllib.request.Request(req.get_full_url(),
headers=newheaders,
origin_req_host=req.origin_req_host,
unverifiable=True))
"""
Some servers (e.g. GitHub archives, hosted on Amazon S3) return 403
Forbidden when they actually mean 405 Method Not Allowed.
"""
http_error_403 = http_error_405
"""
Some servers (e.g. FusionForge) return 406 Not Acceptable when they
actually mean 405 Method Not Allowed.
"""
http_error_406 = http_error_405
class FixedHTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
"""
urllib2.HTTPRedirectHandler resets the method to GET on redirect,
when we want to follow redirects using the original method.
"""
def redirect_request(self, req, fp, code, msg, headers, newurl):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = lambda: req.get_method()
return newreq
exported_proxies = export_proxies(d)
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
if exported_proxies:
handlers.append(urllib.request.ProxyHandler())
handlers.append(CacheHTTPHandler())
# XXX: Since Python 2.7.9 ssl cert validation is enabled by default
# see PEP-0476, this causes verification errors on some https servers
# so disable by default.
import ssl
if hasattr(ssl, '_create_unverified_context'):
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib.request.build_opener(*handlers)
try:
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if ud.user:
add_basic_auth(ud.user, r)
try:
import netrc, urllib.parse
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
opener.open(r)
except urllib.error.URLError as e:
if try_again:
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
return False
return True
def _parse_path(self, regex, s):
"""
Find and group name, version and archive type in the given string s
"""
m = regex.search(s)
if m:
pname = ''
pver = ''
ptype = ''
mdict = m.groupdict()
if 'name' in mdict.keys():
pname = mdict['name']
if 'pver' in mdict.keys():
pver = mdict['pver']
if 'type' in mdict.keys():
ptype = mdict['type']
bb.debug(3, "_parse_path: %s, %s, %s" % (pname, pver, ptype))
return (pname, pver, ptype)
return None
def _modelate_version(self, version):
if version[0] in ['.', '-']:
if version[1].isdigit():
version = version[1] + version[0] + version[2:len(version)]
else:
version = version[1:len(version)]
version = re.sub('-', '.', version)
version = re.sub('_', '.', version)
version = re.sub('(rc)+', '.1000.', version)
version = re.sub('(beta)+', '.100.', version)
version = re.sub('(alpha)+', '.10.', version)
if version[0] == 'v':
version = version[1:len(version)]
return version
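Hedged worked examples of the normalisation above:
#   '1.0rc2'   -> '1.0.1000.2'   ('rc' becomes '.1000.')
#   '1.2beta1' -> '1.2.100.1'    ('beta' becomes '.100.')
#   'v2.5'     -> '2.5'          (leading 'v' stripped)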
def _vercmp(self, old, new):
"""
Check whether 'new' is newer than 'old' version. We use existing vercmp() for the
purpose. PE is cleared in comparison as it's not for build, and PR is cleared too
for simplicity, as it's somewhat difficult to get from the various upstream formats
"""
(oldpn, oldpv, oldsuffix) = old
(newpn, newpv, newsuffix) = new
"""
Check for a new suffix type that we have never heard of before
"""
if (newsuffix):
m = self.suffix_regex_comp.search(newsuffix)
if not m:
bb.warn("%s has a possible unknown suffix: %s" % (newpn, newsuffix))
return False
"""
Not our package so ignore it
"""
if oldpn != newpn:
return False
oldpv = self._modelate_version(oldpv)
newpv = self._modelate_version(newpv)
return bb.utils.vercmp(("0", oldpv, ""), ("0", newpv, ""))
def _fetch_index(self, uri, ud, d):
"""
Run fetch checkstatus to get directory information
"""
f = tempfile.NamedTemporaryFile()
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True)
fetchresult = f.read()
except bb.fetch2.BBFetchException:
fetchresult = ""
f.close()
return fetchresult
def _check_latest_version(self, url, package, package_regex, current_version, ud, d):
"""
Return the latest version of a package inside a given directory path
If error or no version, return ""
"""
valid = 0
version = ['', '', '']
bb.debug(3, "VersionURL: %s" % (url))
soup = BeautifulSoup(self._fetch_index(url, ud, d), "html.parser", parse_only=SoupStrainer("a"))
if not soup:
bb.debug(3, "*** %s NO SOUP" % (url))
return ""
for line in soup.find_all('a', href=True):
bb.debug(3, "line['href'] = '%s'" % (line['href']))
bb.debug(3, "line = '%s'" % (str(line)))
newver = self._parse_path(package_regex, line['href'])
if not newver:
newver = self._parse_path(package_regex, str(line))
if newver:
bb.debug(3, "Upstream version found: %s" % newver[1])
if valid == 0:
version = newver
valid = 1
elif self._vercmp(version, newver) < 0:
version = newver
pupver = re.sub('_', '.', version[1])
bb.debug(3, "*** %s -> UpstreamVersion = %s (CurrentVersion = %s)" %
(package, pupver or "N/A", current_version[1]))
if valid:
return pupver
return ""
def _check_latest_version_by_dir(self, dirver, package, package_regex,
current_version, ud, d):
"""
Scan every directory in order to get upstream version.
"""
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile("(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
else:
version_dir[1] = dirver
dirs_uri = bb.fetch.encodeurl([ud.type, ud.host,
ud.path.split(dirver)[0], ud.user, ud.pswd, {}])
bb.debug(3, "DirURL: %s, %s" % (dirs_uri, package))
soup = BeautifulSoup(self._fetch_index(dirs_uri, ud, d), "html.parser", parse_only=SoupStrainer("a"))
if not soup:
return version[1]
for line in soup.find_all('a', href=True):
s = dirver_regex.search(line['href'].strip("/"))
if s:
sver = s.group('ver')
# When the prefix is part of the version directory, we need to
# ensure that only the version directory is used, so remove
# any previous directories if they exist.
#
# Example: pfx = '/dir1/dir2/v' and version = '2.5' the expected
# result is v2.5.
spfx = s.group('pfx').split('/')[-1]
version_dir_new = ['', sver, '']
if self._vercmp(version_dir, version_dir_new) <= 0:
dirver_new = spfx + sver
path = ud.path.replace(dirver, dirver_new, True) \
.split(package)[0]
uri = bb.fetch.encodeurl([ud.type, ud.host, path,
ud.user, ud.pswd, {}])
pupver = self._check_latest_version(uri,
package, package_regex, current_version, ud, d)
if pupver:
version[1] = pupver
version_dir = version_dir_new
return version[1]
def _init_regexes(self, package, ud, d):
"""
Match as many patterns as possible such as:
gnome-common-2.20.0.tar.gz (most common format)
gtk+-2.90.1.tar.gz
xf86-input-synaptics-12.6.9.tar.gz
dri2proto-2.3.tar.gz
blktool_4.orig.tar.gz
libid3tag-0.15.1b.tar.gz
unzip552.tar.gz
icu4c-3_6-src.tgz
genext2fs_1.3.orig.tar.gz
gst-fluendo-mp3
"""
# match most patterns which use "-" as the separator before version digits
pn_prefix1 = "[a-zA-Z][a-zA-Z0-9]*([-_][a-zA-Z]\w+)*\+?[-_]"
# a loose pattern such as for unzip552.tar.gz
pn_prefix2 = "[a-zA-Z]+"
# a loose pattern such as for 80325-quicky-0.4.tar.gz
pn_prefix3 = "[0-9]+[-]?[a-zA-Z]+"
# Save the Package Name (pn) Regex for use later
pn_regex = "(%s|%s|%s)" % (pn_prefix1, pn_prefix2, pn_prefix3)
# match version
pver_regex = "(([A-Z]*\d+[a-zA-Z]*[\.\-_]*)+)"
# match arch
parch_regex = "-source|_all_"
# src.rpm extension was added only for rpm packages. Can be removed if rpm
# packages will always be considered as having to be manually upgraded
psuffix_regex = "(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile("(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
% (pn_regex, pver_regex, parch_regex, psuffix_regex))
self.suffix_regex_comp = re.compile(psuffix_regex)
# compile regex, can be specific by package or generic regex
pn_regex = d.getVar('UPSTREAM_CHECK_REGEX')
if pn_regex:
package_custom_regex_comp = re.compile(pn_regex)
else:
version = self._parse_path(package_regex_comp, package)
if version:
package_custom_regex_comp = re.compile(
"(?P<name>%s)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s)" %
(re.escape(version[0]), pver_regex, parch_regex, psuffix_regex))
else:
package_custom_regex_comp = None
return package_custom_regex_comp
def latest_versionstring(self, ud, d):
"""
Manipulate the URL and try to obtain the latest package version
sanity check to ensure same name and type.
"""
package = ud.path.split("/")[-1]
current_version = ['', d.getVar('PV'), '']
"""possible to have no version in pkg name, such as spectrum-fw"""
if not re.search("\d+", package):
current_version[1] = re.sub('_', '.', current_version[1])
current_version[1] = re.sub('-', '.', current_version[1])
return (current_version[1], '')
package_regex = self._init_regexes(package, ud, d)
if package_regex is None:
bb.warn("latest_versionstring: package %s don't match pattern" % (package))
return ('', '')
bb.debug(3, "latest_versionstring, regex: %s" % (package_regex.pattern))
uri = ""
regex_uri = d.getVar("UPSTREAM_CHECK_URI")
if not regex_uri:
path = ud.path.split(package)[0]
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile("(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m.group('dirver')
dirver_pn_regex = re.compile("%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):
return (self._check_latest_version_by_dir(dirver,
package, package_regex, current_version, ud, d), '')
uri = bb.fetch.encodeurl([ud.type, ud.host, path, ud.user, ud.pswd, {}])
else:
uri = regex_uri
return (self._check_latest_version(uri, package, package_regex,
current_version, ud, d), '')


@@ -1,552 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005 Holger Hans Peter Freyther
# Copyright (C) 2005 ROAD GmbH
# Copyright (C) 2006 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import optparse
import warnings
import fcntl
import bb
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb import cookerdata
logger = logging.getLogger("BitBake")
class BBMainException(Exception):
pass
def present_options(optionlist):
if len(optionlist) > 1:
return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
else:
return optionlist[0]
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
elif option.dest == 'servertype':
valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
return optparse.IndentedHelpFormatter.format_option(self, option)
def list_extension_modules(pkg, checkattr):
"""
Lists extension modules in a specific Python package
(e.g. UIs, servers). NOTE: Calling this function will import all of the
submodules of the specified module in order to check for the specified
attribute; this can have unusual side-effects. As a result, this should
only be called when displaying help text or error messages.
Parameters:
pkg: previously imported Python package to list
checkattr: attribute to look for in module to determine if it's valid
as the type of extension you are looking for
"""
import pkgutil
pkgdir = os.path.dirname(pkg.__file__)
modules = []
for _, modulename, _ in pkgutil.iter_modules([pkgdir]):
if os.path.isdir(os.path.join(pkgdir, modulename)):
# ignore directories
continue
try:
module = __import__(pkg.__name__, fromlist=[modulename])
except:
# If we can't import it, it's not valid
continue
module_if = getattr(module, modulename)
if getattr(module_if, 'hidden_extension', False):
continue
if not checkattr or hasattr(module_if, checkattr):
modules.append(modulename)
return modules
def import_extension_module(pkg, modulename, checkattr):
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__(pkg.__name__, fromlist=[modulename])
return getattr(module, modulename)
except AttributeError:
modules = present_options(list_extension_modules(pkg, checkattr))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. '
'Valid extension modules: %s' % (modulename, pkg.__name__, modules))
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warning(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-a", "--tryaltconfigs", action="store_true",
dest="tryaltconfigs", default=False,
help="Continue with builds by trying to use alternative providers "
"where possible.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-t", "--servertype", action="store", dest="servertype",
default=["process", "xmlrpc"]["BBSERVER" in os.environ],
help="Choose which server type to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("", "--foreground", action="store_true",
help="Run bitbake server in foreground.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake server to bind to.")
parser.add_option("-T", "--idle-timeout", type=int,
default=int(os.environ.get("BBTIMEOUT", "0")),
help="Set timeout to unload bitbake server due to inactivity")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed will be built.")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate the remote server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="store", dest="runall",
help="Run the specified task for all build targets and their dependencies.")
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
if options.quiet and options.debug:
parser.error("options --quiet and --debug are mutually exclusive")
# use configuration files from environment variables
if "BBPRECONF" in os.environ:
options.prefile.append(os.environ["BBPRECONF"])
if "BBPOSTCONF" in os.environ:
options.postfile.append(os.environ["BBPOSTCONF"])
# fill in proper log name if not supplied
if options.writeeventlog is not None and len(options.writeeventlog) == 0:
from datetime import datetime
eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
options.writeeventlog = eventlog
# if BBSERVER says to autodetect, let's do that
if options.remote_server:
port = -1
if options.remote_server != 'autostart':
host, port = options.remote_server.split(":", 2)
port = int(port)
# a port of -1 means automatic: read the port from the bitbake.lock
# file; this is a bit tricky, but we always expect to be in the base
# of the build directory anyway, so that we have a chance to start
# the server later if needed
if port == -1:
lock_location = "./bitbake.lock"
# we try to read the address at all times; if the server is not started,
# we'll try to start it after the first connect fails, below
try:
lf = open(lock_location, 'r')
remotedef = lf.readline()
[host, port] = remotedef.split(":")
port = int(port)
lf.close()
options.remote_server = remotedef
except Exception as e:
if options.remote_server != 'autostart':
raise BBMainException("Failed to read bitbake.lock (%s), invalid port" % str(e))
return options, targets[1:]
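# Editor's illustration, not part of the original source: a minimal,
# self-contained sketch of how the stacked count-type options above
# accumulate with optparse.
from optparse import OptionParser

_demo = OptionParser()
_demo.add_option("-D", "--debug", action="count", dest="debug", default=0)
_demo.add_option("-q", "--quiet", action="count", dest="quiet", default=0)
_opts, _args = _demo.parse_args(["-DD", "-q", "world"])
assert _opts.debug == 2 and _opts.quiet == 1 and _args == ["world"]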
def start_server(servermodule, configParams, configuration, features):
server = servermodule.BitBakeServer()
single_use = not configParams.server_only and os.getenv('BBSERVER') != 'autostart'
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)), single_use=single_use,
idle_timeout=configParams.idle_timeout)
configuration.interface = [server.serverImpl.host, server.serverImpl.port]
else:
server.initServer(single_use=single_use)
configuration.interface = []
try:
configuration.setServerRegIdleCallback(server.getServerIdleCB())
cooker = bb.cooker.BBCooker(configuration, features)
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
while hasattr(server, "event_queue"):
import queue
try:
event = server.event_queue.get(block=False)
except (queue.Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise
if not configParams.foreground:
server.detach()
cooker.shutdown()
cooker.lock.close()
return server
def bitbake_main(configParams, configuration):
# Python multiprocessing requires /dev/shm on Linux
if sys.platform.startswith('linux') and not os.access('/dev/shm', os.W_OK | os.X_OK):
raise BBMainException("FATAL: /dev/shm does not exist or is not writable")
# Unbuffer stdout to avoid log truncation in the event
# of a disorderly exit, as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
# Set the O_SYNC flag so writes are effectively unbuffered
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
except:
pass
configuration.setConfigParameters(configParams)
if configParams.server_only:
if configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--server-only' is defined, we must set the "
"servertype as 'xmlrpc'.\n")
if not configParams.bind:
raise BBMainException("FATAL: The '--server-only' option requires a name/address "
"to bind to with the -B option.\n")
else:
try:
# Check that the port is a number
int(configParams.bind.split(":")[1])
except (ValueError, IndexError):
raise BBMainException(
"FATAL: Malformed host:port bind parameter")
if configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
else "the '--remote-server' option"))
elif configParams.foreground:
raise BBMainException("FATAL: The '--foreground' option can only be used "
"with --server-only.\n")
if configParams.bind and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.remote_server and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--remote-server' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.observe_only and (not configParams.remote_server or configParams.bind):
raise BBMainException("FATAL: '--observe-only' can only be used by UI clients "
"connecting to a server.\n")
if configParams.kill_server and not configParams.remote_server:
raise BBMainException("FATAL: '--kill-server' can only be used to "
"terminate a remote server")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
server, server_connection, ui_module = setup_bitbake(configParams, configuration)
if server_connection is None and configParams.kill_server:
return 0
if not configParams.server_only:
if configParams.status_only:
server_connection.terminate()
return 0
try:
return ui_module.main(server_connection.connection, server_connection.events,
configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host,
server.serverImpl.port))
if configParams.foreground:
server.serverImpl.serve_forever()
return 0
return 1
def setup_bitbake(configParams, configuration, extrafeatures=None):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
if not configParams.status_only:
# In status only mode there are no logs and no UI
logger.addHandler(handler)
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
if configParams.server_only:
featureset = []
ui_module = None
else:
ui_module = import_extension_module(bb.ui, configParams.ui, 'main')
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
if configParams.server_only:
for param in ('prefile', 'postfile'):
value = getattr(configParams, param)
if value:
setattr(configuration, "%s_server" % param, value)
param = "%s_server" % param
if extrafeatures:
for feature in extrafeatures:
if not feature in featureset:
featureset.append(feature)
servermodule = import_extension_module(bb.server,
configParams.servertype,
'BitBakeServer')
if configParams.remote_server:
if os.getenv('BBSERVER') == 'autostart':
if configParams.remote_server == 'autostart' or \
not servermodule.check_connection(configParams.remote_server, timeout=2):
configParams.bind = 'localhost:0'
srv = start_server(servermodule, configParams, configuration, featureset)
configParams.remote_server = '%s:%d' % tuple(configuration.interface)
bb.event.ui_queue = []
# we start a stub server that is actually a XMLRPClient that connects to a real server
from bb.server.xmlrpc import BitBakeXMLRPCClient
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only,
configParams.xmlrpctoken)
server.saveConnectionDetails(configParams.remote_server)
else:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration, featureset)
bb.event.ui_queue = []
if configParams.server_only:
server_connection = None
else:
try:
server_connection = server.establishConnection(featureset)
except Exception as e:
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
return None, None, None
server_connection.setupEventQueue()
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
return server, server_connection, ui_module

View File

@@ -19,22 +19,11 @@
from bb.utils import better_compile, better_exec
def insert_method(modulename, code, fn, lineno):
def insert_method(modulename, code, fn):
"""
Add the code of a module. The methods are simply
added; no checking is done
"""
comp = better_compile(code, modulename, fn, lineno=lineno)
comp = better_compile(code, modulename, fn)
better_exec(comp, None, code, fn)
compilecache = {}
def compile_cache(code):
h = hash(code)
if h in compilecache:
return compilecache[h]
return None
def compile_cache_add(code, compileobj):
h = hash(code)
compilecache[h] = compileobj
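# Editor's sketch, not part of the original diff: a self-contained version of
# the compile-cache pattern above, with the builtin compile() standing in for
# bb.utils.better_compile.
_cache = {}

def _compile_cached(code, filename="demo.py"):
    obj = _cache.get(hash(code))
    if obj is None:
        obj = compile(code, filename, "exec")
        _cache[hash(code)] = obj
    return obj

_ns = {}
exec(_compile_cached("def greet():\n    return 'hello'\n"), _ns)
assert _ns["greet"]() == "hello"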

View File

@@ -129,7 +129,7 @@ def getDiskData(BBDirs, configuration):
bb.utils.mkdirhier(path)
dev = getMountedDev(path)
# Use path/action as the key
devDict[(path, action)] = [dev, minSpace, minInode]
devDict[os.path.join(path, action)] = [dev, minSpace, minInode]
return devDict
@@ -141,7 +141,7 @@ def getInterval(configuration):
spaceDefault = 50 * 1024 * 1024
inodeDefault = 5 * 1024
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL")
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL", True)
if not interval:
return spaceDefault, inodeDefault
else:
@@ -179,7 +179,7 @@ class diskMonitor:
self.enableMonitor = False
self.configuration = configuration
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
if BBDirs:
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
@@ -205,25 +205,22 @@ class diskMonitor:
""" Take action for the monitor """
if self.enableMonitor:
diskUsage = {}
for k, attributes in self.devDict.items():
path, action = k
dev, minSpace, minInode = attributes
for k in self.devDict:
path = os.path.dirname(k)
action = os.path.basename(k)
dev = self.devDict[k][0]
minSpace = self.devDict[k][1]
minInode = self.devDict[k][2]
st = os.statvfs(path)
# The available free space, integer number
# The free space, floating point number
freeSpace = st.f_bavail * st.f_frsize
# Send all relevant information in the event.
freeSpaceRoot = st.f_bfree * st.f_frsize
totalSpace = st.f_blocks * st.f_frsize
diskUsage[dev] = bb.event.DiskUsageSample(freeSpace, freeSpaceRoot, totalSpace)
if minSpace and freeSpace < minSpace:
# Always show the warning; self.checked will always be False if the action is WARN
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
logger.warning("The free space of %s (%s) is running low (%.3fGB left)" % \
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[k] = freeSpace
@@ -238,7 +235,7 @@ class diskMonitor:
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
# The free inodes, integer number
# The free inodes, floating point number
freeInode = st.f_favail
if minInode and freeInode < minInode:
@@ -249,7 +246,7 @@ class diskMonitor:
continue
# Always show the warning; self.checked will always be False if the action is WARN
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
logger.warning("The free inode of %s (%s) is running low (%.3fK left)" % \
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
(path, dev, freeInode / 1024.0))
self.preFreeI[k] = freeInode
@@ -263,6 +260,4 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
bb.event.fire(bb.event.MonitorDiskEvent(diskUsage), self.configuration)
return
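# Editor's sketch, not part of the original diff: computing the quantities the
# disk monitor samples above via os.statvfs (POSIX only; /tmp is an arbitrary
# example path).
import os

_st = os.statvfs("/tmp")
_freeSpace = _st.f_bavail * _st.f_frsize      # space available to non-root users
_freeSpaceRoot = _st.f_bfree * _st.f_frsize   # space available to root
_totalSpace = _st.f_blocks * _st.f_frsize
_freeInode = _st.f_favail
print("%.3fGB free, %.3fK inodes free" % (_freeSpace / 1024 / 1024 / 1024.0,
                                          _freeInode / 1024.0))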

View File

@@ -57,7 +57,7 @@ class BBLogFormatter(logging.Formatter):
}
color_enabled = False
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(29,38))
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(29,38)
COLORS = {
DEBUG3 : CYAN,
@@ -90,9 +90,8 @@ class BBLogFormatter(logging.Formatter):
if self.color_enabled:
record = self.colorize(record)
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
@@ -151,7 +150,7 @@ loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = []
def init_msgconfig(verbose, debug, debug_domains=None):
def init_msgconfig(verbose, debug, debug_domains = []):
"""
Set the default verbosity and debug levels and configure the logger
"""
@@ -159,10 +158,7 @@ def init_msgconfig(verbose, debug, debug_domains=None):
bb.msg.loggerDefaultVerbose = verbose
if verbose:
bb.msg.loggerVerboseLogs = True
if debug_domains:
bb.msg.loggerDefaultDomains = debug_domains
else:
bb.msg.loggerDefaultDomains = []
bb.msg.loggerDefaultDomains = debug_domains
def constructLogOptions():
debug = loggerDefaultDebugLevel
@@ -182,12 +178,9 @@ def constructLogOptions():
debug_domains["BitBake.%s" % domainarg] = logging.DEBUG - dlevel + 1
return level, debug_domains
def addDefaultlogFilter(handler, cls = BBLogFilter, forcelevel=None):
def addDefaultlogFilter(handler, cls = BBLogFilter):
level, debug_domains = constructLogOptions()
if forcelevel is not None:
level = forcelevel
cls(handler, level, debug_domains)
#
@@ -201,18 +194,3 @@ def fatal(msgdomain, msg):
logger = logging.getLogger("BitBake")
logger.critical(msg)
sys.exit(1)
def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers=False, color='auto'):
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:
logger.addHandler(console)
else:
logger.handlers = [console]
logger.setLevel(level)
return logger
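# Editor's sketch, not part of the original diff: the same pattern as
# logger_create() above, using the stdlib Formatter in place of
# bb.msg.BBLogFormatter so the snippet is self-contained.
import logging
import sys

def _logger_create_demo(name, output=sys.stderr, level=logging.INFO):
    log = logging.getLogger(name)
    console = logging.StreamHandler(output)
    console.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    log.handlers = [console]
    log.setLevel(level)
    return log

_logger_create_demo("BitBake.Demo").info("hello")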

View File

@@ -26,10 +26,9 @@ File parsers for the BitBake build tools.
handlers = []
import errno
import logging
import os
import stat
import logging
import bb
import bb.utils
import bb.siggen
@@ -71,12 +70,7 @@ def cached_mtime_noerror(f):
return __mtime_cache[f]
def update_mtime(f):
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
return 0
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]
def update_cache(f):
@@ -87,7 +81,7 @@ def update_cache(f):
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
s = (f, cached_mtime_noerror(f))
if s not in deps:
deps.append(s)
@@ -95,7 +89,7 @@ def mark_dependency(d, f):
def check_dependency(d, f):
s = (f, cached_mtime_noerror(f))
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
return s in deps
def supports(fn, data):
@@ -123,17 +117,17 @@ def init_parser(d):
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = d.getVar("BBPATH")
bbpath = d.getVar("BBPATH", True)
newfn, attempts = bb.utils.which(bbpath, fn, history=True)
for af in attempts:
mark_dependency(d, af)
if not newfn:
raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn
mark_dependency(d, fn)
if not os.path.isfile(fn):
raise IOError(errno.ENOENT, "file %s not found" % fn)
raise IOError("file %s not found" % fn)
return fn
@@ -161,8 +155,8 @@ def vars_from_file(mypkg, d):
def get_file_depends(d):
'''Return the dependent files'''
dep_files = []
depends = d.getVar('__base_depends', False) or []
depends = depends + (d.getVar('__depends', False) or [])
depends = d.getVar('__base_depends', True) or []
depends = depends + (d.getVar('__depends', True) or [])
for (fn, _) in depends:
dep_files.append(os.path.abspath(fn))
return " ".join(dep_files)

View File

@@ -21,7 +21,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
@@ -30,6 +31,8 @@ import itertools
from bb import methodpool
from bb.parse import logger
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
def eval(self, data):
for statement in self:
@@ -67,33 +70,6 @@ class ExportNode(AstNode):
def eval(self, data):
data.setVarFlag(self.var, "export", 1, op = 'exported')
class UnsetNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
self.var = var
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVar(self.var,**loginfo)
class UnsetFlagNode(AstNode):
def __init__(self, filename, lineno, var, flag):
AstNode.__init__(self, filename, lineno)
self.var = var
self.flag = flag
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVarFlag(self.var, self.flag, **loginfo)
class DataNode(AstNode):
"""
Various data related updates. For the sake of sanity
@@ -107,9 +83,9 @@ class DataNode(AstNode):
def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], expand=False, noweakdefault=True)
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
else:
return data.getVar(key, False, noweakdefault=True, parsing=True)
return data.getVar(key, noweakdefault=True)
def eval(self, data):
groupd = self.groupd
@@ -130,6 +106,7 @@ class DataNode(AstNode):
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
op = "immediate"
val = e.expand(groupd["value"], key + "[:=]")
elif "append" in groupd and groupd["append"] != None:
@@ -151,7 +128,7 @@ class DataNode(AstNode):
if 'flag' in groupd and groupd['flag'] != None:
flag = groupd['flag']
elif groupd["lazyques"]:
flag = "_defaultval"
flag = "defaultval"
loginfo['op'] = op
loginfo['detail'] = groupd["value"]
@@ -159,42 +136,29 @@ class DataNode(AstNode):
if flag:
data.setVarFlag(key, flag, val, **loginfo)
else:
data.setVar(key, val, parsing=True, **loginfo)
data.setVar(key, val, **loginfo)
class MethodNode(AstNode):
tr_tbl = str.maketrans('/.+-@%&', '_______')
tr_tbl = string.maketrans('/.+-@%', '______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
def __init__(self, filename, lineno, func_name, body):
AstNode.__init__(self, filename, lineno)
self.func_name = func_name
self.body = body
self.python = python
self.fakeroot = fakeroot
def eval(self, data):
text = '\n'.join(self.body)
funcname = self.func_name
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(MethodNode.tr_tbl)))
self.python = True
text = "def %s(d):\n" % (funcname) + text
bb.methodpool.insert_method(funcname, text, self.filename, self.lineno - len(self.body))
anonfuncs = data.getVar('__BBANONFUNCS', False) or []
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
if data.getVar(funcname, False):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.delVarFlag(funcname, 'python')
data.delVarFlag(funcname, 'fakeroot')
if self.python:
data.setVarFlag(funcname, "python", "1")
if self.fakeroot:
data.setVarFlag(funcname, "fakeroot", "1")
data.setVarFlag(funcname, "func", 1)
data.setVar(funcname, text, parsing=True)
data.setVarFlag(funcname, 'filename', self.filename)
data.setVarFlag(funcname, 'lineno', str(self.lineno - len(self.body)))
data.setVar(funcname, text)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text)
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
@@ -208,12 +172,31 @@ class PythonMethodNode(AstNode):
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
bb.methodpool.insert_method(self.modulename, text, self.filename, self.lineno - len(self.body) - 1)
bb.methodpool.insert_method(self.modulename, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text, parsing=True)
data.setVarFlag(self.function, 'filename', self.filename)
data.setVarFlag(self.function, 'lineno', str(self.lineno - len(self.body) - 1))
data.setVar(self.function, text)
class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
AstNode.__init__(self, filename, lineno)
self.key = key
self.m = m
def eval(self, data):
if data.getVar(self.key):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(self.key, 'python', None)
data.setVarFlag(self.key, 'fakeroot', None)
if self.m.group("py") is not None:
data.setVarFlag(self.key, "python", "1")
else:
data.delVarFlag(self.key, "python")
if self.m.group("fr") is not None:
data.setVarFlag(self.key, "fakeroot", "1")
else:
data.delVarFlag(self.key, "fakeroot")
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classname):
@@ -226,28 +209,26 @@ class ExportFuncsNode(AstNode):
for func in self.n:
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
if data.getVar(func) and not data.getVarFlag(func, 'export_func'):
continue
if data.getVar(func, False):
if data.getVar(func):
data.setVarFlag(func, 'python', None)
data.setVarFlag(func, 'func', None)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
if data.getVarFlag(calledfunc, flag):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(func, flag):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag))
if data.getVarFlag(calledfunc, "python", False):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
if data.getVarFlag(calledfunc, "python"):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n")
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n")
data.setVarFlag(func, 'export_func', '1')
class AddTaskNode(AstNode):
@@ -274,7 +255,7 @@ class BBHandlerNode(AstNode):
self.hs = fns.split()
def eval(self, data):
bbhands = data.getVar('__BBHANDLERS', False) or []
bbhands = data.getVar('__BBHANDLERS') or []
for h in self.hs:
bbhands.append(h)
data.setVarFlag(h, "handler", 1)
@@ -294,21 +275,18 @@ def handleInclude(statements, filename, lineno, m, force):
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleUnset(statements, filename, lineno, m):
statements.append(UnsetNode(filename, lineno, m.group(1)))
def handleUnsetFlag(statements, filename, lineno, m):
statements.append(UnsetFlagNode(filename, lineno, m.group(1), m.group(2)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
def handleMethod(statements, filename, lineno, func_name, body, python, fakeroot):
statements.append(MethodNode(filename, lineno, func_name, body, python, fakeroot))
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, modulename, body):
statements.append(PythonMethodNode(filename, lineno, funcname, modulename, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, filename, lineno, m, classname):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classname))
@@ -336,34 +314,30 @@ def handleInherit(statements, filename, lineno, m):
statements.append(InheritNode(filename, lineno, classes))
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
for var in d.getVar('__BBHANDLERS', False) or []:
all_handlers = {}
for var in d.getVar('__BBHANDLERS') or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.register(var, d.getVar(var), (d.getVarFlag(var, "eventmask", True) or "").split())
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
tasklist = d.getVar('__BBTASKS', False) or []
bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d)
bb.build.add_tasks(tasklist, d)
tasklist = d.getVar('__BBTASKS') or []
deltasklist = d.getVar('__BBDELTASKS') or []
bb.build.add_tasks(tasklist, deltasklist, d)
bb.parse.siggen.finalise(fn, d, variant)
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
bb.event.fire(bb.event.RecipeParsed(fn), d)
bb.event.set_handlers(saved_handlers)
def _create_variants(datastores, names, function, onlyfinalise):
def create_variant(name, orig_d, arg = None):
@@ -373,16 +347,37 @@ def _create_variants(datastores, names, function, onlyfinalise):
function(arg or name, new_d)
datastores[name] = new_d
for variant in list(datastores.keys()):
for variant, variant_d in datastores.items():
for name in names:
if not variant:
# Based on main recipe
create_variant(name, datastores[""])
create_variant(name, variant_d)
else:
create_variant("%s-%s" % (variant, name), datastores[variant], name)
create_variant("%s-%s" % (variant, name), variant_d, name)
def _expand_versions(versions):
def expand_one(version, start, end):
for i in xrange(start, end + 1):
ver = _bbversions_re.sub(str(i), version, 1)
yield ver
versions = iter(versions)
while True:
try:
version = next(versions)
except StopIteration:
break
range_ver = _bbversions_re.search(version)
if not range_ver:
yield version
else:
newversions = expand_one(version, int(range_ver.group("from")),
int(range_ver.group("to")))
versions = itertools.chain(newversions, versions)
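# Editor's sketch, not part of the original diff: a self-contained Python 3
# version of the BBVERSIONS range expansion performed by _expand_versions()
# above. Note that the expansion helper binds the version template as an
# argument, so later reassignments of the loop variable cannot affect it.
import itertools
import re

_range_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

def _expand_one_demo(version, start, end):
    for i in range(start, end + 1):
        yield _range_re.sub(str(i), version, 1)

def _expand_versions_demo(versions):
    versions = iter(versions)
    while True:
        try:
            version = next(versions)
        except StopIteration:
            break
        m = _range_re.search(version)
        if not m:
            yield version
        else:
            versions = itertools.chain(
                _expand_one_demo(version, int(m.group("from")), int(m.group("to"))),
                versions)

assert list(_expand_versions_demo(["1.[0-2]", "2.0"])) == ["1.0", "1.1", "1.2", "2.0"]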
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
@@ -397,7 +392,51 @@ def multi_finalize(fn, d):
d.setVar("__SKIPPED", e.args[0])
datastores = {"": safe_d}
extended = d.getVar("BBCLASSEXTEND") or ""
versions = (d.getVar("BBVERSIONS", True) or "").split()
if versions:
pv = orig_pv = d.getVar("PV", True)
baseversions = {}
def verfunc(ver, d, pv_d = None):
if pv_d is None:
pv_d = d
overrides = d.getVar("OVERRIDES", True).split(":")
pv_d.setVar("PV", ver)
overrides.append(ver)
bpv = baseversions.get(ver) or orig_pv
pv_d.setVar("BPV", bpv)
overrides.append(bpv)
d.setVar("OVERRIDES", ":".join(overrides))
versions = list(_expand_versions(versions))
for pos, version in enumerate(list(versions)):
try:
pv, bpv = version.split(":", 2)
except ValueError:
pass
else:
versions[pos] = pv
baseversions[pv] = bpv
if pv in versions and not baseversions.get(pv):
versions.remove(pv)
else:
pv = versions.pop()
# This is necessary because our existing main datastore
# has already been finalized with the old PV; we need one
# that's been finalized with the new PV.
d = bb.data.createCopy(safe_d)
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipRecipe as e:
d.setVar("__SKIPPED", e.args[0])
_create_variants(datastores, versions, verfunc, onlyfinalise)
extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with arguments, for e.g. multilib
# an example is as follows:
@@ -415,7 +454,7 @@ def multi_finalize(fn, d):
else:
extendedmap[ext] = ext
pn = d.getVar("PN")
pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
@@ -427,13 +466,17 @@ def multi_finalize(fn, d):
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc, onlyfinalise)
for variant in datastores.keys():
for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, datastores[variant], variant)
finalize(fn, variant_d, variant)
except bb.parse.SkipRecipe as e:
datastores[variant].setVar("__SKIPPED", e.args[0])
variant_d.setVar("__SKIPPED", e.args[0])
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d
return datastores

View File

@@ -25,14 +25,14 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
from bb import data
from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .. import resolve_file, ast, logger
from .ConfHandler import include, init
# For compatibility
@@ -47,26 +47,37 @@ __addhandler_regexp__ = re.compile( r"addhandler\s+(.+)" )
__def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )
__infunc__ = []
__infunc__ = ""
__inpython__ = False
__body__ = []
__classname__ = ""
cached_statements = {}
# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of the question.
# We will use IN_PYTHON_EOF as an indicator to just close the method.
#
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999
def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
files = d.expand(files).split()
for file in files:
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
dname = os.path.dirname(fn)
bbpath = "%s:%s" % (dname, d.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
@@ -79,7 +90,7 @@ def inherit(files, fn, lineno, d):
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements
@@ -87,20 +98,20 @@ def get_statements(filename, absolute_filename, base_name):
try:
return cached_statements[absolute_filename]
except KeyError:
with open(absolute_filename, 'r') as f:
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = f.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file = open(absolute_filename, 'r')
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = file.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file.close()
if __inpython__:
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
feeder(IN_PYTHON_EOF, "", filename, base_name, statements)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
@@ -109,7 +120,7 @@ def get_statements(filename, absolute_filename, base_name):
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__infunc__ = ""
__classname__ = ""
__residue__ = []
@@ -119,13 +130,13 @@ def handle(fn, d, include):
if ext == ".bbclass":
__classname__ = root
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
d.setVar('__inherit_cache', __inherit_cache)
if include != 0:
oldfile = d.getVar('FILE', False)
oldfile = d.getVar('FILE')
else:
oldfile = None
@@ -138,7 +149,7 @@ def handle(fn, d, include):
statements = get_statements(fn, abs_fn, base_name)
# DONE WITH PARSING... time to evaluate
if ext != ".bbclass" and abs_fn != oldfile:
if ext != ".bbclass":
d.setVar('FILE', abs_fn)
try:
@@ -148,26 +159,21 @@ def handle(fn, d, include):
if include == 0:
return { "" : d }
if __infunc__:
raise ParseError("Shell function %s is never closed" % __infunc__[0], __infunc__[1], __infunc__[2])
if __residue__:
raise ParseError("Leftover unparsed (incomplete?) data %s from %s" % __residue__, fn)
if ext != ".bbclass" and include == 0:
return ast.multi_finalize(fn, d)
if ext != ".bbclass" and oldfile and abs_fn != oldfile:
if oldfile:
d.setVar("FILE", oldfile)
return d
def feeder(lineno, s, fn, root, statements, eof=False):
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__[0], __body__, __infunc__[3], __infunc__[4])
__infunc__ = []
ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
__infunc__ = ""
__body__ = []
else:
__body__.append(s)
@@ -175,7 +181,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if __inpython__:
m = __python_func_regexp__.match(s)
if m and not eof:
if m and lineno != IN_PYTHON_EOF:
__body__.append(s)
return
else:
@@ -184,7 +190,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
__body__ = []
__inpython__ = False
if eof:
if lineno == IN_PYTHON_EOF:
return
if s and s[0] == '#':
@@ -211,7 +217,8 @@ def feeder(lineno, s, fn, root, statements, eof=False):
m = __func_start_regexp__.match(s)
if m:
__infunc__ = [m.group("func") or "__anonymous", fn, lineno, m.group("py") is not None, m.group("fr") is not None]
__infunc__ = m.group("func") or "__anonymous"
ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
return
m = __def_regexp__.match(s)

View File

@@ -24,16 +24,15 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import errno
import re
import os
import re, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger, handle
from bb.parse import ParseError, resolve_file, ast, logger
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s*)?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(?P<var>[a-zA-Z0-9\-~_+.${}/]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
@@ -56,12 +55,10 @@ __config_regexp__ = re.compile( r"""
""", re.X)
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
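# Editor's sketch, not part of the original diff: exercising the export regexp
# shown above (the newer variant, which also permits '~' in variable names);
# the variable name in the example is arbitrary.
import re

_export_re = re.compile(r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$")
_m = _export_re.match("export BB_ENV_EXTRAWHITE")
assert _m and _m.group(1) == "BB_ENV_EXTRAWHITE"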
def init(data):
topdir = data.getVar('TOPDIR', False)
topdir = data.getVar('TOPDIR')
if not topdir:
data.setVar('TOPDIR', os.getcwd())
@@ -69,43 +66,40 @@ def init(data):
def supports(fn, d):
return fn[-5:] == ".conf"
def include(parentfn, fn, lineno, data, error_out):
def include(oldfn, fn, lineno, data, error_out):
"""
error_out: A string indicating the verb (e.g. "include", "inherit") to be
used in a ParseError that will be raised if the file to be included could
not be included. Specify False to avoid raising an error in this case.
"""
if parentfn == fn: # prevent infinite recursion
if oldfn == fn: # prevent infinite recursion
return None
import bb
fn = data.expand(fn)
parentfn = data.expand(parentfn)
oldfn = data.expand(oldfn)
if not os.path.isabs(fn):
dname = os.path.dirname(parentfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH"))
dname = os.path.dirname(oldfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, fn, history=True)
if abs_fn and bb.parse.check_dependency(data, abs_fn):
logger.warning("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE')))
bb.warn("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE', True)))
for af in attempts:
bb.parse.mark_dependency(data, af)
if abs_fn:
fn = abs_fn
elif bb.parse.check_dependency(data, fn):
logger.warning("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE')))
bb.warn("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))
from bb.parse import handle
try:
bb.parse.handle(fn, data, True)
except (IOError, OSError) as exc:
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
else:
raise ParseError("Error parsing %s: %s" % (fn, exc.strerror), parentfn, lineno)
ret = handle(fn, data, True)
except (IOError, OSError):
if error_out:
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
bb.parse.mark_dependency(data, fn)
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
@@ -121,7 +115,7 @@ def handle(fn, data, include):
if include == 0:
oldfile = None
else:
oldfile = data.getVar('FILE', False)
oldfile = data.getVar('FILE')
abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')
@@ -187,16 +181,6 @@ def feeder(lineno, s, fn, statements):
ast.handleExport(statements, fn, lineno, m)
return
m = __unset_regexp__.match(s)
if m:
ast.handleUnset(statements, fn, lineno, m)
return
m = __unset_flag_regexp__.match(s)
if m:
ast.handleUnsetFlag(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list

View File

@@ -28,7 +28,11 @@ import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import sqlite3
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -88,9 +92,9 @@ class SQLTable(collections.MutableMapping):
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
if not isinstance(key, str):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, str):
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')
data = self._execute("SELECT * from %s where key=?;" %
@@ -174,7 +178,7 @@ class PersistData(object):
"""
Return a list of key + value pairs for a domain
"""
return list(self.data[domain].items())
return self.data[domain].items()
def getValue(self, domain, key):
"""
@@ -197,14 +201,13 @@ class PersistData(object):
def connect(database):
connection = sqlite3.connect(database, timeout=5, isolation_level=None)
connection.execute("pragma synchronous = off;")
connection.text_factory = str
return connection
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)

View File

@@ -17,7 +17,7 @@ class CmdError(RuntimeError):
self.msg = msg
def __str__(self):
if not isinstance(self.command, str):
if not isinstance(self.command, basestring):
cmd = subprocess.list2cmdline(self.command)
else:
cmd = self.command
@@ -64,7 +64,7 @@ class Popen(subprocess.Popen):
options.update(kwargs)
subprocess.Popen.__init__(self, *args, **options)
def _logged_communicate(pipe, log, input, extrafiles):
def _logged_communicate(pipe, log, input):
if pipe.stdin:
if input is not None:
pipe.stdin.write(input)
@@ -79,26 +79,10 @@ def _logged_communicate(pipe, log, input, extrafiles):
if pipe.stderr is not None:
bb.utils.nonblockingfd(pipe.stderr.fileno())
rin.append(pipe.stderr)
for fobj, _ in extrafiles:
bb.utils.nonblockingfd(fobj.fileno())
rin.append(fobj)
def readextras(selected):
for fobj, func in extrafiles:
if fobj in selected:
try:
data = fobj.read()
except IOError as err:
if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
data = None
if data is not None:
func(data)
try:
while pipe.poll() is None:
rlist = rin
stdoutbuf = b""
stderrbuf = b""
try:
r,w,e = select.select (rlist, [], [], 1)
except OSError as e:
@@ -106,48 +90,29 @@ def _logged_communicate(pipe, log, input, extrafiles):
raise
if pipe.stdout in r:
data = stdoutbuf + pipe.stdout.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
outdata.append(data)
log.write(data)
stdoutbuf = b""
except UnicodeDecodeError:
stdoutbuf = data
data = pipe.stdout.read()
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr in r:
data = stderrbuf + pipe.stderr.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
errdata.append(data)
log.write(data)
stderrbuf = b""
except UnicodeDecodeError:
stderrbuf = data
readextras(r)
data = pipe.stderr.read()
if data is not None:
errdata.append(data)
log.write(data)
finally:
log.flush()
readextras([fobj for fobj, _ in extrafiles])
if pipe.stdout is not None:
pipe.stdout.close()
if pipe.stderr is not None:
pipe.stderr.close()
return ''.join(outdata), ''.join(errdata)
def run(cmd, input=None, log=None, extrafiles=None, **options):
def run(cmd, input=None, log=None, **options):
"""Convenience function to run a command and return its output, raising an
exception when the command fails"""
if not extrafiles:
extrafiles = []
if isinstance(cmd, str) and not "shell" in options:
if isinstance(cmd, basestring) and not "shell" in options:
options["shell"] = True
try:
@@ -159,13 +124,9 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
raise CmdError(cmd, exc)
if log:
stdout, stderr = _logged_communicate(pipe, log, input, extrafiles)
stdout, stderr = _logged_communicate(pipe, log, input)
else:
stdout, stderr = pipe.communicate(input)
if not stdout is None:
stdout = stdout.decode("utf-8")
if not stderr is None:
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
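# Editor's sketch, not part of the original diff: the partial-UTF-8 buffering
# pattern used by _logged_communicate() above, where bytes read from a pipe
# are held back until they decode cleanly.
_buf = b""
_out = []
for _chunk in [b"caf", b"\xc3", b"\xa9\n"]:  # UTF-8 for "cafe"+accent+"\n", split mid-codepoint
    _data = _buf + _chunk
    try:
        _out.append(_data.decode("utf-8"))
        _buf = b""
    except UnicodeDecodeError:
        _buf = _data
assert "".join(_out) == "caf\u00e9\n"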

View File

@@ -1,276 +0,0 @@
"""
BitBake progress handling code
"""
# Copyright (C) 2016 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import re
import time
import inspect
import bb.event
import bb.build
class ProgressHandler(object):
"""
Base class that can pretend to be a file object well enough to be
used to build objects to intercept console output and determine the
progress of some operation.
"""
def __init__(self, d, outfile=None):
self._progress = 0
self._data = d
self._lastevent = 0
if outfile:
self._outfile = outfile
else:
self._outfile = sys.stdout
def _fire_progress(self, taskprogress, rate=None):
"""Internal function to fire the progress event"""
bb.event.fire(bb.build.TaskProgress(taskprogress, rate), self._data)
def write(self, string):
self._outfile.write(string)
def flush(self):
self._outfile.flush()
def update(self, progress, rate=None):
ts = time.time()
if progress > 100:
progress = 100
if progress != self._progress or self._lastevent + 1 < ts:
self._fire_progress(progress, rate)
self._lastevent = ts
self._progress = progress
class LineFilterProgressHandler(ProgressHandler):
"""
A ProgressHandler variant that provides the ability to filter out
lines that contain progress information. Additionally, it filters
out anything on a line that precedes the last carriage return. This
can be used to keep the logs clean of output that we've only enabled
for progress reporting, assuming that the filtering can be done on a
per-line basis.
"""
def __init__(self, d, outfile=None):
self._linebuffer = ''
super(LineFilterProgressHandler, self).__init__(d, outfile)
def write(self, string):
self._linebuffer += string
while True:
breakpos = self._linebuffer.find('\n') + 1
if breakpos == 0:
break
line = self._linebuffer[:breakpos]
self._linebuffer = self._linebuffer[breakpos:]
# Drop any carriage returns and anything that precedes them
lbreakpos = line.rfind('\r') + 1
if lbreakpos:
line = line[lbreakpos:]
if self.writeline(line):
super(LineFilterProgressHandler, self).write(line)
def writeline(self, line):
return True
class BasicProgressHandler(ProgressHandler):
def __init__(self, d, regex=r'(\d+)%', outfile=None):
super(BasicProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
percs = self._regex.findall(string)
if percs:
progress = int(percs[-1])
self.update(progress)
super(BasicProgressHandler, self).write(string)
class OutOfProgressHandler(ProgressHandler):
def __init__(self, d, regex, outfile=None):
super(OutOfProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
nums = self._regex.findall(string)
if nums:
progress = (float(nums[-1][0]) / float(nums[-1][1])) * 100
self.update(progress)
super(OutOfProgressHandler, self).write(string)
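# Editor's sketch, not part of the original diff: the "N out of M" progress
# parsing performed by OutOfProgressHandler, with a hypothetical regex and
# sample output line.
import re

_oop_re = re.compile(r"(\d+) of (\d+)")
_nums = _oop_re.findall("copied 3 of 8 files")
assert _nums and (float(_nums[-1][0]) / float(_nums[-1][1])) * 100 == 37.5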
class MultiStageProgressReporter(object):
"""
Class which allows reporting progress without the caller
having to know where they are in the overall sequence. Useful
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level and calls
to next_stage() and update() made wherever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""
Initialise the progress reporter.
Parameters:
* d: the datastore (needed for firing the events)
* stage_weights: a list of weight values, one for each stage.
The value is scaled internally so you only need to specify
values relative to other values in the list, so if there
are two stages and the first takes 2s and the second takes
10s you would specify [2, 10] (or [1, 5], it doesn't matter).
* debug: specify True (and ensure you call finish() at the end)
in order to show a printout of the calculated stage weights
based on timing each stage. Use this to determine what the
weights should be when you're not sure.
"""
self._data = d
total = sum(stage_weights)
self._stage_weights = [float(x)/total for x in stage_weights]
self._stage = -1
self._base_progress = 0
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
self._debug = debug
self._finished = False
if self._debug:
self._last_time = time.time()
self._stage_times = []
self._stage_total = None
self._callers = []
def _fire_progress(self, taskprogress):
bb.event.fire(bb.build.TaskProgress(taskprogress), self._data)
def next_stage(self, stage_total=None):
"""
Move to the next stage.
Parameters:
* stage_total: optional total for progress within the stage,
see update() for details
NOTE: you need to call this before the first stage.
"""
self._stage += 1
self._stage_total = stage_total
if self._stage == 0:
# First stage
if self._debug:
self._last_time = time.time()
else:
if self._stage < len(self._stage_weights):
self._base_progress = sum(self._stage_weights[:self._stage]) * 100
if self._debug:
currtime = time.time()
self._stage_times.append(currtime - self._last_time)
self._last_time = currtime
self._callers.append(inspect.getouterframes(inspect.currentframe())[1])
elif not self._debug:
bb.warn('ProgressReporter: current stage beyond declared number of stages')
self._base_progress = 100
self._fire_progress(self._base_progress)
def update(self, stage_progress):
"""
Update progress within the current stage.
Parameters:
* stage_progress: progress value within the stage. If stage_total
was specified when next_stage() was last called, then this
value is considered to be out of stage_total, otherwise it should
be a percentage value from 0 to 100.
"""
if self._stage_total:
stage_progress = (float(stage_progress) / self._stage_total) * 100
if self._stage < 0:
bb.warn('ProgressReporter: update called before first call to next_stage()')
elif self._stage < len(self._stage_weights):
progress = self._base_progress + (stage_progress * self._stage_weights[self._stage])
else:
progress = self._base_progress
if progress > 100:
progress = 100
self._fire_progress(progress)
def finish(self):
if self._finished:
return
self._finished = True
if self._debug:
import math
self._stage_times.append(time.time() - self._last_time)
mintime = max(min(self._stage_times), 0.01)
self._callers.append(None)
stage_weights = [int(math.ceil(x / mintime)) for x in self._stage_times]
bb.warn('Stage weights: %s' % stage_weights)
out = []
for stage_weight, caller in zip(stage_weights, self._callers):
if caller:
out.append('Up to %s:%d: %d' % (caller[1], caller[2], stage_weight))
else:
out.append('Up to finish: %d' % stage_weight)
bb.warn('Stage times:\n %s' % '\n '.join(out))
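# Editor's sketch, not part of the original diff: the stage-weighting maths
# used by MultiStageProgressReporter above, for weights [2, 10] at the point
# where the second stage is 50% complete.
_stage_weights = [2, 10]
_total = sum(_stage_weights)
_weights = [float(x) / _total for x in _stage_weights]
_stage = 1                                    # zero-based index of current stage
_base = sum(_weights[:_stage]) * 100          # progress banked from earlier stages
_overall = _base + 50 * _weights[_stage]      # 50% through the current stage
assert round(_overall, 2) == 58.33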
class MultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
Version of MultiStageProgressReporter intended for use with
standalone processes (such as preparing the runqueue)
"""
def __init__(self, d, processname, stage_weights, debug=False):
self._processname = processname
self._started = False
MultiStageProgressReporter.__init__(self, d, stage_weights, debug)
def start(self):
if not self._started:
bb.event.fire(bb.event.ProcessStarted(self._processname, 100), self._data)
self._started = True
def _fire_progress(self, taskprogress):
if taskprogress == 0:
self.start()
return
bb.event.fire(bb.event.ProcessProgress(self._processname, taskprogress), self._data)
def finish(self):
MultiStageProgressReporter.finish(self)
bb.event.fire(bb.event.ProcessFinished(self._processname), self._data)
class DummyMultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
MultiStageProcessProgressReporter that takes the calls and does nothing
with them (to avoid a bunch of "if progress_reporter:" checks)
"""
def __init__(self):
MultiStageProcessProgressReporter.__init__(self, "", None, [])
def _fire_progress(self, taskprogress, rate=None):
pass
def start(self):
pass
def next_stage(self, stage_total=None):
pass
def update(self, stage_progress):
pass
def finish(self):
pass

View File

@@ -48,6 +48,7 @@ def findProviders(cfgData, dataCache, pkg_pn = None):
# Need to ensure data store is expanded
localdata = data.createCopy(cfgData)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
preferred_versions = {}
@@ -120,14 +121,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_file = None
preferred_ver = None
# pn can contain '_', e.g. gcc-cross-x86_64, but an override cannot;
# hence we do this manually rather than use OVERRIDES
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION")
localdata = data.createCopy(cfgData)
localdata.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn))
bb.data.update_data(localdata)
preferred_v = localdata.getVar('PREFERRED_VERSION', True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
@@ -225,7 +223,7 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
def _filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
@@ -244,7 +242,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, list(pkg_pn.keys()))
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
# First add PREFERRED_VERSIONS
for pn in pkg_pn:
@@ -282,13 +280,13 @@ def _filterProviders(providers, item, cfgData, dataCache):
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "normal" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item, True)
if prefervar:
dataCache.preferred[item] = prefervar
@@ -310,56 +308,38 @@ def filterProviders(providers, item, cfgData, dataCache):
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "runtime" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
# First try and match any PREFERRED_RPROVIDER entry
prefervar = cfgData.getVar('PREFERRED_RPROVIDER_%s' % item)
foundUnique = False
if prefervar:
for p in eligible:
pn = dataCache.pkg_fn[p]
if prefervar == pn:
logger.verbose("selecting %s to satisfy %s due to PREFERRED_RPROVIDER", pn, item)
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
numberPreferred = 1
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
# If we didn't find an RPROVIDER entry, try and infer the provider from PREFERRED_PROVIDER entries
# by looking through the provides of each eligible recipe and seeing if a PREFERRED_PROVIDER was set.
# This is most useful for virtual/ entries rather than having an RPROVIDER per entry.
if not foundUnique:
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
numberPreferred = len(preferred)
numberPreferred = len(preferred)
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)
@@ -399,32 +379,3 @@ def getRuntimeProviders(dataCache, rdepend):
logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
def buildWorldTargetList(dataCache, task=None):
"""
Build package list for "bitbake world"
"""
if dataCache.world_target:
return
logger.debug(1, "collating packages for \"world\"")
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
if task and task not in dataCache.task_deps[f]['tasks']:
logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
terminal = False
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in dataCache.providers[p]:
if dataCache.pkg_fn[pf] != pn:
logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:
dataCache.world_target.add(pn)

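As an aside, the version-parsing regex in findPreferredProvider() above splits an optional epoch prefix from the rest of the version string; a quick standalone illustration (the PREFERRED_VERSION value here is hypothetical):

import re

preferred_v = "2:1.2.3_r0"   # hypothetical PREFERRED_VERSION value
m = re.match(r'(\d+:)*(.*)(_.*)*', preferred_v)
# group(1) holds the epoch prefix "2:"; the greedy group(2) takes the
# remainder "1.2.3_r0", so group(3) ends up None with this pattern.
print(m.group(1), m.group(2), m.group(3))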

@@ -527,7 +527,7 @@ def utility_sed(name, args, interp, env, stdin, stdout, stderr, debugflags):
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Scan pattern arguments and append a space if necessary
for i in range(len(args)):
for i in xrange(len(args)):
if not RE_SED.search(args[i]):
continue
args[i] = args[i] + ' '


@@ -474,7 +474,7 @@ class Environment:
"""
# Save and remove previous arguments
prevargs = []
for i in range(int(self._env['#'])):
for i in xrange(int(self._env['#'])):
i = str(i+1)
prevargs.append(self._env[i])
del self._env[i]
@@ -488,7 +488,7 @@ class Environment:
return prevargs
def get_positional_args(self):
return [self._env[str(i+1)] for i in range(int(self._env['#']))]
return [self._env[str(i+1)] for i in xrange(int(self._env['#']))]
def get_variables(self):
return dict(self._env)

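The range()/xrange() swaps in the two hunks above are Python 2 backports: xrange() is the lazy variant there, while Python 3's range() is already lazy. A common compatibility shim (a sketch, not code from this tree) keeps one spelling working on both interpreters:

try:
    xrange
except NameError:   # Python 3: range is already lazy, so alias it
    xrange = range

for i in xrange(3):
    print(i)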

@@ -20,7 +20,7 @@ except NameError:
from Set import Set as set
from ply import lex
from bb.pysh.sherrors import *
from sherrors import *
class NeedMore(Exception):
pass


@@ -10,11 +10,11 @@
import os.path
import sys
import bb.pysh.pyshlex as pyshlex
import pyshlex
tokens = pyshlex.tokens
from ply import yacc
import bb.pysh.sherrors as sherrors
import sherrors
class IORedirect:
def __init__(self, op, filename, io_number=None):


@@ -1,116 +0,0 @@
"""
BitBake 'remotedata' module
Provides support for using a datastore from the bitbake client
"""
# Copyright (C) 2016 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import bb.data
class RemoteDatastores:
"""Used on the server side to manage references to server-side datastores"""
def __init__(self, cooker):
self.cooker = cooker
self.datastores = {}
self.locked = []
self.nextindex = 1
def __len__(self):
return len(self.datastores)
def __getitem__(self, key):
if key is None:
return self.cooker.data
else:
return self.datastores[key]
def items(self):
return self.datastores.items()
def store(self, d, locked=False):
"""
Put a datastore into the collection. If locked=True then the datastore
is understood to be managed externally and cannot be released by calling
release().
"""
idx = self.nextindex
self.datastores[idx] = d
if locked:
self.locked.append(idx)
self.nextindex += 1
return idx
def check_store(self, d, locked=False):
"""
Put a datastore into the collection if it's not already in there;
in either case return the index
"""
for key, val in self.datastores.items():
if val is d:
idx = key
break
else:
idx = self.store(d, locked)
return idx
def release(self, idx):
"""Discard a datastore in the collection"""
if idx in self.locked:
raise Exception('Tried to release locked datastore %d' % idx)
del self.datastores[idx]
def receive_datastore(self, remote_data):
"""Receive a datastore object sent from the client (as prepared by transmit_datastore())"""
dct = dict(remote_data)
d = bb.data_smart.DataSmart()
d.dict = dct
while True:
if '_remote_data' in dct:
dsindex = dct['_remote_data']['_content']
del dct['_remote_data']
if dsindex is None:
dct['_data'] = self.cooker.data.dict
else:
dct['_data'] = self.datastores[dsindex].dict
break
elif '_data' in dct:
idct = dict(dct['_data'])
dct['_data'] = idct
dct = idct
else:
break
return d
@staticmethod
def transmit_datastore(d):
"""Prepare a datastore object for sending over IPC from the client end"""
# FIXME content might be a dict, need to turn that into a list as well
def copy_dicts(dct):
if '_remote_data' in dct:
dsindex = dct['_remote_data']['_content'].dsindex
newdct = dct.copy()
newdct['_remote_data'] = {'_content': dsindex}
return list(newdct.items())
elif '_data' in dct:
newdct = dct.copy()
newdata = copy_dicts(dct['_data'])
if newdata:
newdct['_data'] = newdata
return list(newdct.items())
return None
main_dict = copy_dicts(d.dict)
return main_dict

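To make the flattening logic above concrete, a hedged round-trip sketch: it assumes a RemoteDatastores instance named remotedata obtained from a running cooker, and the variable names are hypothetical.

import bb.data_smart
from bb.remotedata import RemoteDatastores

parent = bb.data_smart.DataSmart()
parent.setVar("FOO", "bar")
child = parent.createCopy()      # gives child.dict a '_data' parent link

# Client side: flatten the dict chain into nested lists of items.
payload = RemoteDatastores.transmit_datastore(child)
# Server side: rebuild the chain (remotedata is a RemoteDatastores
# instance, assumed to exist here).
rebuilt = remotedata.receive_datastore(payload)
assert rebuilt.getVar("FOO", False) == "bar"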
(File diff suppressed because it is too large.)


@@ -63,9 +63,6 @@ class BitBakeBaseServerConnection():
def terminate(self):
pass
def setupEventQueue(self):
pass
""" BitBakeBaseServer class is the common ancestor to all Bitbake servers


@@ -30,7 +30,7 @@ import signal
import sys
import time
import select
from queue import Empty
from Queue import Empty
from multiprocessing import Event, Process, util, Queue, Pipe, queues, Manager
from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
@@ -53,12 +53,10 @@ class ServerCommunicator():
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
for idx in range(0,4): # 0, 1, 2, 3
if self.connection.poll(5):
return self.connection.recv()
else:
bb.warn("Timeout while attempting to communicate with bitbake server")
bb.fatal("Gave up; Too many tries: timeout while attempting to communicate with bitbake server")
if self.connection.poll(20):
return self.connection.recv()
else:
bb.fatal("Timeout while attempting to communicate with bitbake server")
except KeyboardInterrupt:
pass
@@ -92,8 +90,6 @@ class ProcessServer(Process, BaseImplServer):
self.event = EventAdapter(event_queue)
self.featurelist = featurelist
self.quit = False
self.heartbeat_seconds = 1 # default, BB_HEARTBEAT_EVENT will be checked once we have a datastore.
self.next_heartbeat = time.time()
self.quitin, self.quitout = Pipe()
self.event_handle = multiprocessing.Value("i")
@@ -101,16 +97,8 @@ class ProcessServer(Process, BaseImplServer):
def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle.value = bb.event.register_UIHhandler(self, True)
self.event_handle.value = bb.event.register_UIHhandler(self)
heartbeat_event = self.cooker.data.getVar('BB_HEARTBEAT_EVENT')
if heartbeat_event:
try:
self.heartbeat_seconds = float(heartbeat_event)
except:
# Throwing an exception here causes bitbake to hang.
# Just warn about the invalid setting and continue
bb.warn('Ignoring invalid BB_HEARTBEAT_EVENT=%s, must be a float specifying seconds.' % heartbeat_event)
bb.cooker.server_main(self.cooker, self.main)
def main(self):
@@ -118,7 +106,6 @@ class ProcessServer(Process, BaseImplServer):
# the UI and communicated to us
self.quitin.close()
signal.signal(signal.SIGINT, signal.SIG_IGN)
bb.utils.set_process_name("Cooker")
while not self.quit:
try:
if self.command_channel.poll():
@@ -127,12 +114,8 @@ class ProcessServer(Process, BaseImplServer):
if self.quitout.poll():
self.quitout.recv()
self.quit = True
try:
self.runCommand(["stateForceShutdown"])
except:
pass
self.idle_commands(.1, [self.command_channel, self.quitout])
self.idle_commands(.1, [self.event_queue._reader, self.command_channel, self.quitout])
except Exception:
logger.exception('Running command %s', command)
@@ -140,14 +123,11 @@ class ProcessServer(Process, BaseImplServer):
bb.event.unregister_UIHhandler(self.event_handle.value)
self.command_channel.close()
self.cooker.shutdown(True)
self.quitout.close()
def idle_commands(self, delay, fds=None):
def idle_commands(self, delay, fds = []):
nextsleep = delay
if not fds:
fds = []
for function, data in list(self._idlefuns.items()):
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
if retval is False:
@@ -155,7 +135,7 @@ class ProcessServer(Process, BaseImplServer):
nextsleep = None
elif retval is True:
nextsleep = None
elif isinstance(retval, float) and nextsleep:
elif isinstance(retval, float):
if (retval < nextsleep):
nextsleep = retval
elif nextsleep is None:
@@ -164,27 +144,11 @@ class ProcessServer(Process, BaseImplServer):
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
except Exception:
logger.exception('Running idle function')
del self._idlefuns[function]
self.quit = True
# Create new heartbeat event?
now = time.time()
if now >= self.next_heartbeat:
# We might have missed heartbeats. Just trigger once in
# that case and continue after the usual delay.
self.next_heartbeat += self.heartbeat_seconds
if self.next_heartbeat <= now:
self.next_heartbeat = now + self.heartbeat_seconds
heartbeat = bb.event.HeartbeatEvent(now)
bb.event.fire(heartbeat, self.cooker.data)
if nextsleep and now + nextsleep > self.next_heartbeat:
# Shorten timeout so that we wake up in time for
# the heartbeat.
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
select.select(fds,[],[],nextsleep)
@@ -205,16 +169,12 @@ class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
self.event_queue = event_queue
self.connection = ServerCommunicator(self.ui_channel, self.procserver.event_handle, self.procserver)
self.events = self.event_queue
self.terminated = False
def sigterm_terminate(self):
bb.error("UI received SIGTERM")
self.terminate()
def terminate(self):
if self.terminated:
return
self.terminated = True
def flushevents():
while True:
try:
@@ -224,6 +184,7 @@ class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
if isinstance(event, logging.LogRecord):
logger.handle(event)
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.procserver.stop()
while self.procserver.is_alive():
@@ -233,26 +194,23 @@ class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
self.ui_channel.close()
self.event_queue.close()
self.event_queue.setexit()
# XXX: Explicitly call close on _writer to avoid
# fd leakage, because it isn't called by Queue.close()
self.event_queue._writer.close()
# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
def __init__(self, maxsize):
multiprocessing.queues.Queue.__init__(self, maxsize, ctx=multiprocessing.get_context())
multiprocessing.queues.Queue.__init__(self, maxsize)
self.exit = False
bb.utils.set_process_name("ProcessEQueue")
def setexit(self):
self.exit = True
def waitEvent(self, timeout):
if self.exit:
return self.getEvent()
sys.exit(1)
try:
if not self.server.is_alive():
return self.getEvent()
self.setexit()
return None
return self.get(True, timeout)
except Empty:
return None
@@ -261,14 +219,14 @@ class ProcessEventQueue(multiprocessing.queues.Queue):
try:
if not self.server.is_alive():
self.setexit()
return None
return self.get(False)
except Empty:
if self.exit:
sys.exit(1)
return None
class BitBakeServer(BitBakeBaseServer):
def initServer(self, single_use=True):
def initServer(self):
# establish communication channels. We use bidirectional pipes for
# ui <--> server command/response pairs
# and a queue for server -> ui event notifications

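One detail worth calling out from the heartbeat hunk above: missed beats are collapsed into a single event, and the idle sleep is clipped so the next beat is not overslept. A standalone sketch of just that calculation (function and parameter names are invented for illustration):

import time

def schedule_heartbeat(next_heartbeat, heartbeat_seconds, nextsleep):
    now = time.time()
    fire = False
    if now >= next_heartbeat:
        fire = True                            # one event, even after misses
        next_heartbeat += heartbeat_seconds
        if next_heartbeat <= now:              # fell behind: re-anchor to now
            next_heartbeat = now + heartbeat_seconds
    if nextsleep and now + nextsleep > next_heartbeat:
        nextsleep = next_heartbeat - now       # wake in time for the beat
    return fire, next_heartbeat, nextsleep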

@@ -31,33 +31,31 @@
in the server's main loop.
"""
import os
import sys
import hashlib
import time
import socket
import signal
import threading
import pickle
import inspect
import select
import http.client
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import bb
import xmlrpclib, sys
from bb import daemonize
from bb.ui import uievent
from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
import hashlib, time
import socket
import os, signal
import threading
try:
import cPickle as pickle
except ImportError:
import pickle
DEBUG = False
class BBTransport(xmlrpc.client.Transport):
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select, httplib
from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
class BBTransport(xmlrpclib.Transport):
def __init__(self, timeout):
self.timeout = timeout
self.connection_token = None
xmlrpc.client.Transport.__init__(self)
xmlrpclib.Transport.__init__(self)
# Modified from default to pass timeout to HTTPConnection
def make_connection(self, host):
@@ -69,7 +67,7 @@ class BBTransport(xmlrpc.client.Transport):
# create a HTTP connection object from a host descriptor
chost, self._extra_headers, x509 = self.get_host_info(host)
#store the host argument along with the connection object
self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
self._connection = host, httplib.HTTPConnection(chost, timeout=self.timeout)
return self._connection[1]
def set_connection_token(self, token):
@@ -78,30 +76,13 @@ class BBTransport(xmlrpc.client.Transport):
def send_content(self, h, body):
if self.connection_token:
h.putheader("Bitbake-token", self.connection_token)
xmlrpc.client.Transport.send_content(self, h, body)
xmlrpclib.Transport.send_content(self, h, body)
def _create_server(host, port, timeout = 60):
t = BBTransport(timeout)
s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
s = xmlrpclib.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True)
return s, t
def check_connection(remote, timeout):
try:
host, port = remote.split(":")
port = int(port)
except Exception as e:
bb.warn("Failed to read remote definition (%s)" % str(e))
raise e
server, _transport = _create_server(host, port, timeout)
try:
ret, err = server.runCommand(['getVariable', 'TOPDIR'])
if err or not ret:
return False
except ConnectionError:
return False
return True
class BitBakeServerCommands():
def __init__(self, server):
@@ -116,10 +97,10 @@ class BitBakeServerCommands():
# we don't allow connections if the cooker is running
if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.cooker.state)
return None
self.event_handle = bb.event.register_UIHhandler(s, True)
return self.event_handle, 'OK'
self.event_handle = bb.event.register_UIHhandler(s)
return self.event_handle
def unregisterEventHandler(self, handlerNum):
"""
@@ -147,7 +128,7 @@ class BitBakeServerCommands():
def addClient(self):
if self.has_client:
return None
token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
token = hashlib.md5(str(time.time())).hexdigest()
self.server.set_connection_token(token)
self.has_client = True
return token
@@ -190,14 +171,14 @@ class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
self.send_header("Content-type", "text/plain")
self.send_header("Content-length", str(len(response)))
self.end_headers()
self.wfile.write(bytes(response, 'utf-8'))
self.wfile.write(response)
class XMLRPCProxyServer(BaseImplServer):
""" not a real working server, but a stub for a proxy server connection
"""
def __init__(self, host, port, use_builtin_types=True):
def __init__(self, host, port):
self.host = host
self.port = port
@@ -205,12 +186,13 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface, single_use=False, idle_timeout=0):
def __init__(self, interface):
"""
Constructor
"""
BaseImplServer.__init__(self)
self.single_use = single_use
if (interface[1] == 0): # anonymous port, not getting reused
self.single_use = True
# Use auto port configuration
if (interface[1] == -1):
interface = (interface[0], 0)
@@ -223,10 +205,7 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
self.commands = BitBakeServerCommands(self)
self.autoregister_all_functions(self.commands, "")
self.interface = interface
self.time = time.time()
self.idle_timeout = idle_timeout
if idle_timeout:
self.register_idle_function(self.handle_idle_timeout, self)
self.single_use = False
def addcooker(self, cooker):
BaseImplServer.addcooker(self, cooker)
@@ -242,12 +221,6 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
if name.startswith(prefix):
self.register_function(method, name[len(prefix):])
def handle_idle_timeout(self, server, data, abort):
if not abort:
if time.time() - server.time > server.idle_timeout:
server.quit = True
print("Server idle timeout expired")
return []
def serve_forever(self):
# Start the actual XMLRPC server
@@ -261,8 +234,7 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
while not self.quit:
fds = [self]
nextsleep = 0.1
for function, data in list(self._idlefuns.items()):
retval = None
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
if retval is False:
@@ -279,9 +251,6 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
except:
import traceback
traceback.print_exc()
if retval == None:
# the function execution failed; delete it
del self._idlefuns[function]
pass
socktimeout = self.socket.gettimeout() or nextsleep
@@ -290,15 +259,13 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
try:
fd_sets = select.select(fds, [], [], socktimeout)
if fd_sets[0] and self in fd_sets[0]:
if self.idle_timeout:
self.time = time.time()
self._handle_request_noblock()
except IOError:
# we ignore interrupted calls
pass
# Tell idle functions we're exiting
for function, data in list(self._idlefuns.items()):
for function, data in self._idlefuns.items():
try:
retval = function(self, data, True)
except:
@@ -310,15 +277,12 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
self.connection_token = token
class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = None):
def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = []):
self.connection, self.transport = _create_server(serverImpl.host, serverImpl.port)
self.clientinfo = clientinfo
self.serverImpl = serverImpl
self.observer_only = observer_only
if featureset:
self.featureset = featureset
else:
self.featureset = []
self.featureset = featureset
def connect(self, token = None):
if token is None:
@@ -331,20 +295,18 @@ class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
return None
self.transport.set_connection_token(token)
return self
def setupEventQueue(self):
self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
for event in bb.event.ui_queue:
self.events.queue_event(event)
_, error = self.connection.runCommand(["setFeatures", self.featureset])
if error:
# disconnect the client, we can't make the setFeature work
self.connection.removeClient()
# no need to log it here, the error shall be sent to the client
raise BaseException(error)
return self
def removeClient(self):
if not self.observer_only:
self.connection.removeClient()
@@ -363,10 +325,9 @@ class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
pass
class BitBakeServer(BitBakeBaseServer):
def initServer(self, interface = ("localhost", 0),
single_use = False, idle_timeout=0):
def initServer(self, interface = ("localhost", 0)):
self.interface = interface
self.serverImpl = XMLRPCServer(interface, single_use, idle_timeout)
self.serverImpl = XMLRPCServer(interface)
def detach(self):
daemonize.createDaemon(self.serverImpl.serve_forever, "bitbake-cookerdaemon.log")
@@ -411,7 +372,7 @@ class BitBakeXMLRPCClient(BitBakeBaseServer):
bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
raise e
try:
self.serverImpl = XMLRPCProxyServer(host, port, use_builtin_types=True)
self.serverImpl = XMLRPCProxyServer(host, port)
self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, (ip, 0), self.observer_only, featureset)
return self.connection.connect(self.token)
except Exception as e:

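For reference, the check_connection() helper above boils down to the following probe; the host, port and the existence of a running server are assumptions here, and _create_server is the module-level helper shown in the diff.

# Hedged sketch of probing a bitbake XMLRPC server, as check_connection()
# does above. Assumes a server is listening on localhost:12345.
server, _transport = _create_server("localhost", 12345, timeout=60)
ret, err = server.runCommand(['getVariable', 'TOPDIR'])
usable = bool(ret) and not err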

@@ -3,19 +3,21 @@ import logging
import os
import re
import tempfile
import pickle
import bb.data
import difflib
import simplediff
from bb.checksum import FileChecksumCache
logger = logging.getLogger('BitBake.SigGen')
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def init(d):
siggens = [obj for obj in globals().values()
siggens = [obj for obj in globals().itervalues()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
desired = d.getVar("BB_SIGNATURE_HANDLER") or "noop"
desired = d.getVar("BB_SIGNATURE_HANDLER", True) or "noop"
for sg in siggens:
if desired == sg.name:
return sg(d)
@@ -32,11 +34,9 @@ class SignatureGenerator(object):
name = "noop"
def __init__(self, data):
self.basehash = {}
self.taskhash = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
def finalise(self, fn, d, varient):
return
@@ -44,8 +44,7 @@ class SignatureGenerator(object):
def get_taskhash(self, fn, task, deps, dataCache):
return "0"
def writeout_file_checksum_cache(self):
"""Write/update the file checksum cache onto disk"""
def set_taskdata(self, hashes, deps, checksum):
return
def stampfile(self, stampbase, file_name, taskname, extrainfo):
@@ -64,10 +63,11 @@ class SignatureGenerator(object):
return
def get_taskdata(self):
return (self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash)
return (self.runtaskdeps, self.taskhash, self.file_checksum_values)
def set_taskdata(self, data):
self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash = data
self.runtaskdeps, self.taskhash, self.file_checksum_values = data
class SignatureGeneratorBasic(SignatureGenerator):
"""
@@ -80,22 +80,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
if checksum_cache_file:
self.checksum_cache = FileChecksumCache()
self.checksum_cache.init_cache(data, checksum_cache_file)
else:
self.checksum_cache = None
def init_rundepcheck(self, data):
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
if self.taskwhitelist:
self.twl = re.compile(self.taskwhitelist)
else:
@@ -103,7 +96,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
def _build_data(self, fn, d):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
taskdeps = {}
@@ -136,11 +128,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
var = lookupcache[dep]
if var is not None:
data = data + str(var)
datahash = hashlib.md5(data.encode("utf-8")).hexdigest()
k = fn + "." + task
if not ignore_mismatch and k in self.basehash and self.basehash[k] != datahash:
bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], datahash))
self.basehash[k] = datahash
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
taskdeps[task] = alldeps
self.taskdeps[fn] = taskdeps
@@ -151,21 +139,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
def finalise(self, fn, d, variant):
mc = d.getVar("__BBMULTICONFIG", False) or ""
if variant or mc:
fn = bb.cache.realfn2virtual(fn, variant, mc)
if variant:
fn = "virtual:" + variant + ":" + fn
try:
taskdeps = self._build_data(fn, d)
except bb.parse.SkipRecipe:
raise
except:
bb.warn("Error during finalise of %s" % fn)
bb.note("Error during finalise of %s" % fn)
raise
#Slow but can be useful for debugging mismatched basehashes
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
# self.dump_sigtask(fn, task, d.getVar("STAMP", True), False)
for task in taskdeps:
d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])
@@ -191,11 +176,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
def get_taskhash(self, fn, task, deps, dataCache):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.basehash[k] = data
self.runtaskdeps[k] = []
self.file_checksum_values[k] = []
self.file_checksum_values[k] = {}
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
@@ -206,50 +189,26 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.runtaskdeps[k].append(dep)
if task in dataCache.file_checksums[fn]:
if self.checksum_cache:
checksums = self.checksum_cache.get_checksums(dataCache.file_checksums[fn][task], recipename)
else:
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
for (f,cs) in checksums:
self.file_checksum_values[k].append((f,cs))
self.file_checksum_values[k][f] = cs
if cs:
data = data + cs
taskdep = dataCache.task_deps[fn]
if 'nostamp' in taskdep and task in taskdep['nostamp']:
# Nostamp tasks need an implicit taint so that they force any dependent tasks to run
import uuid
taint = str(uuid.uuid4())
data = data + taint
self.taints[k] = "nostamp:" + taint
taint = self.read_taint(fn, task, dataCache.stamp[fn])
if taint:
data = data + taint
self.taints[k] = taint
logger.warning("%s is tainted from a forced run" % k)
logger.warn("%s is tainted from a forced run" % k)
h = hashlib.md5(data.encode("utf-8")).hexdigest()
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
def writeout_file_checksum_cache(self):
"""Write/update the file checksum cache onto disk"""
if self.checksum_cache:
self.checksum_cache.save_extras()
self.checksum_cache.save_merge()
else:
bb.fetch2.fetcher_parse_save()
bb.fetch2.fetcher_parse_done()
def dump_sigtask(self, fn, task, stampbase, runtime):
k = fn + "." + task
referencestamp = stampbase
if isinstance(runtime, str) and runtime.startswith("customfile"):
if runtime == "customfile":
sigfile = stampbase
referencestamp = runtime[11:]
elif runtime and k in self.taskhash:
sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
else:
@@ -258,7 +217,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.utils.mkdirhier(os.path.dirname(sigfile))
data = {}
data['task'] = task
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
@@ -274,35 +232,21 @@ class SignatureGeneratorBasic(SignatureGenerator):
if runtime and k in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[k]
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[k]]
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[k].items()]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
data['taskhash'] = self.taskhash[k]
taint = self.read_taint(fn, task, referencestamp)
taint = self.read_taint(fn, task, stampbase)
if taint:
data['taint'] = taint
if runtime and k in self.taints:
if 'nostamp:' in self.taints[k]:
data['taint'] = self.taints[k]
computed_basehash = calc_basehash(data)
if computed_basehash != self.basehash[k]:
bb.error("Basehash mismatch %s versus %s for %s" % (computed_basehash, self.basehash[k], k))
if runtime and k in self.taskhash:
computed_taskhash = calc_taskhash(data)
if computed_taskhash != self.taskhash[k]:
bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[k], k))
sigfile = sigfile.replace(self.taskhash[k], computed_taskhash)
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
with os.fdopen(fd, "wb") as stream:
p = pickle.dump(data, stream, -1)
stream.flush()
os.chmod(tmpfile, 0o664)
os.chmod(tmpfile, 0664)
os.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
try:
@@ -311,18 +255,16 @@ class SignatureGeneratorBasic(SignatureGenerator):
pass
raise err
def dump_sigfn(self, fn, dataCaches, options):
if fn in self.taskdeps:
def dump_sigs(self, dataCache, options):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
(mc, _, _) = bb.runqueue.split_tid(tid)
k = fn + "." + task
if k not in self.taskhash:
continue
if dataCaches[mc].basetaskhash[k] != self.basehash[k]:
if dataCache.basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
@@ -350,71 +292,14 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME")
task = "do_" + d.getVar("BB_CURRENTTASK")
referencestamp = bb.build.stamp_internal(task, d, None, True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile:" + referencestamp)
def init_colors(enable_color):
"""Initialise colour dict for passing to compare_sigfiles()"""
# First set up the colours
colors = {'color_title': '\033[1;37;40m',
'color_default': '\033[0;37;40m',
'color_add': '\033[1;32;40m',
'color_remove': '\033[1;31;40m',
}
# Leave all keys present but clear the values
if not enable_color:
for k in colors.keys():
colors[k] = ''
return colors
def worddiff_str(oldstr, newstr, colors=None):
if not colors:
colors = init_colors(False)
diff = simplediff.diff(oldstr.split(' '), newstr.split(' '))
ret = []
for change, value in diff:
value = ' '.join(value)
if change == '=':
ret.append(value)
elif change == '+':
item = '{color_add}{{+{value}+}}{color_default}'.format(value=value, **colors)
ret.append(item)
elif change == '-':
item = '{color_remove}[-{value}-]{color_default}'.format(value=value, **colors)
ret.append(item)
whitespace_note = ''
if oldstr != newstr and ' '.join(oldstr.split()) == ' '.join(newstr.split()):
whitespace_note = ' (whitespace changed)'
return '"%s"%s' % (' '.join(ret), whitespace_note)
def list_inline_diff(oldlist, newlist, colors=None):
if not colors:
colors = init_colors(False)
diff = simplediff.diff(oldlist, newlist)
ret = []
for change, value in diff:
value = ' '.join(value)
if change == '=':
ret.append("'%s'" % value)
elif change == '+':
item = '{color_add}+{value}{color_default}'.format(value=value, **colors)
ret.append(item)
elif change == '-':
item = '{color_remove}-{value}{color_default}'.format(value=value, **colors)
ret.append(item)
return '[%s]' % (', '.join(ret))
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")
def clean_basepath(a):
mc = None
if a.startswith("multiconfig:"):
_, mc, a = a.split(":", 2)
b = a.rsplit("/", 2)[1] + '/' + a.rsplit("/", 2)[2]
b = a.rsplit("/", 2)[1] + a.rsplit("/", 2)[2]
if a.startswith("virtual:"):
b = b + ":" + a.rsplit(":", 1)[0]
if mc:
b = b + ":multiconfig:" + mc
return b
def clean_basepaths(a):
@@ -423,38 +308,13 @@ def clean_basepaths(a):
b[clean_basepath(x)] = a[x]
return b
def clean_basepaths_list(a):
b = []
for x in a:
b.append(clean_basepath(x))
return b
def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
def compare_sigfiles(a, b, recursecb = None):
output = []
colors = init_colors(color)
def color_format(formatstr, **values):
"""
Return colour formatted string.
NOTE: call with the format string, not an already formatted string
containing values (otherwise you could have trouble with { and }
characters)
"""
if not formatstr.endswith('{color_default}'):
formatstr += '{color_default}'
# In newer python 3 versions you can pass both of these directly,
# but we only require 3.4 at the moment
formatparams = {}
formatparams.update(colors)
formatparams.update(values)
return formatstr.format(**formatparams)
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
with open(b, 'rb') as f:
p2 = pickle.Unpickler(f)
b_data = p2.load()
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(open(b, "rb"))
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
sa = set(a.keys())
@@ -502,100 +362,50 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
return changed, added, removed
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
output.append("basewhitelist changed from '%s' to '%s'" % (a_data['basewhitelist'], b_data['basewhitelist']))
if a_data['basewhitelist'] and b_data['basewhitelist']:
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
output.append("taskwhitelist changed from '%s' to '%s'" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
if a_data['taskdeps'] != b_data['taskdeps']:
output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
output.append("Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
if a_data['basehash'] != b_data['basehash'] and not collapsed:
output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))
if a_data['basehash'] != b_data['basehash']:
output.append("basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash']))
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in changed:
output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
output.append("List of dependencies for variable %s changed from '%s' to '%s'" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
if added:
for dep in added:
output.append(color_format("{color_title}Dependency on variable %s was added") % (dep))
output.append("Dependency on variable %s was added" % (dep))
if removed:
for dep in removed:
output.append(color_format("{color_title}Dependency on Variable %s was removed") % (dep))
output.append("Dependency on Variable %s was removed" % (dep))
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
oldval = a_data['varvals'][dep]
newval = b_data['varvals'][dep]
if newval and oldval and ('\n' in oldval or '\n' in newval):
diff = difflib.unified_diff(oldval.splitlines(), newval.splitlines(), lineterm='')
# Cut off the first two lines, since we aren't interested in
# the old/new filename (they are blank anyway in this case)
difflines = list(diff)[2:]
if color:
# Add colour to diff output
for i, line in enumerate(difflines):
if line.startswith('+'):
line = color_format('{color_add}{line}', line=line)
difflines[i] = line
elif line.startswith('-'):
line = color_format('{color_remove}{line}', line=line)
difflines[i] = line
output.append(color_format("{color_title}Variable {var} value changed:{color_default}\n{diff}", var=dep, diff='\n'.join(difflines)))
elif newval and oldval and (' ' in oldval or ' ' in newval):
output.append(color_format("{color_title}Variable {var} value changed:{color_default}\n{diff}", var=dep, diff=worddiff_str(oldval, newval, colors)))
else:
output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))
if not 'file_checksum_values' in a_data:
a_data['file_checksum_values'] = {}
if not 'file_checksum_values' in b_data:
b_data['file_checksum_values'] = {}
output.append("Variable %s value changed from '%s' to '%s'" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
for f, old, new in changed:
output.append(color_format("{color_title}Checksum for file %s changed{color_default} from %s to %s") % (f, old, new))
output.append("Checksum for file %s changed from %s to %s" % (f, old, new))
if added:
for f in added:
output.append(color_format("{color_title}Dependency on checksum of file %s was added") % (f))
output.append("Dependency on checksum of file %s was added" % (f))
if removed:
for f in removed:
output.append(color_format("{color_title}Dependency on checksum of file %s was removed") % (f))
if not 'runtaskdeps' in a_data:
a_data['runtaskdeps'] = {}
if not 'runtaskdeps' in b_data:
b_data['runtaskdeps'] = {}
if not collapsed:
if len(a_data['runtaskdeps']) != len(b_data['runtaskdeps']):
changed = ["Number of task dependencies changed"]
else:
changed = []
for idx, task in enumerate(a_data['runtaskdeps']):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b] and not collapsed:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (clean_basepath(a), a_data['runtaskhashes'][a], clean_basepath(b), b_data['runtaskhashes'][b]))
if changed:
clean_a = clean_basepaths_list(a_data['runtaskdeps'])
clean_b = clean_basepaths_list(b_data['runtaskdeps'])
if clean_a != clean_b:
output.append(color_format("{color_title}runtaskdeps changed:{color_default}\n%s") % list_inline_diff(clean_a, clean_b, colors))
else:
output.append(color_format("{color_title}runtaskdeps changed:"))
output.append("\n".join(changed))
output.append("Dependency on checksum of file %s was removed" % (f))
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
@@ -611,7 +421,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
output.append("Dependency on task %s was added with hash %s" % (clean_basepath(dep), b[dep]))
if removed:
for dep in removed:
adep_found = False
@@ -621,69 +431,30 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
output.append("Dependency on task %s was removed with hash %s" % (clean_basepath(dep), a[dep]))
if changed:
for dep in changed:
if not collapsed:
output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
output.append("Hash for dependent task %s changed from %s to %s" % (clean_basepath(dep), a[dep], b[dep]))
if callable(recursecb):
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
recout = recursecb(dep, a[dep], b[dep])
if recout:
if collapsed:
output.extend(recout)
else:
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
output = [output[-1]] + recout
output = [output[-1]] + recout
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
if a_taint != b_taint:
output.append(color_format("{color_title}Taint (by forced/invalidated task) changed{color_default} from %s to %s") % (a_taint, b_taint))
output.append("Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint))
return output
def calc_basehash(sigdata):
task = sigdata['task']
basedata = sigdata['varvals'][task]
if basedata is None:
basedata = ''
alldeps = sigdata['taskdeps']
for dep in alldeps:
basedata = basedata + dep
val = sigdata['varvals'][dep]
if val is not None:
basedata = basedata + str(val)
return hashlib.md5(basedata.encode("utf-8")).hexdigest()
def calc_taskhash(sigdata):
data = sigdata['basehash']
for dep in sigdata['runtaskdeps']:
data = data + sigdata['runtaskhashes'][dep]
for c in sigdata['file_checksum_values']:
data = data + c[1]
if 'taint' in sigdata:
if 'nostamp:' in sigdata['taint']:
data = data + sigdata['taint'][8:]
else:
data = data + sigdata['taint']
return hashlib.md5(data.encode("utf-8")).hexdigest()
def dump_sigfile(a):
output = []
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
output.append("basewhitelist: %s" % (a_data['basewhitelist']))
@@ -712,13 +483,4 @@ def dump_sigfile(a):
if 'taint' in a_data:
output.append("Tainted (by forced/invalidated task): %s" % a_data['taint'])
if 'task' in a_data:
computed_basehash = calc_basehash(a_data)
output.append("Computed base hash is %s and from file %s" % (computed_basehash, a_data['basehash']))
else:
output.append("Unable to compute base hash")
computed_taskhash = calc_taskhash(a_data)
output.append("Computed task hash is %s" % computed_taskhash)
return output

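To tie the siggen hunks together: the sigdata files written by dump_sigtask() can be diffed with the compare_sigfiles() function above. A hedged usage sketch, with hypothetical file names and using the newer signature shown in this diff:

import bb.siggen

for line in bb.siggen.compare_sigfiles("old.sigdata", "new.sigdata",
                                       color=True):
    print(line)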

@@ -37,24 +37,27 @@ def re_match_strings(target, strings):
return any(name == target or re.match(name, target)
for name in strings)
class TaskEntry:
def __init__(self):
self.tdepends = []
self.idepends = []
self.irdepends = []
class TaskData:
"""
BitBake Task Data implementation
"""
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None, allowincomplete = False):
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None):
self.build_names_index = []
self.run_names_index = []
self.fn_index = []
self.build_targets = {}
self.run_targets = {}
self.external_targets = []
self.seenfns = []
self.taskentries = {}
self.tasks_fnid = []
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
self.tasks_irdepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
self.depids = {}
self.rdepids = {}
@@ -63,14 +66,95 @@ class TaskData:
self.failed_deps = []
self.failed_rdeps = []
self.failed_fns = []
self.failed_fnids = []
self.abort = abort
self.tryaltconfigs = tryaltconfigs
self.allowincomplete = allowincomplete
self.skiplist = skiplist
def getbuild_id(self, name):
"""
Return an ID number for the build target name.
If it doesn't exist, create one.
"""
if not name in self.build_names_index:
self.build_names_index.append(name)
return len(self.build_names_index) - 1
return self.build_names_index.index(name)
def getrun_id(self, name):
"""
Return an ID number for the run target name.
If it doesn't exist, create one.
"""
if not name in self.run_names_index:
self.run_names_index.append(name)
return len(self.run_names_index) - 1
return self.run_names_index.index(name)
def getfn_id(self, name):
"""
Return an ID number for the filename.
If it doesn't exist, create one.
"""
if not name in self.fn_index:
self.fn_index.append(name)
return len(self.fn_index) - 1
return self.fn_index.index(name)
def gettask_ids(self, fnid):
"""
Return an array of the ID numbers matching a given fnid.
"""
ids = []
if fnid in self.tasks_lookup:
for task in self.tasks_lookup[fnid]:
ids.append(self.tasks_lookup[fnid][task])
return ids
def gettask_id_fromfnid(self, fnid, task):
"""
Return an ID number for the task matching fnid and task.
"""
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
return None
def gettask_id(self, fn, task, create = True):
"""
Return an ID number for the task matching fn and task.
If it doesn't exist, create one by default.
Optionally return None instead.
"""
fnid = self.getfn_id(fn)
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
if not create:
return None
self.tasks_name.append(task)
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
self.tasks_irdepends.append([])
listid = len(self.tasks_name) - 1
if fnid not in self.tasks_lookup:
self.tasks_lookup[fnid] = {}
self.tasks_lookup[fnid][task] = listid
return listid
def add_tasks(self, fn, dataCache):
"""
Add tasks for a given fn to the database
@@ -78,60 +162,58 @@ class TaskData:
task_deps = dataCache.task_deps[fn]
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
bb.msg.fatal("TaskData", "Trying to re-add a failed file? Something is broken...")
# Check if we've already seen this fn
if fn in self.seenfns:
if fnid in self.tasks_fnid:
return
self.seenfns.append(fn)
self.add_extra_deps(fn, dataCache)
# Common code for dep_name/depends = 'depends'/idepends and 'rdepends'/irdepends
def handle_deps(task, dep_name, depends, seen):
if dep_name in task_deps and task in task_deps[dep_name]:
ids = []
for dep in task_deps[dep_name][task].split():
if dep:
parts = dep.split(":")
if len(parts) != 2:
bb.msg.fatal("TaskData", "Error for %s:%s[%s], dependency %s in '%s' does not contain exactly one ':' character.\n Task '%s' should be specified in the form 'packagename:task'" % (fn, task, dep_name, dep, task_deps[dep_name][task], dep_name))
ids.append((parts[0], parts[1]))
seen(parts[0])
depends.extend(ids)
for task in task_deps['tasks']:
tid = "%s:%s" % (fn, task)
self.taskentries[tid] = TaskEntry()
# Work out task dependencies
parentids = []
for dep in task_deps['parents'][task]:
if dep not in task_deps['tasks']:
bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep))
continue
parentid = "%s:%s" % (fn, dep)
parentid = self.gettask_id(fn, dep)
parentids.append(parentid)
self.taskentries[tid].tdepends.extend(parentids)
taskid = self.gettask_id(fn, task)
self.tasks_tdepends[taskid].extend(parentids)
# Touch all intertask dependencies
handle_deps(task, 'depends', self.taskentries[tid].idepends, self.seen_build_target)
handle_deps(task, 'rdepends', self.taskentries[tid].irdepends, self.seen_run_target)
if 'depends' in task_deps and task in task_deps['depends']:
ids = []
for dep in task_deps['depends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
if 'rdepends' in task_deps and task in task_deps['rdepends']:
ids = []
for dep in task_deps['rdepends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'rdepends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getrun_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_irdepends[taskid].extend(ids)
# Work out build dependencies
if not fn in self.depids:
dependids = set()
if not fnid in self.depids:
dependids = {}
for depend in dataCache.deps[fn]:
dependids.add(depend)
self.depids[fn] = list(dependids)
dependids[self.getbuild_id(depend)] = None
self.depids[fnid] = dependids.keys()
logger.debug(2, "Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
# Work out runtime dependencies
if not fn in self.rdepids:
rdependids = set()
if not fnid in self.rdepids:
rdependids = {}
rdepends = dataCache.rundeps[fn]
rrecs = dataCache.runrecs[fn]
rdependlist = []
@@ -139,48 +221,33 @@ class TaskData:
for package in rdepends:
for rdepend in rdepends[package]:
rdependlist.append(rdepend)
rdependids.add(rdepend)
rdependids[self.getrun_id(rdepend)] = None
for package in rrecs:
for rdepend in rrecs[package]:
rreclist.append(rdepend)
rdependids.add(rdepend)
rdependids[self.getrun_id(rdepend)] = None
if rdependlist:
logger.debug(2, "Added runtime dependencies %s for %s", str(rdependlist), fn)
if rreclist:
logger.debug(2, "Added runtime recommendations %s for %s", str(rreclist), fn)
self.rdepids[fn] = list(rdependids)
self.rdepids[fnid] = rdependids.keys()
for dep in self.depids[fn]:
self.seen_build_target(dep)
for dep in self.depids[fnid]:
if dep in self.failed_deps:
self.fail_fn(fn)
self.fail_fnid(fnid)
return
for dep in self.rdepids[fn]:
self.seen_run_target(dep)
for dep in self.rdepids[fnid]:
if dep in self.failed_rdeps:
self.fail_fn(fn)
self.fail_fnid(fnid)
return
def add_extra_deps(self, fn, dataCache):
func = dataCache.extradepsfunc.get(fn, None)
if func:
bb.providers.buildWorldTargetList(dataCache)
pn = dataCache.pkg_fn[fn]
params = {'deps': dataCache.deps[fn],
'world_target': dataCache.world_target,
'pkg_pn': dataCache.pkg_pn,
'self_pn': pn}
funcname = '_%s_calculate_extra_depends' % pn.replace('-', '_')
paramlist = ','.join(params.keys())
func = 'def %s(%s):\n%s\n\n%s(%s)' % (funcname, paramlist, func, funcname, paramlist)
bb.utils.better_exec(func, params)
def have_build_target(self, target):
"""
Have we a build target matching this name?
"""
if target in self.build_targets and self.build_targets[target]:
targetid = self.getbuild_id(target)
if targetid in self.build_targets:
return True
return False
@@ -188,54 +255,50 @@ class TaskData:
"""
Have we a runtime target matching this name?
"""
if target in self.run_targets and self.run_targets[target]:
targetid = self.getrun_id(target)
if targetid in self.run_targets:
return True
return False
def seen_build_target(self, name):
"""
Maintain a list of build targets
"""
if name not in self.build_targets:
self.build_targets[name] = []
def add_build_target(self, fn, item):
"""
Add a build target.
If already present, append the provider fn to the list
"""
if item in self.build_targets:
if fn in self.build_targets[item]:
return
self.build_targets[item].append(fn)
return
self.build_targets[item] = [fn]
targetid = self.getbuild_id(item)
fnid = self.getfn_id(fn)
def seen_run_target(self, name):
"""
Maintain a list of runtime build targets
"""
if name not in self.run_targets:
self.run_targets[name] = []
if targetid in self.build_targets:
if fnid in self.build_targets[targetid]:
return
self.build_targets[targetid].append(fnid)
return
self.build_targets[targetid] = [fnid]
def add_runtime_target(self, fn, item):
"""
Add a runtime target.
If already present, append the provider fn to the list
"""
if item in self.run_targets:
if fn in self.run_targets[item]:
return
self.run_targets[item].append(fn)
return
self.run_targets[item] = [fn]
targetid = self.getrun_id(item)
fnid = self.getfn_id(fn)
def mark_external_target(self, target):
if targetid in self.run_targets:
if fnid in self.run_targets[targetid]:
return
self.run_targets[targetid].append(fnid)
return
self.run_targets[targetid] = [fnid]
def mark_external_target(self, item):
"""
Mark a build target as being externally requested
"""
if target not in self.external_targets:
self.external_targets.append(target)
targetid = self.getbuild_id(item)
if targetid not in self.external_targets:
self.external_targets.append(targetid)
def get_unresolved_build_targets(self, dataCache):
"""
@@ -243,12 +306,12 @@ class TaskData:
are unknown.
"""
unresolved = []
for target in self.build_targets:
for target in self.build_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
continue
if target in self.failed_deps:
if self.build_names_index.index(target) in self.failed_deps:
continue
if not self.build_targets[target]:
if not self.have_build_target(target):
unresolved.append(target)
return unresolved
@@ -258,12 +321,12 @@ class TaskData:
are unknown.
"""
unresolved = []
for target in self.run_targets:
for target in self.run_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
continue
if target in self.failed_rdeps:
if self.run_names_index.index(target) in self.failed_rdeps:
continue
if not self.run_targets[target]:
if not self.have_runtime_target(target):
unresolved.append(target)
return unresolved
@@ -271,26 +334,50 @@ class TaskData:
"""
Return a list of providers of item
"""
return self.build_targets[item]
targetid = self.getbuild_id(item)
def get_dependees(self, item):
return self.build_targets[targetid]
def get_dependees(self, itemid):
"""
Return a list of targets which depend on item
"""
dependees = []
for fn in self.depids:
if item in self.depids[fn]:
dependees.append(fn)
for fnid in self.depids:
if itemid in self.depids[fnid]:
dependees.append(fnid)
return dependees
def get_rdependees(self, item):
def get_dependees_str(self, item):
"""
Return a list of targets which depend on item as a user readable string
"""
itemid = self.getbuild_id(item)
dependees = []
for fnid in self.depids:
if itemid in self.depids[fnid]:
dependees.append(self.fn_index[fnid])
return dependees
def get_rdependees(self, itemid):
"""
Return a list of targets which depend on runtime item
"""
dependees = []
for fn in self.rdepids:
if item in self.rdepids[fn]:
dependees.append(fn)
for fnid in self.rdepids:
if itemid in self.rdepids[fnid]:
dependees.append(fnid)
return dependees
def get_rdependees_str(self, item):
"""
Return a list of targets which depend on runtime item as a user readable string
"""
itemid = self.getrun_id(item)
dependees = []
for fnid in self.rdepids:
if itemid in self.rdepids[fnid]:
dependees.append(self.fn_index[fnid])
return dependees
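get_dependees and get_rdependees invert the per-file dependency maps by a linear scan, and the *_str variants in the id-based code additionally map file ids back to readable names through fn_index before returning. The inversion itself, with made-up data:
depids = {
    'a.bb': ['libfoo', 'libbar'],   # file -> targets it depends on
    'b.bb': ['libfoo'],
    'c.bb': ['libbaz'],
}

def get_dependees(item):
    # "Which files depend on item?" -- same scan as above.
    return [fn for fn, deps in depids.items() if item in deps]

print(get_dependees('libfoo'))   # ['a.bb', 'b.bb']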
def get_reasons(self, item, runtime=False):
@@ -326,7 +413,7 @@ class TaskData:
except bb.providers.NoProvider:
if self.abort:
raise
self.remove_buildtarget(item)
self.remove_buildtarget(self.getbuild_id(item))
self.mark_external_target(item)
@@ -341,14 +428,7 @@ class TaskData:
return
if not item in dataCache.providers:
close_matches = self.get_close_matches(item, list(dataCache.providers.keys()))
# Is it in RuntimeProviders ?
all_p = bb.providers.getRuntimeProviders(dataCache, item)
for fn in all_p:
new = dataCache.pkg_fn[fn] + " RPROVIDES " + item
if new not in close_matches:
close_matches.append(new)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees(item), reasons=self.get_reasons(item), close_matches=close_matches), cfgData)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item), close_matches=self.get_close_matches(item, dataCache.providers.keys())), cfgData)
raise bb.providers.NoProvider(item)
if self.have_build_target(item):
@@ -357,10 +437,10 @@ class TaskData:
all_p = dataCache.providers[item]
eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not p in self.failed_fns]
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
raise bb.providers.NoProvider(item)
if len(eligible) > 1 and foundUnique == False:
@@ -372,7 +452,8 @@ class TaskData:
self.consider_msgs_cache.append(item)
for fn in eligible:
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding %s to satisfy %s", fn, item)
self.add_build_target(fn, item)
@@ -396,14 +477,14 @@ class TaskData:
all_p = bb.providers.getRuntimeProviders(dataCache, item)
if not all_p:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees(item), reasons=self.get_reasons(item, True)), cfgData)
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=self.get_reasons(item, True)), cfgData)
raise bb.providers.NoRProvider(item)
eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not p in self.failed_fns]
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
raise bb.providers.NoRProvider(item)
if len(eligible) > 1 and numberPreferred == 0:
@@ -425,80 +506,80 @@ class TaskData:
# run through the list until we find one that we can build
for fn in eligible:
if fn in self.failed_fns:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
def fail_fn(self, fn, missing_list=None):
def fail_fnid(self, fnid, missing_list = []):
"""
Mark a file as failed (unbuildable)
Remove any references from build and runtime provider lists
missing_list: a list of missing requirements for this target
"""
if fn in self.failed_fns:
if fnid in self.failed_fnids:
return
if not missing_list:
missing_list = []
logger.debug(1, "File '%s' is unbuildable, removing...", fn)
self.failed_fns.append(fn)
logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
if fn in self.build_targets[target]:
self.build_targets[target].remove(fn)
if fnid in self.build_targets[target]:
self.build_targets[target].remove(fnid)
if len(self.build_targets[target]) == 0:
self.remove_buildtarget(target, missing_list)
for target in self.run_targets:
if fn in self.run_targets[target]:
self.run_targets[target].remove(fn)
if fnid in self.run_targets[target]:
self.run_targets[target].remove(fnid)
if len(self.run_targets[target]) == 0:
self.remove_runtarget(target, missing_list)
def remove_buildtarget(self, target, missing_list=None):
def remove_buildtarget(self, targetid, missing_list = []):
"""
Mark a build target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [target]
missing_list = [self.build_names_index[targetid]]
else:
missing_list = [target] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", target, missing_list)
self.failed_deps.append(target)
dependees = self.get_dependees(target)
for fn in dependees:
self.fail_fn(fn, missing_list)
for tid in self.taskentries:
for (idepend, idependtask) in self.taskentries[tid].idepends:
if idepend == target:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
missing_list = [self.build_names_index[targetid]] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.build_names_index[targetid], missing_list)
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_idepends)):
idepends = self.tasks_idepends[taskid]
for (idependid, idependtask) in idepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
if self.abort and target in self.external_targets:
if self.abort and targetid in self.external_targets:
target = self.build_names_index[targetid]
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)
def remove_runtarget(self, target, missing_list=None):
def remove_runtarget(self, targetid, missing_list = []):
"""
Mark a run target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [target]
missing_list = [self.run_names_index[targetid]]
else:
missing_list = [target] + missing_list
missing_list = [self.run_names_index[targetid]] + missing_list
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", target, missing_list)
self.failed_rdeps.append(target)
dependees = self.get_rdependees(target)
for fn in dependees:
self.fail_fn(fn, missing_list)
for tid in self.taskentries:
for (idepend, idependtask) in self.taskentries[tid].irdepends:
if idepend == target:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.run_names_index[targetid], missing_list)
self.failed_rdeps.append(targetid)
dependees = self.get_rdependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_irdepends)):
irdepends = self.tasks_irdepends[taskid]
for (idependid, idependtask) in irdepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
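Worth noting in this hunk: one side declares missing_list = [] as a default argument while the other uses the missing_list=None sentinel plus an explicit check. The sentinel form is the safer idiom, because a mutable default is evaluated once at function definition time and then shared by every call:
def risky(missing_list=[]):       # one list, created at def time
    missing_list.append('x')
    return missing_list

def safe(missing_list=None):      # fresh list per call
    if not missing_list:
        missing_list = []
    missing_list.append('x')
    return missing_list

print(risky(), risky())   # ['x', 'x'] ['x', 'x'] -- state leaks across calls
print(safe(), safe())     # ['x'] ['x']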
def add_unresolved(self, cfgData, dataCache):
"""
@@ -512,68 +593,59 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, target)
added = added + 1
except bb.providers.NoProvider:
if self.abort and target in self.external_targets and not self.allowincomplete:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets:
raise
if not self.allowincomplete:
self.remove_buildtarget(target)
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
try:
self.add_rprovider(cfgData, dataCache, target)
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
self.remove_runtarget(target)
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
if added == 0:
break
# self.dump_data()
def get_providermap(self, prefix=None):
provmap = {}
for name in self.build_targets:
if prefix and not name.startswith(prefix):
continue
if self.have_build_target(name):
provider = self.get_provider(name)
if provider:
provmap[name] = provider[0]
return provmap
def dump_data(self):
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_targets))
logger.debug(3, ", ".join(self.build_names_index))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_targets))
logger.debug(3, ", ".join(self.run_names_index))
logger.debug(3, "build_targets:")
for target in self.build_targets:
for buildid in xrange(len(self.build_names_index)):
target = self.build_names_index[buildid]
targets = "None"
if target in self.build_targets:
targets = self.build_targets[target]
logger.debug(3, " %s: %s", target, targets)
if buildid in self.build_targets:
targets = self.build_targets[buildid]
logger.debug(3, " (%s)%s: %s", buildid, target, targets)
logger.debug(3, "run_targets:")
for target in self.run_targets:
for runid in xrange(len(self.run_names_index)):
target = self.run_names_index[runid]
targets = "None"
if target in self.run_targets:
targets = self.run_targets[target]
logger.debug(3, " %s: %s", target, targets)
if runid in self.run_targets:
targets = self.run_targets[runid]
logger.debug(3, " (%s)%s: %s", runid, target, targets)
logger.debug(3, "tasks:")
for tid in self.taskentries:
logger.debug(3, " %s: %s %s %s",
tid,
self.taskentries[tid].idepends,
self.taskentries[tid].irdepends,
self.taskentries[tid].tdepends)
for task in xrange(len(self.tasks_name)):
logger.debug(3, " (%s)%s - %s: %s",
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task])
logger.debug(3, "dependency ids (per fn):")
for fn in self.depids:
logger.debug(3, " %s: %s", fn, self.depids[fn])
for fnid in self.depids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.depids[fnid])
logger.debug(3, "runtime dependency ids (per fn):")
for fn in self.rdepids:
logger.debug(3, " %s: %s", fn, self.rdepids[fn])
for fnid in self.rdepids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.rdepids[fnid])


@@ -49,9 +49,6 @@ class ReferenceTest(unittest.TestCase):
def assertExecs(self, execs):
self.assertEqual(self.execs, execs)
def assertContains(self, contains):
self.assertEqual(self.contains, contains)
class VariableReferenceTest(ReferenceTest):
def parseExpression(self, exp):
@@ -71,7 +68,7 @@ class VariableReferenceTest(ReferenceTest):
def test_python_reference(self):
self.setEmptyVars(["BAR"])
self.parseExpression("${@d.getVar('BAR') + 'foo'}")
self.parseExpression("${@bb.data.getVar('BAR', d, True) + 'foo'}")
self.assertReferences(set(["BAR"]))
class ShellReferenceTest(ReferenceTest):
@@ -194,8 +191,8 @@ class PythonReferenceTest(ReferenceTest):
if hasattr(bb.utils, "_context"):
self.context = bb.utils._context
else:
import builtins
self.context = builtins.__dict__
import __builtin__
self.context = __builtin__.__dict__
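The builtins/__builtin__ swap above is the usual Python 3 versus Python 2 module rename. A guarded import keeps the test importable on either interpreter (a common compatibility sketch, not something this diff itself adds):
try:
    import builtins                     # Python 3
except ImportError:
    import __builtin__ as builtins      # Python 2
context = builtins.__dict__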
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
@@ -204,7 +201,6 @@ class PythonReferenceTest(ReferenceTest):
self.references = parsedvar.references | parser.references
self.execs = parser.execs
self.contains = parser.contains
@staticmethod
def indent(value):
@@ -213,17 +209,17 @@ be. These unit tests are testing snippets."""
return " " + value
def test_getvar_reference(self):
self.parseExpression("d.getVar('foo')")
self.parseExpression("bb.data.getVar('foo', d, True)")
self.assertReferences(set(["foo"]))
self.assertExecs(set())
def test_getvar_computed_reference(self):
self.parseExpression("d.getVar('f' + 'o' + 'o')")
self.parseExpression("bb.data.getVar('f' + 'o' + 'o', d, True)")
self.assertReferences(set())
self.assertExecs(set())
def test_getvar_exec_reference(self):
self.parseExpression("eval('d.getVar(\"foo\")')")
self.parseExpression("eval('bb.data.getVar(\"foo\", d, True)')")
self.assertReferences(set())
self.assertExecs(set(["eval"]))
@@ -269,35 +265,15 @@ be. These unit tests are testing snippets."""
self.assertExecs(set(["testget"]))
del self.context["testget"]
def test_contains(self):
self.parseExpression('bb.utils.contains("TESTVAR", "one", "true", "false", d)')
self.assertContains({'TESTVAR': {'one'}})
def test_contains_multi(self):
self.parseExpression('bb.utils.contains("TESTVAR", "one two", "true", "false", d)')
self.assertContains({'TESTVAR': {'one two'}})
def test_contains_any(self):
self.parseExpression('bb.utils.contains_any("TESTVAR", "hello", "true", "false", d)')
self.assertContains({'TESTVAR': {'hello'}})
def test_contains_any_multi(self):
self.parseExpression('bb.utils.contains_any("TESTVAR", "one two three", "true", "false", d)')
self.assertContains({'TESTVAR': {'one', 'two', 'three'}})
def test_contains_filter(self):
self.parseExpression('bb.utils.filter("TESTVAR", "hello there world", d)')
self.assertContains({'TESTVAR': {'hello', 'there', 'world'}})
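For orientation: bb.utils.contains checks that all space-separated checkvalues occur in the variable's value and returns truevalue or falsevalue accordingly, while contains_any needs only one match. A minimal re-implementation of that behaviour (the real functions in bb/utils.py take the datastore d and look the variable up themselves, and also feed the parser's contains tracking exercised by these assertions):
def contains(value, checkvalues, truevalue, falsevalue):
    # value stands in for d.getVar(variable); sketch only.
    if set(checkvalues.split()).issubset(set(value.split())):
        return truevalue
    return falsevalue

def contains_any(value, checkvalues, truevalue, falsevalue):
    if set(checkvalues.split()) & set(value.split()):
        return truevalue
    return falsevalue

print(contains('one two', 'one', 'true', 'false'))            # true
print(contains('one two', 'one three', 'true', 'false'))      # false
print(contains_any('one two', 'two three', 'true', 'false'))  # true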
class DependencyReferenceTest(ReferenceTest):
pydata = """
d.getVar('somevar')
bb.data.getVar('somevar', d, True)
def test(d):
foo = 'bar %s' % 'foo'
def test2(d):
d.getVar(foo)
d.getVar(foo, True)
d.getVar('bar', False)
test2(d)
@@ -309,24 +285,19 @@ def a():
test(d)
d.expand(d.getVar("something", False))
d.expand("${inexpand} somethingelse")
d.getVar(a(), False)
bb.data.expand(bb.data.getVar("something", False, d), d)
bb.data.expand("${inexpand} somethingelse", d)
bb.data.getVar(a(), d, False)
"""
def test_python(self):
self.d.setVar("FOO", self.pydata)
self.setEmptyVars(["inexpand", "a", "test2", "test"])
self.d.setVarFlags("FOO", {
"func": True,
"python": True,
"lineno": 1,
"filename": "example.bb",
})
self.d.setVarFlags("FOO", {"func": True, "python": True})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
self.assertEquals(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
shelldata = """
@@ -373,7 +344,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
self.assertEquals(deps, set(["somevar", "inverted"] + execs))
def test_vardeps(self):
@@ -383,7 +354,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
self.assertEquals(deps, set(["oe_libinstall"]))
def test_vardeps_expand(self):
self.d.setVar("oe_libinstall", "echo test")
@@ -392,31 +363,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
def test_contains_vardeps(self):
expr = '${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)} \
${@bb.utils.contains("TESTVAR", "testval testval2", "yetanothervalue", "", d)} \
${@bb.utils.contains("TESTVAR", "testval2 testval3", "blah", "", d)} \
${@bb.utils.contains_any("TESTVAR", "testval2 testval3", "lastone", "", d)}'
parsedvar = self.d.expandWithRefs(expr, None)
# Check contains
self.assertEqual(parsedvar.contains, {'TESTVAR': {'testval2 testval3', 'anothervalue', 'somevalue', 'testval testval2', 'testval2', 'testval3'}})
# Check dependencies
self.d.setVar('ANOTHERVAR', expr)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([expr,
'TESTVAR{anothervalue} = Set',
'TESTVAR{somevalue} = Unset',
'TESTVAR{testval testval2} = Set',
'TESTVAR{testval2 testval3} = Unset',
'TESTVAR{testval2} = Set',
'TESTVAR{testval3} = Unset'
]))
# Check final value
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])
self.assertEquals(deps, set(["oe_libinstall"]))
#Currently no wildcard support
#def test_vardeps_wildcards(self):


@@ -34,14 +34,14 @@ class COWTestCase(unittest.TestCase):
from bb.COW import COWDictBase
a = COWDictBase.copy()
self.assertEqual(False, 'a' in a)
self.assertEquals(False, a.has_key('a'))
a['a'] = 'a'
a['b'] = 'b'
self.assertEqual(True, 'a' in a)
self.assertEqual(True, 'b' in a)
self.assertEqual('a', a['a'] )
self.assertEqual('b', a['b'] )
self.assertEquals(True, a.has_key('a'))
self.assertEquals(True, a.has_key('b'))
self.assertEquals('a', a['a'] )
self.assertEquals('b', a['b'] )
def testCopyCopy(self):
"""
@@ -60,31 +60,31 @@ class COWTestCase(unittest.TestCase):
c['a'] = 30
# test separation of the two instances
self.assertEqual(False, 'c' in c)
self.assertEqual(30, c['a'])
self.assertEqual(10, b['a'])
self.assertEquals(False, c.has_key('c'))
self.assertEquals(30, c['a'])
self.assertEquals(10, b['a'])
# test copy
b_2 = b.copy()
c_2 = c.copy()
self.assertEqual(False, 'c' in c_2)
self.assertEqual(10, b_2['a'])
self.assertEquals(False, c_2.has_key('c'))
self.assertEquals(10, b_2['a'])
b_2['d'] = 40
self.assertEqual(False, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(False, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
c_2['d'] = 30
self.assertEqual(True, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(30, c_2['d'])
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(True, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(30, c_2['d'])
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
# test copy of the copy
c_3 = c_2.copy()
@@ -92,19 +92,19 @@ class COWTestCase(unittest.TestCase):
b_3_2 = b_2.copy()
c_3['e'] = 4711
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(False, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(False, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
b_3['e'] = 'viel'
self.assertEqual('viel', b_3['e'])
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(True, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals('viel', b_3['e'])
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(True, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
def testCow(self):
from bb.COW import COWDictBase
@@ -115,12 +115,12 @@ class COWTestCase(unittest.TestCase):
copy = c.copy()
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1027, copy['123'])
self.assertEqual(4711, copy['other'])
self.assertEqual({'abc':10, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1027, copy['123'])
self.assertEquals(4711, copy['other'])
self.assertEquals({'abc':10, 'bcd':20}, copy['d'])
# cow it now
copy['123'] = 1028
@@ -128,9 +128,9 @@ class COWTestCase(unittest.TestCase):
copy['d']['abc'] = 20
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1028, copy['123'])
self.assertEqual(4712, copy['other'])
self.assertEqual({'abc':20, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1028, copy['123'])
self.assertEquals(4712, copy['other'])
self.assertEquals({'abc':20, 'bcd':20}, copy['d'])
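The class under test here, bb.COW.COWDictBase, is a copy-on-write dictionary: copy() is cheap because a child records only its own writes while reads fall through to the parent, which is exactly the b/c/b_2/c_2 separation these assertions verify. collections.ChainMap gives the same read-through, write-local behaviour as a standard-library sketch (not the bb.COW implementation):
from collections import ChainMap

parent = {'a': 10}
child = ChainMap({}, parent)    # writes land in the first, empty dict
child['a'] = 30                 # shadows the parent, does not modify it

print(parent['a'], child['a'])  # 10 30
print('b' in child)             # False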


@@ -24,30 +24,6 @@ import unittest
import bb
import bb.data
import bb.parse
import logging
class LogRecord():
def __enter__(self):
logs = []
class LogHandler(logging.Handler):
def emit(self, record):
logs.append(record)
logger = logging.getLogger("BitBake")
handler = LogHandler()
self.handler = handler
logger.addHandler(handler)
return logs
def __exit__(self, type, value, traceback):
logger = logging.getLogger("BitBake")
logger.removeHandler(self.handler)
return
def logContains(item, logs):
for l in logs:
m = l.getMessage()
if item in m:
return True
return False
class DataExpansions(unittest.TestCase):
def setUp(self):
@@ -77,14 +53,9 @@ class DataExpansions(unittest.TestCase):
self.assertEqual(str(val), "boo value_of_foo")
def test_python_snippet_getvar(self):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_python_unexpanded(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
self.assertEqual(str(val), "${@d.getVar('foo') + ' ${unsetvar}'}")
def test_python_snippet_syntax_error(self):
self.d.setVar("FOO", "${@foo = 5}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
@@ -99,7 +70,7 @@ class DataExpansions(unittest.TestCase):
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_value_containing_value(self):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_reference_undefined_var(self):
@@ -109,7 +80,7 @@ class DataExpansions(unittest.TestCase):
def test_double_reference(self):
self.d.setVar("BAR", "bar value")
self.d.setVar("FOO", "${BAR} foo ${BAR}")
val = self.d.getVar("FOO")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "bar value foo bar value")
def test_direct_recursion(self):
@@ -129,32 +100,26 @@ class DataExpansions(unittest.TestCase):
def test_incomplete_varexp_single_quotes(self):
self.d.setVar("FOO", "sed -i -e 's:IP{:I${:g' $pc")
val = self.d.getVar("FOO")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "sed -i -e 's:IP{:I${:g' $pc")
def test_nonstring(self):
self.d.setVar("TEST", 5)
val = self.d.getVar("TEST")
val = self.d.getVar("TEST", True)
self.assertEqual(str(val), "5")
def test_rename(self):
self.d.renameVar("foo", "newfoo")
self.assertEqual(self.d.getVar("newfoo", False), "value_of_foo")
self.assertEqual(self.d.getVar("foo", False), None)
self.assertEqual(self.d.getVar("newfoo"), "value_of_foo")
self.assertEqual(self.d.getVar("foo"), None)
def test_deletion(self):
self.d.delVar("foo")
self.assertEqual(self.d.getVar("foo", False), None)
self.assertEqual(self.d.getVar("foo"), None)
def test_keys(self):
keys = list(self.d.keys())
self.assertCountEqual(keys, ['value_of_foo', 'foo', 'bar'])
def test_keys_deletion(self):
newd = bb.data.createCopy(self.d)
newd.delVar("bar")
keys = list(newd.keys())
self.assertCountEqual(keys, ['value_of_foo', 'foo'])
keys = self.d.keys()
self.assertEqual(keys, ['value_of_foo', 'foo', 'bar'])
class TestNestedExpansions(unittest.TestCase):
def setUp(self):
@@ -201,28 +166,28 @@ class TestMemoize(unittest.TestCase):
def test_memoized(self):
d = bb.data.init()
d.setVar("FOO", "bar")
self.assertTrue(d.getVar("FOO", False) is d.getVar("FOO", False))
self.assertTrue(d.getVar("FOO") is d.getVar("FOO"))
def test_not_memoized(self):
d1 = bb.data.init()
d2 = bb.data.init()
d1.setVar("FOO", "bar")
d2.setVar("FOO", "bar2")
self.assertTrue(d1.getVar("FOO", False) is not d2.getVar("FOO", False))
self.assertTrue(d1.getVar("FOO") is not d2.getVar("FOO"))
def test_changed_after_memoized(self):
d = bb.data.init()
d.setVar("foo", "value of foo")
self.assertEqual(str(d.getVar("foo", False)), "value of foo")
self.assertEqual(str(d.getVar("foo")), "value of foo")
d.setVar("foo", "second value of foo")
self.assertEqual(str(d.getVar("foo", False)), "second value of foo")
self.assertEqual(str(d.getVar("foo")), "second value of foo")
def test_same_value(self):
d = bb.data.init()
d.setVar("foo", "value of")
d.setVar("bar", "value of")
self.assertEqual(d.getVar("foo", False),
d.getVar("bar", False))
self.assertEqual(d.getVar("foo"),
d.getVar("bar"))
class TestConcat(unittest.TestCase):
def setUp(self):
@@ -234,19 +199,19 @@ class TestConcat(unittest.TestCase):
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.assertEqual(self.d.getVar("TEST"), "foo:val")
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val:bar")
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.d.appendVar("TEST", ":val2")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
class TestConcatOverride(unittest.TestCase):
def setUp(self):
@@ -258,66 +223,55 @@ class TestConcatOverride(unittest.TestCase):
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_prepend", "${FOO}:")
self.assertEqual(self.d.getVar("TEST"), "foo:val")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val:bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.setVar("TEST_prepend", "${FOO}:")
self.d.setVar("TEST_append", ":val2")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
def test_append_unset(self):
self.d.setVar("TEST_prepend", "${FOO}:")
self.d.setVar("TEST_append", ":val2")
self.d.setVar("TEST_append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo::val2:bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
def test_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.assertEqual(self.d.getVar("TEST"), "bar")
def test_remove_cleared(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.d.setVar("TEST", "${VAL} ${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val bar")
# Ensure the value is unchanged if we have an inactive remove override
# (including that whitespace is preserved)
def test_remove_inactive_override(self):
self.d.setVar("TEST", "${VAL} ${BAR} 123")
self.d.setVar("TEST_remove_inactiveoverride", "val")
self.assertEqual(self.d.getVar("TEST"), "val bar 123")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "bar")
def test_doubleref_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
self.assertEqual(self.d.getVar("TEST_TEST"), "bar bar")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST_TEST", True), "bar bar")
def test_empty_remove(self):
self.d.setVar("TEST", "")
self.d.setVar("TEST_remove", "val")
self.assertEqual(self.d.getVar("TEST"), "")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "")
def test_remove_expansion(self):
self.d.setVar("BAR", "Z")
self.d.setVar("TEST", "${BAR}/X Y")
self.d.setVar("TEST_remove", "${BAR}/X")
self.assertEqual(self.d.getVar("TEST"), "Y")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "Y")
def test_remove_expansion_items(self):
self.d.setVar("TEST", "A B C D")
self.d.setVar("BAR", "B D")
self.d.setVar("TEST_remove", "${BAR}")
self.assertEqual(self.d.getVar("TEST"), "A C")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "A C")
class TestOverrides(unittest.TestCase):
def setUp(self):
@@ -326,67 +280,21 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("TEST", "testvalue")
def test_no_override(self):
self.assertEqual(self.d.getVar("TEST"), "testvalue")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue")
def test_one_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
def test_one_override_unset(self):
self.d.setVar("TEST2_bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST2"), "testvalue2")
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue2")
def test_multiple_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_local", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
def test_multiple_combined_overrides(self):
self.d.setVar("TEST_local_foo_bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
def test_multiple_overrides_unset(self):
self.d.setVar("TEST2_local_foo_bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST2"), "testvalue3")
def test_keyexpansion_override(self):
self.d.setVar("LOCAL", "local")
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_${LOCAL}", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
def test_rename_override(self):
self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
self.d.setVar("OVERRIDES", "class-target")
self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools"), "a")
def test_underscore_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_some_val", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
class TestKeyExpansion(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("FOO", "foo")
self.d.setVar("BAR", "foo")
def test_keyexpand(self):
self.d.setVar("VAL_${FOO}", "A")
self.d.setVar("VAL_${BAR}", "B")
with LogRecord() as logs:
bb.data.expandKeys(self.d)
self.assertTrue(logContains("Variable key VAL_${FOO} (A) replaces original key VAL_foo (B)", logs))
self.assertEqual(self.d.getVar("VAL_foo"), "A")
class TestFlags(unittest.TestCase):
def setUp(self):
@@ -396,13 +304,13 @@ class TestFlags(unittest.TestCase):
self.d.setVarFlag("foo", "flag2", "value of flag2")
def test_setflag(self):
self.assertEqual(self.d.getVarFlag("foo", "flag1", False), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2", False), "value of flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), "value of flag2")
def test_delflag(self):
self.d.delVarFlag("foo", "flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1", False), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2", False), None)
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), None)
class Contains(unittest.TestCase):
@@ -441,167 +349,3 @@ class Contains(unittest.TestCase):
self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x", True, False, self.d))
self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x y z", True, False, self.d))
class Serialize(unittest.TestCase):
def test_serialize(self):
import tempfile
import pickle
d = bb.data.init()
d.enableTracking()
d.setVar('HELLO', 'world')
d.setVarFlag('HELLO', 'other', 'planet')
with tempfile.NamedTemporaryFile(delete=False) as tmpfile:
tmpfilename = tmpfile.name
pickle.dump(d, tmpfile)
with open(tmpfilename, 'rb') as f:
newd = pickle.load(f)
os.remove(tmpfilename)
self.assertEqual(d, newd)
self.assertEqual(newd.getVar('HELLO'), 'world')
self.assertEqual(newd.getVarFlag('HELLO', 'other'), 'planet')
# Remote datastore tests
# These really only test the interface, since in actual usage we have a
# tinfoil connector that does everything over RPC, and this doesn't test
# that.
class TestConnector:
d = None
def __init__(self, d):
self.d = d
def getVar(self, name):
return self.d._findVar(name)
def getKeys(self):
return set(self.d.keys())
def getVarHistory(self, name):
return self.d.varhistory.variable(name)
def expandPythonRef(self, varname, expr, d):
localdata = self.d.createCopy()
for key in d.localkeys():
localdata.setVar(d.getVar(key))
varparse = bb.data_smart.VariableParse(varname, localdata)
return varparse.python_sub(expr)
def setVar(self, name, value):
self.d.setVar(name, value)
def setVarFlag(self, name, flag, value):
self.d.setVarFlag(name, flag, value)
def delVar(self, name):
self.d.delVar(name)
return False
def delVarFlag(self, name, flag):
self.d.delVarFlag(name, flag)
return False
def renameVar(self, name, newname):
self.d.renameVar(name, newname)
return False
class Remote(unittest.TestCase):
def test_remote(self):
d1 = bb.data.init()
d1.enableTracking()
d2 = bb.data.init()
d2.enableTracking()
connector = TestConnector(d1)
d2.setVar('_remote_data', connector)
d1.setVar('HELLO', 'world')
d1.setVarFlag('OTHER', 'flagname', 'flagvalue')
self.assertEqual(d2.getVar('HELLO'), 'world')
self.assertEqual(d2.expand('${HELLO}'), 'world')
self.assertEqual(d2.expand('${@d.getVar("HELLO")}'), 'world')
self.assertIn('flagname', d2.getVarFlags('OTHER'))
self.assertEqual(d2.getVarFlag('OTHER', 'flagname'), 'flagvalue')
self.assertEqual(d1.varhistory.variable('HELLO'), d2.varhistory.variable('HELLO'))
# Test setVar on client side affects server
d2.setVar('HELLO', 'other-world')
self.assertEqual(d1.getVar('HELLO'), 'other-world')
# Test setVarFlag on client side affects server
d2.setVarFlag('HELLO', 'flagname', 'flagvalue')
self.assertEqual(d1.getVarFlag('HELLO', 'flagname'), 'flagvalue')
# Test client side data is incorporated in python expansion (which is done on server)
d2.setVar('FOO', 'bar')
self.assertEqual(d2.expand('${@d.getVar("FOO")}'), 'bar')
# Test overrides work
d1.setVar('FOO_test', 'baz')
d1.appendVar('OVERRIDES', ':test')
self.assertEqual(d2.getVar('FOO'), 'baz')
# Remote equivalents of local test classes
# Note that these aren't perfect since we only test in one direction
class RemoteDataExpansions(DataExpansions):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1["foo"] = "value_of_foo"
self.d1["bar"] = "value_of_bar"
self.d1["value_of_foo"] = "value_of_'value_of_foo'"
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteNestedExpansions(TestNestedExpansions):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1["foo"] = "foo"
self.d1["bar"] = "bar"
self.d1["value_of_foobar"] = "187"
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteConcat(TestConcat):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("VAL", "val")
self.d1.setVar("BAR", "bar")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteConcatOverride(TestConcatOverride):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("VAL", "val")
self.d1.setVar("BAR", "bar")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteOverrides(TestOverrides):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("OVERRIDES", "foo:bar:local")
self.d1.setVar("TEST", "testvalue")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteKeyExpansion(TestKeyExpansion):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("FOO", "foo")
self.d1.setVar("BAR", "foo")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)
class TestRemoteFlags(TestFlags):
def setUp(self):
self.d1 = bb.data.init()
self.d = bb.data.init()
self.d1.setVar("foo", "value of foo")
self.d1.setVarFlag("foo", "flag1", "value of flag1")
self.d1.setVarFlag("foo", "flag2", "value of flag2")
connector = TestConnector(self.d1)
self.d.setVar('_remote_data', connector)


@@ -22,10 +22,8 @@
import unittest
import tempfile
import subprocess
import collections
import os
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
import bb
class URITest(unittest.TestCase):
@@ -134,10 +132,10 @@ class URITest(unittest.TestCase):
'userinfo': 'anoncvs:anonymous',
'username': 'anoncvs',
'password': 'anonymous',
'params': collections.OrderedDict([
('tag', 'V0-99-81'),
('module', 'familiar/dist/ipkg')
]),
'params': {
'tag': 'V0-99-81',
'module': 'familiar/dist/ipkg'
},
'query': {},
'relative': False
},
@@ -229,38 +227,7 @@ class URITest(unittest.TestCase):
'params': {},
'query': {},
'relative': False
},
"http://somesite.net;someparam=1": {
'uri': 'http://somesite.net;someparam=1',
'scheme': 'http',
'hostname': 'somesite.net',
'port': None,
'hostport': 'somesite.net',
'path': '',
'userinfo': '',
'username': '',
'password': '',
'params': {"someparam" : "1"},
'query': {},
'relative': False
},
"file://somelocation;someparam=1": {
'uri': 'file:somelocation;someparam=1',
'scheme': 'file',
'hostname': '',
'port': None,
'hostport': '',
'path': 'somelocation',
'userinfo': '',
'username': '',
'password': '',
'params': {"someparam" : "1"},
'query': {},
'relative': True
}
}
def test_uri(self):
@@ -347,7 +314,6 @@ class URITest(unittest.TestCase):
class FetcherTest(unittest.TestCase):
def setUp(self):
self.origdir = os.getcwd()
self.d = bb.data.init()
self.tempdir = tempfile.mkdtemp()
self.dldir = os.path.join(self.tempdir, "download")
@@ -359,11 +325,7 @@ class FetcherTest(unittest.TestCase):
self.d.setVar("PERSISTENT_DIR", persistdir)
def tearDown(self):
os.chdir(self.origdir)
if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes":
print("Not cleaning up %s. Please remove manually." % self.tempdir)
else:
bb.utils.prunedir(self.tempdir)
bb.utils.prunedir(self.tempdir)
class MirrorUriTest(FetcherTest):
@@ -428,33 +390,11 @@ class MirrorUriTest(FetcherTest):
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///someotherpath/downloads/bitbake-1.0.tar.gz'])
def test_mirror_of_mirror(self):
# Test if mirror of a mirror works
mirrorvar = self.mirrorvar + " http://.*/.* http://otherdownloads.yoctoproject.org/downloads/ \n"
mirrorvar = mirrorvar + " http://otherdownloads.yoctoproject.org/.* http://downloads2.yoctoproject.org/downloads/ \n"
fetcher = bb.fetch.FetchData("http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///somepath/downloads/bitbake-1.0.tar.gz',
'file:///someotherpath/downloads/bitbake-1.0.tar.gz',
'http://otherdownloads.yoctoproject.org/downloads/bitbake-1.0.tar.gz',
'http://downloads2.yoctoproject.org/downloads/bitbake-1.0.tar.gz'])
recmirrorvar = "https://.*/[^/]* http://AAAA/A/A/A/ \n" \
"https://.*/[^/]* https://BBBB/B/B/B/ \n"
def test_recursive(self):
fetcher = bb.fetch.FetchData("https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.recmirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['http://AAAA/A/A/A/bitbake/bitbake-1.0.tar.gz',
'https://BBBB/B/B/B/bitbake/bitbake-1.0.tar.gz',
'http://AAAA/A/A/A/B/B/bitbake/bitbake-1.0.tar.gz'])
class FetcherLocalTest(FetcherTest):
def setUp(self):
def touch(fn):
with open(fn, 'a'):
with file(fn, 'a'):
os.utime(fn, None)
super(FetcherLocalTest, self).setUp()
@@ -486,7 +426,9 @@ class FetcherLocalTest(FetcherTest):
def test_local_wildcard(self):
tree = self.fetchUnpack(['file://a', 'file://dir/*'])
self.assertEqual(tree, ['a', 'dir/c', 'dir/d', 'dir/subdir/e'])
# FIXME: this is broken - it should return ['a', 'dir/c', 'dir/d', 'dir/subdir/e']
# see https://bugzilla.yoctoproject.org/show_bug.cgi?id=6128
self.assertEqual(tree, ['a', 'b', 'dir/c', 'dir/d', 'dir/subdir/e'])
def test_local_dir(self):
tree = self.fetchUnpack(['file://a', 'file://dir'])
@@ -494,29 +436,22 @@ class FetcherLocalTest(FetcherTest):
def test_local_subdir(self):
tree = self.fetchUnpack(['file://dir/subdir'])
self.assertEqual(tree, ['dir/subdir/e'])
# FIXME: this is broken - it should return ['dir/subdir/e']
# see https://bugzilla.yoctoproject.org/show_bug.cgi?id=6129
self.assertEqual(tree, ['subdir/e'])
def test_local_subdir_file(self):
tree = self.fetchUnpack(['file://dir/subdir/e'])
self.assertEqual(tree, ['dir/subdir/e'])
def test_local_subdirparam(self):
tree = self.fetchUnpack(['file://a;subdir=bar', 'file://dir;subdir=foo/moo'])
self.assertEqual(tree, ['bar/a', 'foo/moo/dir/c', 'foo/moo/dir/d', 'foo/moo/dir/subdir/e'])
tree = self.fetchUnpack(['file://a;subdir=bar'])
self.assertEqual(tree, ['bar/a'])
def test_local_deepsubdirparam(self):
tree = self.fetchUnpack(['file://dir/subdir/e;subdir=bar'])
self.assertEqual(tree, ['bar/dir/subdir/e'])
def test_local_absolutedir(self):
# Unpacking to an absolute path that is a subdirectory of the root
# should work
tree = self.fetchUnpack(['file://a;subdir=%s' % os.path.join(self.unpackdir, 'bar')])
# Unpacking to an absolute path outside of the root should fail
with self.assertRaises(bb.fetch2.UnpackError):
self.fetchUnpack(['file://a;subdir=/bin/sh'])
class FetcherNetworkTest(FetcherTest):
if os.environ.get("BB_SKIP_NETTESTS") == "yes":
@@ -540,19 +475,6 @@ class FetcherNetworkTest(FetcherTest):
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def test_fetch_mirror_of_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def test_fetch_file_mirror_of_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
os.mkdir(self.dldir + "/some2where")
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def test_fetch_premirror(self):
self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
@@ -597,36 +519,6 @@ class FetcherNetworkTest(FetcherTest):
url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
def test_gitfetch_localusehead(self):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
src_dir = os.path.abspath(src_dir)
bb.process.run("git init", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit'",
cwd=src_dir)
# Use other branch than master
bb.process.run("git checkout -b my-devel", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
cwd=src_dir)
stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
orig_rev = stdout[0].strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
url = "git://" + src_dir + ";protocol=file;usehead=1"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
stdout = bb.process.run("git rev-parse HEAD",
cwd=os.path.join(self.unpackdir, 'git'))
unpack_rev = stdout[0].strip()
self.assertEqual(orig_rev, unpack_rev)
def test_gitfetch_remoteusehead(self):
url = "git://git.openembedded.org/bitbake;usehead=1"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
def test_gitfetch_premirror(self):
url1 = "git://git.openembedded.org/bitbake"
url2 = "git://someserver.org/bitbake"
@@ -654,68 +546,17 @@ class FetcherNetworkTest(FetcherTest):
os.chdir(os.path.dirname(self.unpackdir))
fetcher.unpack(self.unpackdir)
class TrustedNetworksTest(FetcherTest):
def test_trusted_network(self):
# Ensure trusted_network returns False when the host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_wild_trusted_network(self):
# Ensure trusted_network returns true when the *.host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_two_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://something.git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_port_trusted_network(self):
# Ensure trusted_network returns True, even if the url specifies a port.
url = "git://someserver.org:8080/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "someserver.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
def test_wild_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://*.someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
class URLHandle(unittest.TestCase):
datatable = {
"http://www.google.com/index.html" : ('http', 'www.google.com', '/index.html', '', '', {}),
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', collections.OrderedDict([('tag', 'V0-99-81'), ('module', 'familiar/dist/ipkg')])),
"git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'}),
"file://somelocation;someparam=1": ('file', '', 'somelocation', '', '', {'someparam': '1'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}),
"git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'})
}
# we require a pathname to encodeurl but users can still pass such urls to
# decodeurl and we need to handle them
decodedata = datatable.copy()
decodedata.update({
"http://somesite.net;someparam=1": ('http', 'somesite.net', '', '', '', {'someparam': '1'}),
})
def test_decodeurl(self):
for k, v in self.decodedata.items():
for k, v in self.datatable.items():
result = bb.fetch.decodeurl(k)
self.assertEqual(result, v)
@@ -724,131 +565,5 @@ class URLHandle(unittest.TestCase):
result = bb.fetch.encodeurl(v)
self.assertEqual(result, k)
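The round-trip being tested: decodeurl splits a URL into (scheme, host, path, user, password, params) and encodeurl reassembles it, so encode(decode(u)) == u must hold for every datatable entry. That is also why one side of this diff uses collections.OrderedDict for params: on older Pythons a plain dict would not preserve ';tag=...;module=...' ordering and the re-encoded string could differ. A toy version of just the params round-trip:
def encode_params(path, params):
    # ';key=value' suffixes as in the cvs:// entries above.
    return path + ''.join(';%s=%s' % (k, v) for k, v in params.items())

def decode_params(s):
    path, _, rest = s.partition(';')
    params = dict(p.split('=', 1) for p in rest.split(';')) if rest else {}
    return path, params

u = encode_params('/cvs', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
print(u)                 # /cvs;tag=V0-99-81;module=familiar/dist/ipkg
print(decode_params(u))  # ('/cvs', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})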
class FetchLatestVersionTest(FetcherTest):
test_git_uris = {
# version pattern "X.Y.Z"
("mx-1.0", "git://github.com/clutter-project/mx.git;branch=mx-1.4", "9b1db6b8060bd00b121a692f942404a24ae2960f", "")
: "1.99.4",
# version pattern "vX.Y"
("mtd-utils", "git://git.infradead.org/mtd-utils.git", "ca39eb1d98e736109c64ff9c1aa2a6ecca222d8f", "")
: "1.5.0",
# version pattern "pkg_name-X.Y"
("presentproto", "git://anongit.freedesktop.org/git/xorg/proto/presentproto", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "")
: "1.0",
# version pattern "pkg_name-vX.Y.Z"
("dtc", "git://git.qemu.org/dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
: "1.4.0",
# combination version pattern
("sysprof", "git://git.gnome.org/sysprof", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
: "1.2.0",
("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
: "2014.01",
# version pattern "yyyymmdd"
("mobile-broadband-provider-info", "git://git.gnome.org/mobile-broadband-provider-info", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
("xf86-video-omap", "git://anongit.freedesktop.org/xorg/driver/xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
: "0.4.3",
("build-appliance-image", "git://git.yoctoproject.org/poky", "b37dd451a52622d5b570183a81583cc34c2ff555", "(?P<pver>(([0-9][\.|_]?)+[0-9]))")
: "11.0.0",
("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot", "cd437ecbd8986c894442f8fce1e0061e20f04dee", "chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
: "1.3.59",
("remake", "git://github.com/rocky/remake.git", "f05508e521987c8494c92d9c2871aec46307d51d", "(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
: "3.82+dbg0.9",
}
test_wget_uris = {
# packages with versions inside directory name
("util-linux", "http://kernel.org/pub/linux/utils/util-linux/v2.23/util-linux-2.24.2.tar.bz2", "", "")
: "2.24.2",
("enchant", "http://www.abisource.com/downloads/enchant/1.6.0/enchant-1.6.0.tar.gz", "", "")
: "1.6.0",
("cmake", "http://www.cmake.org/files/v2.8/cmake-2.8.12.1.tar.gz", "", "")
: "2.8.12.1",
# packages with versions only in current directory
("eglic", "http://downloads.yoctoproject.org/releases/eglibc/eglibc-2.18-svnr23787.tar.bz2", "", "")
: "2.19",
("gnu-config", "http://downloads.yoctoproject.org/releases/gnu-config/gnu-config-20120814.tar.bz2", "", "")
: "20120814",
# packages with "99" in the name of possible version
("pulseaudio", "http://freedesktop.org/software/pulseaudio/releases/pulseaudio-4.0.tar.xz", "", "")
: "5.0",
("xserver-xorg", "http://xorg.freedesktop.org/releases/individual/xserver/xorg-server-1.15.1.tar.bz2", "", "")
: "1.15.1",
# packages with valid UPSTREAM_CHECK_URI and UPSTREAM_CHECK_REGEX
("cups", "http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2", "https://github.com/apple/cups/releases", "(?P<name>cups\-)(?P<pver>((\d+[\.\-_]*)+))\-source\.tar\.gz")
: "2.0.0",
("db", "http://download.oracle.com/berkeley-db/db-5.3.21.tar.gz", "http://www.oracle.com/technetwork/products/berkeleydb/downloads/index-082944.html", "http://download.oracle.com/otn/berkeley-db/(?P<name>db-)(?P<pver>((\d+[\.\-_]*)+))\.tar\.gz")
: "6.1.19",
}
if os.environ.get("BB_SKIP_NETTESTS") == "yes":
print("Unset BB_SKIP_NETTESTS to run network tests")
else:
def test_git_latest_versionstring(self):
for k, v in self.test_git_uris.items():
self.d.setVar("PN", k[0])
self.d.setVar("SRCREV", k[2])
self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver= ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
def test_wget_latest_versionstring(self):
for k, v in self.test_wget_uris.items():
self.d.setVar("PN", k[0])
self.d.setVar("UPSTREAM_CHECK_URI", k[2])
self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
class FetchCheckStatusTest(FetcherTest):
test_wget_uris = ["http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2",
"http://www.cups.org/",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
"https://yoctoproject.org/",
"https://yoctoproject.org/documentation",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
"ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
"ftp://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
"ftp://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
# GitHub releases are hosted on Amazon S3, which doesn't support HEAD
"https://github.com/kergoth/tslib/releases/download/1.1/tslib-1.1.tar.xz"
]
if os.environ.get("BB_SKIP_NETTESTS") == "yes":
print("Unset BB_SKIP_NETTESTS to run network tests")
else:
def test_wget_checkstatus(self):
fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
for u in self.test_wget_uris:
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))
def test_wget_checkstatus_connection_cache(self):
from bb.fetch2 import FetchConnectionCache
connection_cache = FetchConnectionCache()
fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
connection_cache = connection_cache)
for u in self.test_wget_uris:
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))
connection_cache.close_connections()


@@ -1,164 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Test for lib/bb/parse/
#
# Copyright (C) 2015 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import tempfile
import logging
import bb
import os
logger = logging.getLogger('BitBake.TestParse')
import bb.parse
import bb.data
import bb.siggen
class ParseTest(unittest.TestCase):
testfile = """
A = "1"
B = "2"
do_install() {
echo "hello"
}
C = "3"
"""
def setUp(self):
self.d = bb.data.init()
bb.parse.siggen = bb.siggen.init(self.d)
def parsehelper(self, content, suffix = ".bb"):
f = tempfile.NamedTemporaryFile(suffix = suffix)
f.write(bytes(content, "utf-8"))
f.flush()
os.chdir(os.path.dirname(f.name))
return f
def test_parse_simple(self):
f = self.parsehelper(self.testfile)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A"), "1")
self.assertEqual(d.getVar("B"), "2")
self.assertEqual(d.getVar("C"), "3")
def test_parse_incomplete_function(self):
testfileB = self.testfile.replace("}", "")
f = self.parsehelper(testfileB)
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']
unsettest = """
A = "1"
B = "2"
B[flag] = "3"
unset A
unset B[flag]
"""
def test_parse_unset(self):
f = self.parsehelper(self.unsettest)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A"), None)
self.assertEqual(d.getVarFlag("A","flag"), None)
self.assertEqual(d.getVar("B"), "2")
overridetest = """
RRECOMMENDS_${PN} = "a"
RRECOMMENDS_${PN}_libc = "b"
OVERRIDES = "libc:${PN}"
PN = "gtk+"
"""
def test_parse_overrides(self):
f = self.parsehelper(self.overridetest)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
bb.data.expandKeys(d)
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
d.setVar("RRECOMMENDS_gtk+", "c")
self.assertEqual(d.getVar("RRECOMMENDS"), "c")
overridetest2 = """
EXTRA_OECONF = ""
EXTRA_OECONF_class-target = "b"
EXTRA_OECONF_append = " c"
"""
def test_parse_overrides2(self):
f = self.parsehelper(self.overridetest2)
d = bb.parse.handle(f.name, self.d)['']
d.appendVar("EXTRA_OECONF", " d")
d.setVar("OVERRIDES", "class-target")
self.assertEqual(d.getVar("EXTRA_OECONF"), "b c d")
overridetest3 = """
DESCRIPTION = "A"
DESCRIPTION_${PN}-dev = "${DESCRIPTION} B"
PN = "bc"
"""
def test_parse_combinations(self):
f = self.parsehelper(self.overridetest3)
d = bb.parse.handle(f.name, self.d)['']
bb.data.expandKeys(d)
self.assertEqual(d.getVar("DESCRIPTION_bc-dev"), "A B")
d.setVar("DESCRIPTION", "E")
d.setVar("DESCRIPTION_bc-dev", "C D")
d.setVar("OVERRIDES", "bc-dev")
self.assertEqual(d.getVar("DESCRIPTION"), "C D")
classextend = """
VAR_var_override1 = "B"
EXTRA = ":override1"
OVERRIDES = "nothing${EXTRA}"
BBCLASSEXTEND = "###CLASS###"
"""
classextend_bbclass = """
EXTRA = ""
python () {
d.renameVar("VAR_var", "VAR_var2")
}
"""
#
# Test based upon a real-world data corruption issue: a variable
# changed in one data store poked through into a different data
# store. This test case replicates that issue, where the value 'B'
# would become unset/disappear.
#
def test_parse_classextend_contamination(self):
cls = self.parsehelper(self.classextend_bbclass, suffix=".bbclass")
#clsname = os.path.basename(cls.name).replace(".bbclass", "")
self.classextend = self.classextend.replace("###CLASS###", cls.name)
f = self.parsehelper(self.classextend)
alldata = bb.parse.handle(f.name, self.d)
d1 = alldata['']
d2 = alldata[cls.name]
self.assertEqual(d1.getVar("VAR_var"), "B")
self.assertEqual(d2.getVar("VAR_var"), None)


@@ -22,8 +22,6 @@
import unittest
import bb
import os
import tempfile
import re
class VerCmpString(unittest.TestCase):
@@ -38,10 +36,6 @@ class VerCmpString(unittest.TestCase):
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1.1', '1_p2')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1.0', '1.0+1.1-beta1')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1.1', '1.0+1.1-beta1')
self.assertTrue(result > 0)
def test_explode_dep_versions(self):
correctresult = {"foo" : ["= 1.10"]}
@@ -107,497 +101,3 @@ class Path(unittest.TestCase):
for arg1, correctresult in checkitems:
result = bb.utils._check_unsafe_delete_path(arg1)
self.assertEqual(result, correctresult, '_check_unsafe_delete_path("%s") != %s' % (arg1, correctresult))
class EditMetadataFile(unittest.TestCase):
_origfile = """
# A comment
HELLO = "oldvalue"
THIS = "that"
# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'
MULTILINE = "a1 \\
a2 \\
a3"
MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"
MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"
do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
def _testeditfile(self, varvalues, compareto, dummyvars=None):
if dummyvars is None:
dummyvars = []
with tempfile.NamedTemporaryFile('w', delete=False) as tf:
tf.write(self._origfile)
tf.close()
try:
varcalls = []
def handle_file(varname, origvalue, op, newlines):
self.assertIn(varname, varvalues, 'Callback called for variable %s not in the list!' % varname)
self.assertNotIn(varname, dummyvars, 'Callback called for variable %s in dummy list!' % varname)
varcalls.append(varname)
return varvalues[varname]
bb.utils.edit_metadata_file(tf.name, varvalues.keys(), handle_file)
with open(tf.name) as f:
modfile = f.readlines()
# Ensure the output matches the expected output
self.assertEqual(compareto.splitlines(True), modfile)
# Ensure the callback function was called for every variable we asked for
# (plus allow testing behaviour when a requested variable is not present)
self.assertEqual(sorted(varvalues.keys()), sorted(varcalls + dummyvars))
finally:
os.remove(tf.name)
def test_edit_metadata_file_nochange(self):
# Test file doesn't get modified with nothing to do
self._testeditfile({}, self._origfile)
# Test file doesn't get modified with only dummy variables
self._testeditfile({'DUMMY1': ('should_not_set', None, 0, True),
'DUMMY2': ('should_not_set_again', None, 0, True)}, self._origfile, dummyvars=['DUMMY1', 'DUMMY2'])
# Test file doesn't get modified when some values are already the same
self._testeditfile({'THIS': ('that', None, 0, True),
'OTHER': ('anothervalue', None, 0, True),
'MULTILINE3': (' c1 c2 c3 ', None, 4, False)}, self._origfile)
def test_edit_metadata_file_1(self):
newfile1 = """
# A comment
HELLO = "newvalue"
THIS = "that"
# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'
MULTILINE = "a1 \\
a2 \\
a3"
MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"
MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"
do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
self._testeditfile({'HELLO': ('newvalue', None, 4, True)}, newfile1)
def test_edit_metadata_file_2(self):
newfile2 = """
# A comment
HELLO = "oldvalue"
THIS = "that"
# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'
MULTILINE = " \\
d1 \\
d2 \\
d3 \\
"
MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"
MULTILINE3 = "nowsingle"
do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
self._testeditfile({'MULTILINE': (['d1','d2','d3'], None, 4, False),
'MULTILINE3': ('nowsingle', None, 4, True),
'NOTPRESENT': (['a', 'b'], None, 4, False)}, newfile2, dummyvars=['NOTPRESENT'])
def test_edit_metadata_file_3(self):
newfile3 = """
# A comment
HELLO = "oldvalue"
# Another comment
NOCHANGE = "samevalue"
OTHER = "yetanothervalue"
MULTILINE = "e1 \\
e2 \\
e3 \\
"
MULTILINE2 := "f1 \\
\tf2 \\
\t"
MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"
do_functionname() {
othercommand_one a b c
othercommand_two d e f
}
"""
self._testeditfile({'do_functionname()': (['othercommand_one a b c', 'othercommand_two d e f'], None, 4, False),
'MULTILINE2': (['f1', 'f2'], None, '\t', True),
'MULTILINE': (['e1', 'e2', 'e3'], None, -1, True),
'THIS': (None, None, 0, False),
'OTHER': ('yetanothervalue', None, 0, True)}, newfile3)
def test_edit_metadata_file_4(self):
newfile4 = """
# A comment
HELLO = "oldvalue"
THIS = "that"
# Another comment
OTHER = 'anothervalue'
MULTILINE = "a1 \\
a2 \\
a3"
MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"
"""
self._testeditfile({'NOCHANGE': (None, None, 0, False),
'MULTILINE3': (None, None, 0, False),
'THIS': ('that', None, 0, False),
'do_functionname()': (None, None, 0, False)}, newfile4)
def test_edit_metadata(self):
newfile5 = """
# A comment
HELLO = "hithere"
# A new comment
THIS += "that"
# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'
MULTILINE = "a1 \\
a2 \\
a3"
MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"
MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"
NEWVAR = "value"
do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
def handle_var(varname, origvalue, op, newlines):
if varname == 'THIS':
newlines.append('# A new comment\n')
elif varname == 'do_functionname()':
newlines.append('NEWVAR = "value"\n')
newlines.append('\n')
valueitem = varvalues.get(varname, None)
if valueitem:
return valueitem
else:
return (origvalue, op, 0, True)
varvalues = {'HELLO': ('hithere', None, 0, True), 'THIS': ('that', '+=', 0, True)}
varlist = ['HELLO', 'THIS', 'do_functionname()']
(updated, newlines) = bb.utils.edit_metadata(self._origfile.splitlines(True), varlist, handle_var)
self.assertTrue(updated, 'List should be updated but isn\'t')
self.assertEqual(newlines, newfile5.splitlines(True))
# Make sure the orig value matches what we expect it to be
def test_edit_metadata_origvalue(self):
origfile = """
MULTILINE = " stuff \\
morestuff"
"""
expected_value = "stuff morestuff"
global value_in_callback
value_in_callback = ""
def handle_var(varname, origvalue, op, newlines):
global value_in_callback
value_in_callback = origvalue
return (origvalue, op, -1, False)
bb.utils.edit_metadata(origfile.splitlines(True),
['MULTILINE'],
handle_var)
testvalue = re.sub(r'\s+', ' ', value_in_callback.strip())
self.assertEqual(expected_value, testvalue)
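For reference, the callback contract these tests exercise: bb.utils.edit_metadata() invokes the supplied function once per matched variable with (varname, origvalue, op, newlines) and expects a (value, op, indent, minbreak) tuple back, returning (updated, newlines). A minimal sketch; the variable name and new value are hypothetical:

def bump_version(varname, origvalue, op, newlines):
    # Return (new value, operator, indentation, single-line flag)
    return ("2.0", op, 0, True)

lines = ['PV = "1.0"\n']
updated, newlines = bb.utils.edit_metadata(lines, ["PV"], bump_version)
# updated should be True; newlines should now contain 'PV = "2.0"\n'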
class EditBbLayersConf(unittest.TestCase):
def _test_bblayers_edit(self, before, after, add, remove, notadded, notremoved):
with tempfile.NamedTemporaryFile('w', delete=False) as tf:
tf.write(before)
tf.close()
try:
actual_notadded, actual_notremoved = bb.utils.edit_bblayers_conf(tf.name, add, remove)
with open(tf.name) as f:
actual_after = f.readlines()
self.assertEqual(after.splitlines(True), actual_after)
self.assertEqual(notadded, actual_notadded)
self.assertEqual(notremoved, actual_notremoved)
finally:
os.remove(tf.name)
def test_bblayers_remove(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
self._test_bblayers_edit(before, after,
None,
'/home/user/path/layer2',
[],
[])
def test_bblayers_add(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
/other/path/to/layer5 \
"
"""
self._test_bblayers_edit(before, after,
'/other/path/to/layer5/',
None,
[],
[])
def test_bblayers_add_remove(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/layer4 \
/other/path/to/layer5 \
"
"""
self._test_bblayers_edit(before, after,
['/other/path/to/layer5', '/home/user/path/layer2/'], '/home/user/path/subpath/layer3/',
['/home/user/path/layer2'],
[])
def test_bblayers_add_remove_home(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
~/path/layer1 \
~/path/layer2 \
~/otherpath/layer3 \
~/path/layer4 \
"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
~/path/layer2 \
~/path/layer4 \
~/path2/layer5 \
"
"""
self._test_bblayers_edit(before, after,
[os.environ['HOME'] + '/path/layer4', '~/path2/layer5'],
[os.environ['HOME'] + '/otherpath/layer3', '~/path/layer1', '~/path/notinlist'],
[os.environ['HOME'] + '/path/layer4'],
['~/path/notinlist'])
def test_bblayers_add_remove_plusequals(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer1 \
/home/user/path/layer2 \
"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
"""
self._test_bblayers_edit(before, after,
'/home/user/path/layer3',
'/home/user/path/layer1',
[],
[])
def test_bblayers_add_remove_plusequals2(self):
before = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
BBLAYERS += "/home/user/path/layer4"
BBLAYERS += "/home/user/path/layer5"
"""
after = r"""
# A comment
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
BBLAYERS += "/home/user/path/layer5"
BBLAYERS += "/home/user/otherpath/layer6"
"""
self._test_bblayers_edit(before, after,
['/home/user/otherpath/layer6', '/home/user/path/layer3'], ['/home/user/path/layer1', '/home/user/path/layer4', '/home/user/path/layer7'],
['/home/user/path/layer3'],
['/home/user/path/layer7'])
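As the _test_bblayers_edit helper shows, bb.utils.edit_bblayers_conf() takes the conf file path plus layers to add and remove (each a single path, a list, or None), edits BBLAYERS in place and returns the (notadded, notremoved) lists. A minimal sketch with hypothetical paths:

notadded, notremoved = bb.utils.edit_bblayers_conf(
    "conf/bblayers.conf",            # hypothetical path
    "/path/to/meta-newlayer",        # layer(s) to add
    "/path/to/meta-oldlayer")        # layer(s) to remove
if notremoved:
    print("Layers not found for removal: %s" % notremoved)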


@@ -1,6 +1,6 @@
# tinfoil: a simple wrapper around cooker for bitbake-based command-line utilities
#
# Copyright (C) 2012-2017 Intel Corporation
# Copyright (C) 2012 Intel Corporation
# Copyright (C) 2011 Mentor Graphics Corporation
#
# This program is free software; you can redistribute it and/or modify
@@ -17,466 +17,88 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import warnings
import os
import sys
import atexit
import re
from collections import OrderedDict, defaultdict
import bb.cache
import bb.cooker
import bb.providers
import bb.taskdata
import bb.utils
import bb.command
import bb.remotedata
from bb.cooker import state, BBCooker, CookerFeatures
from bb.cookerdata import CookerConfiguration, ConfigParameters
from bb.main import setup_bitbake, BitBakeConfigParameters, BBMainException
import bb.fetch2
# We need this in order to shut down the connection to the bitbake server,
# otherwise the process will never properly exit
_server_connections = []
def _terminate_connections():
for connection in _server_connections:
connection.terminate()
atexit.register(_terminate_connections)
class TinfoilUIException(Exception):
"""Exception raised when the UI returns non-zero from its main function"""
def __init__(self, returncode):
self.returncode = returncode
def __repr__(self):
return 'UI module main returned %d' % self.returncode
class TinfoilCommandFailed(Exception):
"""Exception raised when run_command fails"""
class TinfoilDataStoreConnector:
def __init__(self, tinfoil, dsindex):
self.tinfoil = tinfoil
self.dsindex = dsindex
def getVar(self, name):
value = self.tinfoil.run_command('dataStoreConnectorFindVar', self.dsindex, name)
overrides = None
if isinstance(value, dict):
if '_connector_origtype' in value:
value['_content'] = self.tinfoil._reconvert_type(value['_content'], value['_connector_origtype'])
del value['_connector_origtype']
if '_connector_overrides' in value:
overrides = value['_connector_overrides']
del value['_connector_overrides']
return value, overrides
def getKeys(self):
return set(self.tinfoil.run_command('dataStoreConnectorGetKeys', self.dsindex))
def getVarHistory(self, name):
return self.tinfoil.run_command('dataStoreConnectorGetVarHistory', self.dsindex, name)
def expandPythonRef(self, varname, expr, d):
ds = bb.remotedata.RemoteDatastores.transmit_datastore(d)
ret = self.tinfoil.run_command('dataStoreConnectorExpandPythonRef', ds, varname, expr)
return ret
def setVar(self, varname, value):
if self.dsindex is None:
self.tinfoil.run_command('setVariable', varname, value)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def setVarFlag(self, varname, flagname, value):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorSetVarFlag', self.dsindex, varname, flagname, value)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def delVar(self, varname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorDelVar', self.dsindex, varname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def delVarFlag(self, varname, flagname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorDelVar', self.dsindex, varname, flagname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
def renameVar(self, name, newname):
if self.dsindex is None:
self.tinfoil.run_command('dataStoreConnectorRenameVar', self.dsindex, name, newname)
else:
# Not currently implemented - indicate that setting should
# be redirected to local side
return True
class TinfoilCookerAdapter:
"""
Provide an adapter for existing code that expects to access a cooker object via Tinfoil;
since Tinfoil now runs on the client side, it no longer has direct access to the cooker.
"""
class TinfoilCookerCollectionAdapter:
""" cooker.collection adapter """
def __init__(self, tinfoil):
self.tinfoil = tinfoil
def get_file_appends(self, fn):
return self.tinfoil.get_file_appends(fn)
def __getattr__(self, name):
if name == 'overlayed':
return self.tinfoil.get_overlayed_recipes()
elif name == 'bbappends':
return self.tinfoil.run_command('getAllAppends')
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
class TinfoilRecipeCacheAdapter:
""" cooker.recipecache adapter """
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self._cache = {}
def get_pkg_pn_fn(self):
pkg_pn = defaultdict(list, self.tinfoil.run_command('getRecipes') or [])
pkg_fn = {}
for pn, fnlist in pkg_pn.items():
for fn in fnlist:
pkg_fn[fn] = pn
self._cache['pkg_pn'] = pkg_pn
self._cache['pkg_fn'] = pkg_fn
def __getattr__(self, name):
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
elif name == 'pkg_pn':
self.get_pkg_pn_fn()
return self._cache[name]
elif name == 'pkg_fn':
self.get_pkg_pn_fn()
return self._cache[name]
elif name == 'deps':
attrvalue = defaultdict(list, self.tinfoil.run_command('getRecipeDepends') or [])
elif name == 'rundeps':
attrvalue = defaultdict(lambda: defaultdict(list), self.tinfoil.run_command('getRuntimeDepends') or [])
elif name == 'runrecs':
attrvalue = defaultdict(lambda: defaultdict(list), self.tinfoil.run_command('getRuntimeRecommends') or [])
elif name == 'pkg_pepvpr':
attrvalue = self.tinfoil.run_command('getRecipeVersions') or {}
elif name == 'inherits':
attrvalue = self.tinfoil.run_command('getRecipeInherits') or {}
elif name == 'bbfile_priority':
attrvalue = self.tinfoil.run_command('getBbFilePriority') or {}
elif name == 'pkg_dp':
attrvalue = self.tinfoil.run_command('getDefaultPreference') or {}
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
self._cache[name] = attrvalue
return attrvalue
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self.collection = self.TinfoilCookerCollectionAdapter(tinfoil)
self.recipecaches = {}
# FIXME all machines
self.recipecaches[''] = self.TinfoilRecipeCacheAdapter(tinfoil)
self._cache = {}
def __getattr__(self, name):
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
elif name == 'skiplist':
attrvalue = self.tinfoil.get_skipped_recipes()
elif name == 'bbfile_config_priorities':
ret = self.tinfoil.run_command('getLayerPriorities')
bbfile_config_priorities = []
for collection, pattern, regex, pri in ret:
bbfile_config_priorities.append((collection, pattern, re.compile(regex), pri))
attrvalue = bbfile_config_priorities
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
self._cache[name] = attrvalue
return attrvalue
def findBestProvider(self, pn):
return self.tinfoil.find_best_provider(pn)
class Tinfoil:
def __init__(self, output=sys.stdout, tracking=False):
# Needed to avoid deprecation warnings with python 2.6
warnings.filterwarnings("ignore", category=DeprecationWarning)
def __init__(self, output=sys.stdout, tracking=False, setup_logging=True):
# Set up logging
self.logger = logging.getLogger('BitBake')
self.config_data = None
self.cooker = None
self.tracking = tracking
self.ui_module = None
self.server_connection = None
if setup_logging:
# This is the *client-side* logger, nothing to do with
# logging messages from the server
bb.msg.logger_create('BitBake', output)
console = logging.StreamHandler(output)
bb.msg.addDefaultlogFilter(console)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
self.logger.addHandler(console)
def __enter__(self):
return self
self.config = CookerConfiguration()
configparams = TinfoilConfigParameters(parse_only=True)
self.config.setConfigParameters(configparams)
self.config.setServerRegIdleCallback(self.register_idle_function)
features = []
if tracking:
features.append(CookerFeatures.BASEDATASTORE_TRACKING)
self.cooker = BBCooker(self.config, features)
self.config_data = self.cooker.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
def __exit__(self, type, value, traceback):
self.shutdown()
def prepare(self, config_only=False, config_params=None, quiet=0):
if self.tracking:
extrafeatures = [bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
else:
extrafeatures = []
if not config_params:
config_params = TinfoilConfigParameters(config_only=config_only, quiet=quiet)
cookerconfig = CookerConfiguration()
cookerconfig.setConfigParameters(config_params)
server, self.server_connection, ui_module = setup_bitbake(config_params,
cookerconfig,
extrafeatures)
self.ui_module = ui_module
# Ensure the path to bitbake's bin directory is in PATH so that things like
# bitbake-worker can be run (usually this is the case, but it doesn't have to be)
path = os.getenv('PATH').split(':')
bitbakebinpath = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'bin'))
for entry in path:
if entry.endswith(os.sep):
entry = entry[:-1]
if os.path.abspath(entry) == bitbakebinpath:
break
else:
path.insert(0, bitbakebinpath)
os.environ['PATH'] = ':'.join(path)
if self.server_connection:
_server_connections.append(self.server_connection)
if config_only:
config_params.updateToServer(self.server_connection.connection, os.environ.copy())
self.run_command('parseConfiguration')
else:
self.run_actions(config_params)
self.config_data = bb.data.init()
connector = TinfoilDataStoreConnector(self, None)
self.config_data.setVar('_remote_data', connector)
self.cooker = TinfoilCookerAdapter(self)
self.cooker_data = self.cooker.recipecaches['']
else:
raise Exception('Failed to start bitbake server')
def run_actions(self, config_params):
"""
Run the actions specified in config_params through the UI.
"""
ret = self.ui_module.main(self.server_connection.connection, self.server_connection.events, config_params)
if ret:
raise TinfoilUIException(ret)
def register_idle_function(self, function, data):
pass
def parseRecipes(self):
"""
Force a parse of all recipes. Normally you should specify
config_only=False when calling prepare() instead of using this
function; this function is designed for situations where you need
to initialise Tinfoil and use it with config_only=True first and
then conditionally call this function to parse recipes later.
"""
config_params = TinfoilConfigParameters(config_only=False)
self.run_actions(config_params)
sys.stderr.write("Parsing recipes..")
self.logger.setLevel(logging.WARNING)
def run_command(self, command, *params):
"""
Run a command on the server (as implemented in bb.command).
Note that there are two types of command - synchronous and
asynchronous; in order to receive the results of asynchronous
commands you will need to set an appropriate event mask
using set_event_mask() and listen for the result using
wait_event() - with the correct event mask you'll at least get
bb.command.CommandCompleted and possibly other events before
that depending on the command.
"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
commandline = [command]
if params:
commandline.extend(params)
result = self.server_connection.connection.runCommand(commandline)
if result[1]:
raise TinfoilCommandFailed(result[1])
return result[0]
self.logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
def set_event_mask(self, eventlist):
"""Set the event mask which will be applied within wait_event()"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
llevel, debug_domains = bb.msg.constructLogOptions()
ret = self.run_command('setEventMask', self.server_connection.connection.getEventHandle(), llevel, debug_domains, eventlist)
if not ret:
raise Exception('setEventMask failed')
self.cooker_data = self.cooker.recipecache
def wait_event(self, timeout=0):
"""
Wait for an event from the server for the specified time.
A timeout of 0 means don't wait if there are no events in the queue.
Returns the next event in the queue or None if the timeout was
reached. Note that in order to receive any events you will
first need to set the internal event mask using set_event_mask()
(otherwise whatever event mask the UI set up will be in effect).
"""
if not self.server_connection:
raise Exception('Not connected to server (did you call .prepare()?)')
return self.server_connection.events.waitEvent(timeout)
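Putting run_command(), set_event_mask() and wait_event() together for an asynchronous command, as the docstrings above describe: set a mask that includes bb.command.CommandCompleted, fire the command, then poll for the completion event. A sketch, assuming the mask takes fully-qualified event class names and using a hypothetical recipe file name:

tinfoil.set_event_mask(['bb.command.CommandCompleted',
                        'bb.command.CommandFailed'])
tinfoil.run_command('buildFile', 'example.bb', 'build', True)  # asynchronous
while True:
    event = tinfoil.wait_event(timeout=0.25)
    if isinstance(event, bb.command.CommandCompleted):
        break
    if isinstance(event, bb.command.CommandFailed):
        raise Exception('build failed: %s' % event)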
def get_overlayed_recipes(self):
return defaultdict(list, self.run_command('getOverlayedRecipes'))
def get_skipped_recipes(self):
return OrderedDict(self.run_command('getSkippedRecipes'))
def get_all_providers(self):
return defaultdict(list, self.run_command('allProviders'))
def find_providers(self):
return self.run_command('findProviders')
def find_best_provider(self, pn):
return self.run_command('findBestProvider', pn)
def get_runtime_providers(self, rdep):
return self.run_command('getRuntimeProviders', rdep)
def get_recipe_file(self, pn):
"""
Get the file name for the specified recipe/target. Raises
bb.providers.NoProvider if there is no match or the recipe was
skipped.
"""
best = self.find_best_provider(pn)
if not best or (len(best) > 3 and not best[3]):
skiplist = self.get_skipped_recipes()
taskdata = bb.taskdata.TaskData(None, skiplist=skiplist)
skipreasons = taskdata.get_reasons(pn)
if skipreasons:
raise bb.providers.NoProvider('%s is unavailable:\n %s' % (pn, ' \n'.join(skipreasons)))
def prepare(self, config_only = False):
if not self.cooker_data:
if config_only:
self.cooker.parseConfiguration()
self.cooker_data = self.cooker.recipecache
else:
raise bb.providers.NoProvider('Unable to find any recipe file matching "%s"' % pn)
return best[3]
def get_file_appends(self, fn):
return self.run_command('getFileAppends', fn)
def parse_recipe(self, pn):
"""
Parse the specified recipe and return a datastore object
representing the environment for the recipe.
"""
fn = self.get_recipe_file(pn)
return self.parse_recipe_file(fn)
def parse_recipe_file(self, fn, appends=True, appendlist=None, config_data=None):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
Parameters:
fn: recipe file to parse - can be a file path or virtual
specification
appends: True to apply bbappends, False otherwise
appendlist: optional list of bbappend files to apply, if you
want to filter them
config_data: custom config datastore to use. NOTE: if you
specify config_data then you cannot use a virtual
specification for fn.
"""
if appends and appendlist == []:
appends = False
if config_data:
dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
else:
dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
if dscon:
return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
else:
return None
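A typical client-side use of the parsing API above: start Tinfoil, run a full parse, then fetch a recipe's datastore. A sketch; the recipe name is hypothetical:

import bb.tinfoil

with bb.tinfoil.Tinfoil() as tinfoil:
    tinfoil.prepare(config_only=False)   # full parse so recipes are known
    d = tinfoil.parse_recipe('busybox')  # hypothetical recipe name
    print(d.getVar('PV'))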
def build_file(self, buildfile, task):
"""
Runs the specified task for just a single recipe (i.e. no dependencies).
This is equivalent to bitbake -b, except no warning will be printed.
"""
return self.run_command('buildFile', buildfile, task, True)
self.parseRecipes()
def shutdown(self):
if self.server_connection:
self.run_command('clientComplete')
_server_connections.remove(self.server_connection)
bb.event.ui_queue = []
self.server_connection.terminate()
self.server_connection = None
self.cooker.shutdown(force=True)
self.cooker.post_serve()
self.cooker.unlockBitbake()
def _reconvert_type(self, obj, origtypename):
"""
Convert an object back to the right type, in the case
that marshalling has changed it (especially with xmlrpc)
"""
supported_types = {
'set': set,
'DataStoreConnectionHandle': bb.command.DataStoreConnectionHandle,
}
class TinfoilConfigParameters(ConfigParameters):
origtype = supported_types.get(origtypename, None)
if origtype is None:
raise Exception('Unsupported type "%s"' % origtypename)
if type(obj) == origtype:
newobj = obj
elif isinstance(obj, dict):
# New style class
newobj = origtype()
for k,v in obj.items():
setattr(newobj, k, v)
else:
# Assume we can coerce the type
newobj = origtype(obj)
if isinstance(newobj, bb.command.DataStoreConnectionHandle):
connector = TinfoilDataStoreConnector(self, newobj.dsindex)
newobj = bb.data.init()
newobj.setVar('_remote_data', connector)
return newobj
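The helper above exists because XML-RPC marshalling flattens some types on the wire: a set arrives as a list, and a DataStoreConnectionHandle as a dict. A sketch of the round trip for the 'set' case (the tinfoil instance is assumed):

# Over the connection a set is marshalled to a list...
marshalled = ['foo', 'bar']
# ..._reconvert_type coerces it back to the original type.
restored = tinfoil._reconvert_type(marshalled, 'set')
assert restored == {'foo', 'bar'}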
class TinfoilConfigParameters(BitBakeConfigParameters):
def __init__(self, config_only, **options):
def __init__(self, **options):
self.initial_options = options
# Apply some sane defaults
if not 'parse_only' in options:
self.initial_options['parse_only'] = not config_only
#if not 'status_only' in options:
# self.initial_options['status_only'] = config_only
if not 'ui' in options:
self.initial_options['ui'] = 'knotty'
if not 'argv' in options:
self.initial_options['argv'] = []
super(TinfoilConfigParameters, self).__init__()
def parseCommandLine(self, argv=None):
# We don't want any parameters parsed from the command line
opts = super(TinfoilConfigParameters, self).parseCommandLine([])
for key, val in self.initial_options.items():
setattr(opts[0], key, val)
return opts
def parseCommandLine(self):
class DummyOptions:
def __init__(self, initial_options):
for key, val in initial_options.items():
setattr(self, key, val)
return DummyOptions(self.initial_options), None

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
#
# Gtk+ UI pieces for BitBake
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -0,0 +1,437 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import pango
import gobject
import bb.process
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText, HobButton, HobInfoButton
from bb.ui.crumbs.runningbuild import RunningBuildTreeView
from bb.ui.crumbs.runningbuild import BuildFailureTreeView
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hobcolor import HobColors
class BuildConfigurationTreeView(gtk.TreeView):
def __init__ (self):
gtk.TreeView.__init__(self)
self.set_rules_hint(False)
self.set_headers_visible(False)
self.set_property("hover-expand", True)
self.get_selection().set_mode(gtk.SELECTION_SINGLE)
# The icon that indicates whether we're building or failed.
renderer0 = gtk.CellRendererText()
renderer0.set_property('font-desc', pango.FontDescription('courier bold 12'))
col0 = gtk.TreeViewColumn ("Name", renderer0, text=0)
self.append_column (col0)
# The message of configuration.
renderer1 = HobWarpCellRendererText(col_number=1)
col1 = gtk.TreeViewColumn ("Values", renderer1, text=1)
self.append_column (col1)
def set_vars(self, key="", var=[""]):
d = {}
if type(var) == str:
d = {key: [var]}
elif type(var) == list and len(var) > 1:
# create the sub-item lines
l = []
text = ""
for item in var:
text = " - " + item
l.append(text)
d = {key: var}
return d
def set_config_model(self, show_vars):
listmodel = gtk.TreeStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
parent = None
for var in show_vars:
for subitem in var.items():
name = subitem[0]
is_parent = True
for value in subitem[1]:
if is_parent:
parent = listmodel.append(parent, (name, value))
is_parent = False
else:
listmodel.append(parent, (None, value))
name = " - "
parent = None
# renew the tree model after getting the configuration messages
self.set_model(listmodel)
def show(self, src_config_info, src_params):
vars = []
vars.append(self.set_vars("BB version:", src_params.bb_version))
vars.append(self.set_vars("Target arch:", src_params.target_arch))
vars.append(self.set_vars("Target OS:", src_params.target_os))
vars.append(self.set_vars("Machine:", src_config_info.curr_mach))
vars.append(self.set_vars("Distro:", src_config_info.curr_distro))
vars.append(self.set_vars("Distro version:", src_params.distro_version))
vars.append(self.set_vars("SDK machine:", src_config_info.curr_sdk_machine))
vars.append(self.set_vars("Tune features:", src_params.tune_pkgarch))
vars.append(self.set_vars("Layers:", src_config_info.layers))
for path in src_config_info.layers:
import os, os.path
if os.path.exists(path):
branch = bb.process.run('cd %s; git branch | grep "^* " | tr -d "* "' % path)[0]
if branch.startswith("fatal:"):
branch = "(unknown)"
if branch:
branch = branch.strip('\n')
vars.append(self.set_vars("Branch:", branch))
break
self.set_config_model(vars)
def reset(self):
self.set_model(None)
#
# BuildDetailsPage
#
class BuildDetailsPage (HobPage):
def __init__(self, builder):
super(BuildDetailsPage, self).__init__(builder, "Building ...")
self.num_of_issues = 0
self.endpath = (0,)
# create visual elements
self.create_visual_elements()
def create_visual_elements(self):
# create visual elements
self.vbox = gtk.VBox(False, 12)
self.progress_box = gtk.VBox(False, 12)
self.task_status = gtk.Label("\n") # to ensure layout is correct
self.task_status.set_alignment(0.0, 0.5)
self.progress_box.pack_start(self.task_status, expand=False, fill=False)
self.progress_hbox = gtk.HBox(False, 6)
self.progress_box.pack_end(self.progress_hbox, expand=True, fill=True)
self.progress_bar = HobProgressBar()
self.progress_hbox.pack_start(self.progress_bar, expand=True, fill=True)
self.stop_button = HobAltButton("Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
self.stop_button.set_sensitive(False)
self.progress_hbox.pack_end(self.stop_button, expand=False, fill=False)
self.notebook = HobNotebook()
self.config_tv = BuildConfigurationTreeView()
self.scrolled_view_config = gtk.ScrolledWindow ()
self.scrolled_view_config.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_config.add(self.config_tv)
self.notebook.append_page(self.scrolled_view_config, "Build configuration")
self.failure_tv = BuildFailureTreeView()
self.failure_model = self.builder.handler.build.model.failure_model()
self.failure_tv.set_model(self.failure_model)
self.scrolled_view_failure = gtk.ScrolledWindow ()
self.scrolled_view_failure.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_failure.add(self.failure_tv)
self.notebook.append_page(self.scrolled_view_failure, "Issues")
self.build_tv = RunningBuildTreeView(readonly=True, hob=True)
self.build_tv.set_model(self.builder.handler.build.model)
self.scrolled_view_build = gtk.ScrolledWindow ()
self.scrolled_view_build.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_build.add(self.build_tv)
self.notebook.append_page(self.scrolled_view_build, "Log")
self.builder.handler.build.model.connect_after("row-changed", self.scroll_to_present_row, self.scrolled_view_build.get_vadjustment(), self.build_tv)
self.button_box = gtk.HBox(False, 6)
self.back_button = HobAltButton('&lt;&lt; Back')
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_start(self.back_button, expand=False, fill=False)
def update_build_status(self, current, total, task):
recipe_path, recipe_task = task.split(", ")
recipe = os.path.basename(recipe_path)
if recipe.endswith(".bb"):
    # Strip the suffix explicitly; rstrip(".bb") would also eat trailing 'b's
    recipe = recipe[:-3]
tsk_msg = "<b>Running task %s of %s:</b> %s\n<b>Recipe:</b> %s" % (current, total, recipe_task, recipe)
self.task_status.set_markup(tsk_msg)
self.stop_button.set_sensitive(True)
def reset_build_status(self):
self.task_status.set_markup("\n") # to ensure layout is correct
self.endpath = (0,)
def show_issues(self):
self.num_of_issues += 1
self.notebook.show_indicator_icon("Issues", self.num_of_issues)
self.notebook.queue_draw()
def reset_issues(self):
self.num_of_issues = 0
self.notebook.hide_indicator_icon("Issues")
def _remove_all_widget(self):
children = self.vbox.get_children() or []
for child in children:
self.vbox.remove(child)
children = self.box_group_area.get_children() or []
for child in children:
self.box_group_area.remove(child)
children = self.get_children() or []
for child in children:
self.remove(child)
def add_build_fail_top_bar(self, actions, log_file=None):
primary_action = "Edit %s" % actions
color = HobColors.ERROR
build_fail_top = gtk.EventBox()
#build_fail_top.set_size_request(-1, 200)
build_fail_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_fail_tab = gtk.Table(14, 46, True)
build_fail_top.add(build_fail_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ERROR_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_fail_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_fail_tab.attach(label, 4, 26, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
# Ensure variable disk_full is defined
if not hasattr(self.builder, 'disk_full'):
self.builder.disk_full = False
if self.builder.disk_full:
markup = "<span size='medium'>There is no disk space left, so Hob cannot finish building your image. Free up some disk space\n"
markup += "and restart the build. Check the \"Issues\" tab for more details</span>"
label.set_markup(markup)
else:
label.set_markup("<span size='medium'>Check the \"Issues\" information for more details</span>")
build_fail_tab.attach(label, 4, 40, 4, 9)
# create button 'Edit packages'
action_button = HobButton(primary_action)
#action_button.set_size_request(-1, 40)
action_button.set_tooltip_text("Edit the %s parameters" % actions)
action_button.connect('clicked', self.failure_primary_action_button_clicked_cb, primary_action)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
attach_pos = (24 if log_file else 14)
file_bug_button = HobAltButton('File a bug')
file_bug_button.set_relief(gtk.RELIEF_HALF)
file_bug_button.set_tooltip_text("Open the Yocto Project bug tracking website")
file_bug_button.connect('clicked', self.failure_activate_file_bug_link_cb)
if not self.builder.disk_full:
build_fail_tab.attach(action_button, 4, 13, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, 14, 23, 9, 12)
build_fail_tab.attach(file_bug_button, attach_pos, attach_pos + 9, 9, 12)
else:
restart_build = HobButton("Restart the build")
restart_build.set_tooltip_text("Restart the build")
restart_build.connect('clicked', self.restart_build_button_clicked_cb)
build_fail_tab.attach(restart_build, 4, 13, 9, 12)
build_fail_tab.attach(action_button, 14, 23, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, attach_pos, attach_pos + 9, 9, 12)
self.builder.disk_full = False
return build_fail_top
def show_fail_page(self, title):
self._remove_all_widget()
self.title = "Hob cannot build your %s" % title
self.build_fail_bar = self.add_build_fail_top_bar(title, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_fail_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.notebook.set_page("Issues")
self.back_button.hide()
def add_build_stop_top_bar(self, action, log_file=None):
color = HobColors.LIGHT_GRAY
build_stop_top = gtk.EventBox()
#build_stop_top.set_size_request(-1, 200)
build_stop_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_stop_top.set_flags(gtk.CAN_DEFAULT)
build_stop_top.grab_default()
build_stop_tab = gtk.Table(11, 46, True)
build_stop_top.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INFO_HOVER_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_stop_tab.attach(label, 4, 26, 0, 6)
action_button = HobButton("Edit %s" % action)
action_button.set_size_request(-1, 40)
if action == "image":
action_button.set_tooltip_text("Edit the image parameters")
elif action == "recipes":
action_button.set_tooltip_text("Edit the included recipes")
elif action == "packages":
action_button.set_tooltip_text("Edit the included packages")
action_button.connect('clicked', self.stop_primary_action_button_clicked_cb, action)
build_stop_tab.attach(action_button, 4, 13, 6, 9)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
build_stop_tab.attach(open_log_button, 14, 23, 6, 9)
attach_pos = (24 if log_file else 14)
build_button = HobAltButton("Build new image")
#build_button.set_size_request(-1, 40)
build_button.set_tooltip_text("Create a new image from scratch")
build_button.connect('clicked', self.new_image_button_clicked_cb)
build_stop_tab.attach(build_button, attach_pos, attach_pos + 9, 6, 9)
return build_stop_top, action_button
def show_stop_page(self, action):
self._remove_all_widget()
self.title = "Build stopped"
self.build_stop_bar, action_button = self.add_build_stop_top_bar(action, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_stop_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.back_button.hide()
return action_button
def show_page(self, step):
self._remove_all_widget()
if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
self.title = "Building packages ..."
else:
self.title = "Building image ..."
self.build_details_top = self.add_onto_top_bar(None)
self.pack_start(self.build_details_top, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.progress_bar.reset()
self.config_tv.reset()
self.vbox.pack_start(self.progress_box, expand=False, fill=False)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.box_group_area.pack_end(self.button_box, expand=False, fill=False)
self.show_all()
self.notebook.set_page("Log")
self.back_button.hide()
self.reset_build_status()
self.reset_issues()
def update_progress_bar(self, title, fraction, status=None):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def back_button_clicked_cb(self, button):
self.builder.show_configuration()
def new_image_button_clicked_cb(self, button):
self.builder.reset()
def show_back_button(self):
self.back_button.show()
def stop_button_clicked_cb(self, button):
self.builder.stop_build()
def hide_stop_button(self):
self.stop_button.set_sensitive(False)
self.stop_button.hide()
def scroll_to_present_row(self, model, path, iter, v_adj, treeview):
if treeview and v_adj:
if path[0] > self.endpath[0]: # check the event is a new row append or not
self.endpath = path
# check the gtk.adjustment position is at end boundary or not
if (v_adj.upper <= v_adj.page_size) or (v_adj.value == v_adj.upper - v_adj.page_size):
treeview.scroll_to_cell(path)
def show_configurations(self, configurations, params):
self.config_tv.show(configurations, params)
def failure_primary_action_button_clicked_cb(self, button, action):
if "Edit recipes" in action:
self.builder.show_recipes()
elif "Edit packages" in action:
self.builder.show_packages()
elif "Edit image" in action:
self.builder.show_configuration()
def restart_build_button_clicked_cb(self, button):
self.builder.just_bake()
def stop_primary_action_button_clicked_cb(self, button, action):
if "recipes" in action:
self.builder.show_recipes()
elif "packages" in action:
self.builder.show_packages()
elif "image" in action:
self.builder.show_configuration()
def open_log_button_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def failure_activate_file_bug_link_cb(self, button):
button.child.emit('activate-link', "http://bugzilla.yoctoproject.org")

File diff suppressed because it is too large


@@ -0,0 +1,455 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import threading
import os
import datetime
import time
class BuildConfiguration:
""" Represents a potential *or* historic *or* concrete build. It
encompasses all the things that we need to tell bitbake to do to make it
build what we want it to build.
It also stores the metadata URL and the set of possible machines (and the
distros / images / URIs for these). Apart from the metadata URL these are
not serialised to file (since they may be transient). In some ways this
functionality might be better shifted to the loader class."""
def __init__ (self):
self.metadata_url = None
# Tuple of (distros, image, urls)
self.machine_options = {}
self.machine = None
self.distro = None
self.image = None
self.urls = []
self.extra_urls = []
self.extra_pkgs = []
def get_machines_model (self):
model = gtk.ListStore (gobject.TYPE_STRING)
for machine in self.machine_options.keys():
model.append ([machine])
return model
def get_distro_and_images_models (self, machine):
distro_model = gtk.ListStore (gobject.TYPE_STRING)
for distro in self.machine_options[machine][0]:
distro_model.append ([distro])
image_model = gtk.ListStore (gobject.TYPE_STRING)
for image in self.machine_options[machine][1]:
image_model.append ([image])
return (distro_model, image_model)
def get_repos (self):
self.urls = self.machine_options[self.machine][2]
return self.urls
# It might be a lot better if we stored these in bitbake conf
# file format.
@staticmethod
def load_from_file (filename):
conf = BuildConfiguration()
with open(filename, "r") as f:
for line in f:
data = line.split (";")[1]
if (line.startswith ("metadata-url;")):
conf.metadata_url = data.strip()
continue
if (line.startswith ("url;")):
conf.urls += [data.strip()]
continue
if (line.startswith ("extra-url;")):
conf.extra_urls += [data.strip()]
continue
if (line.startswith ("machine;")):
conf.machine = data.strip()
continue
if (line.startswith ("distribution;")):
conf.distro = data.strip()
continue
if (line.startswith ("image;")):
conf.image = data.strip()
continue
return conf
# Serialise to a file. This is part of the build process and we use this
# to be able to repeat a given build (using the same set of parameters)
# but also so that we can include the details of the image / machine /
# distro in the build manager tree view.
def write_to_file (self, filename):
f = open (filename, "w")
lines = []
if (self.metadata_url):
lines += ["metadata-url;%s\n" % (self.metadata_url)]
for url in self.urls:
lines += ["url;%s\n" % (url)]
for url in self.extra_urls:
lines += ["extra-url;%s\n" % (url)]
if (self.machine):
lines += ["machine;%s\n" % (self.machine)]
if (self.distro):
lines += ["distribution;%s\n" % (self.distro)]
if (self.image):
lines += ["image;%s\n" % (self.image)]
f.writelines (lines)
f.close ()
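For illustration, a file produced by write_to_file() would look something like the following, one key;value pair per line (all values hypothetical):

metadata-url;http://git.example.com/metadata.git
url;http://repo.example.com/ipk/
machine;qemux86
distribution;poky
image;core-image-minimal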
class BuildResult(gobject.GObject):
""" Represents an historic build. Perhaps not successful. But it includes
things such as the files that are in the directory (the output from the
build) as well as a deserialised BuildConfiguration file that is stored in
".conf" in the directory for the build.
This is GObject so that it can be included in the TreeStore."""
(STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
(0, 1, 2)
def __init__ (self, parent, identifier):
gobject.GObject.__init__ (self)
self.date = None
self.files = []
self.status = None
self.identifier = identifier
self.path = os.path.join (parent, identifier)
# Extract the date: since the directory name is of the
# format build-<year><month><day>-<ordinal>, we can easily
# pull it out.
# TODO: Better to stat a file?
(_, date, revision) = identifier.split ("-")
print(date)
year = int (date[0:4])
month = int (date[4:6])
day = int (date[6:8])
self.date = datetime.date (year, month, day)
self.conf = None
# By default builds are STATE_FAILED unless we find a "complete" file
# in which case they are STATE_COMPLETE
self.state = BuildResult.STATE_FAILED
for file in os.listdir (self.path):
if (file.startswith (".conf")):
conffile = os.path.join (self.path, file)
self.conf = BuildConfiguration.load_from_file (conffile)
elif (file.startswith ("complete")):
self.state = BuildResult.STATE_COMPLETE
else:
self.add_file (file)
def add_file (self, file):
# Just add the file for now. Don't care about the type.
self.files += [(file, None)]
class BuildManagerModel (gtk.TreeStore):
""" Model for the BuildManagerTreeView. This derives from gtk.TreeStore
but it abstracts nicely what the columns mean and the setup of the columns
in the model. """
(COL_IDENT, COL_DESC, COL_MACHINE, COL_DISTRO, COL_BUILD_RESULT, COL_DATE, COL_STATE) = \
(0, 1, 2, 3, 4, 5, 6)
def __init__ (self):
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_OBJECT,
gobject.TYPE_INT64,
gobject.TYPE_INT)
class BuildManager (gobject.GObject):
""" This class manages the historic builds that have been found in the
"results" directory but is also used for starting a new build."""
__gsignals__ = {
'population-finished' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'populate-error' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def update_build_result (self, result, iter):
# Convert the date into something we can sort by.
date = long (time.mktime (result.date.timetuple()))
# Add a top level entry for the build
self.model.set (iter,
BuildManagerModel.COL_IDENT, result.identifier,
BuildManagerModel.COL_DESC, result.conf.image,
BuildManagerModel.COL_MACHINE, result.conf.machine,
BuildManagerModel.COL_DISTRO, result.conf.distro,
BuildManagerModel.COL_BUILD_RESULT, result,
BuildManagerModel.COL_DATE, date,
BuildManagerModel.COL_STATE, result.state)
# And then we use the files in the directory as the children for the
# top level iter.
for file in result.files:
self.model.append (iter, (None, file[0], None, None, None, date, -1))
# This function is called as an idle by the BuildManagerPopulaterThread
def add_build_result (self, result):
gtk.gdk.threads_enter()
self.known_builds += [result]
self.update_build_result (result, self.model.append (None))
gtk.gdk.threads_leave()
def notify_build_finished (self):
# This is a bit of a hack. If we have a build running then we
# will have a row in the model in STATE_ONGOING. Find it and update it
# as if it were a proper historic build (well, it is complete now....)
# We need to use the iters here rather than the Python iterator
# interface to the model since we need to pass it into
# update_build_result
iter = self.model.get_iter_first()
while (iter):
(ident, state) = self.model.get(iter,
BuildManagerModel.COL_IDENT,
BuildManagerModel.COL_STATE)
if state == BuildResult.STATE_ONGOING:
result = BuildResult (self.results_directory, ident)
self.update_build_result (result, iter)
iter = self.model.iter_next(iter)
def notify_build_succeeded (self):
# Write the "complete" file so that when we create the BuildResult
# object we put into the model
complete_file_path = os.path.join (self.cur_build_directory, "complete")
f = open (complete_file_path, "w")
f.close()
self.notify_build_finished()
def notify_build_failed (self):
# Without a "complete" file then this will mark the build as failed:
self.notify_build_finished()
# This function is called as an idle
def emit_population_finished_signal (self):
gtk.gdk.threads_enter()
self.emit ("population-finished")
gtk.gdk.threads_leave()
class BuildManagerPopulaterThread (threading.Thread):
def __init__ (self, manager, directory):
threading.Thread.__init__ (self)
self.manager = manager
self.directory = directory
def run (self):
# For each of the "build-<...>" directories ..
if os.path.exists (self.directory):
for directory in os.listdir (self.directory):
if not directory.startswith ("build-"):
continue
build_result = BuildResult (self.directory, directory)
self.manager.add_build_result (build_result)
gobject.idle_add (BuildManager.emit_population_finished_signal,
self.manager)
def __init__ (self, server, results_directory):
gobject.GObject.__init__ (self)
# The builds that we've found from walking the result directory
self.known_builds = []
# Save out the bitbake server, we need this for issuing commands to
# the cooker:
self.server = server
# The TreeStore that we use
self.model = BuildManagerModel ()
# The results directory is where we create (and look for) the
# build-<xyz>-<n> directories. We need to populate ourselves from
# that directory.
self.results_directory = results_directory
self.populate_from_directory (self.results_directory)
def populate_from_directory (self, directory):
thread = BuildManager.BuildManagerPopulaterThread (self, directory)
thread.start()
# Come up with the name for the next build ident by combining "build-"
# with the date formatted as yyyymmdd and then an ordinal. We use an
# optimistic algorithm, incrementing the ordinal if we find that the
# name already exists.
def get_next_build_ident (self):
today = datetime.date.today ()
# strftime() zero-pads the month and day so the ident really is yyyymmdd
datestr = today.strftime ("%Y%m%d")
revision = 0
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
while (os.path.exists (test_path)):
revision += 1
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
return test_name
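# With the yyyymmdd format above, the first build on 15 May 2012 would
# be named "build-20120515-0" and the next "build-20120515-1".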
# Take a BuildConfiguration and then try and build it based on the
# parameters of that configuration.
def do_build (self, conf):
server = self.server
# Work out the build directory. Note we actually create the
# directories here since we need to write the ".conf" file. Otherwise
# we could have relied on bitbake's builder thread to actually make
# the directories as it proceeds with the build.
ident = self.get_next_build_ident ()
build_directory = os.path.join (self.results_directory,
ident)
self.cur_build_directory = build_directory
os.makedirs (build_directory)
conffile = os.path.join (build_directory, ".conf")
conf.write_to_file (conffile)
# Add a row to the model representing this ongoing build. It's kind of
# a fake entry: if this build completes or fails it gets updated with
# the real data, like the historic builds.
date = long (time.time())
self.model.append (None, (ident, conf.image, conf.machine, conf.distro,
None, date, BuildResult.STATE_ONGOING))
try:
server.runCommand(["setVariable", "BUILD_IMAGES_FROM_FEEDS", 1])
server.runCommand(["setVariable", "MACHINE", conf.machine])
server.runCommand(["setVariable", "DISTRO", conf.distro])
server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_ipk"])
server.runCommand(["setVariable", "BBFILES", \
"""${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-moblin/packages/*/*.bb"""])
server.runCommand(["setVariable", "TMPDIR", "${OEROOT}/build/tmp"])
server.runCommand(["setVariable", "IPK_FEED_URIS", \
" ".join(conf.get_repos())])
server.runCommand(["setVariable", "DEPLOY_DIR_IMAGE",
build_directory])
server.runCommand(["buildTargets", [conf.image], "rootfs"])
except Exception as e:
print(e)
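# A minimal usage sketch (assumptions: "server" is a connected bitbake
# command server and BuildConfiguration is the configuration class this
# file uses elsewhere; the attribute values are illustrative):
#
#   manager = BuildManager (server, "/path/to/results")
#   conf = BuildConfiguration ()
#   conf.machine = "qemux86"
#   conf.distro = "defaultsetup"
#   conf.image = "core-image-minimal"
#   manager.do_build (conf)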
class BuildManagerTreeView (gtk.TreeView):
""" The tree view for the build manager. This shows the historic builds
and so forth. """
# We use this function to control what goes in the cell: we store the
# date in the model as seconds since the epoch (for sorting), so we
# need to make it human-readable here.
def date_format_custom_cell_data_func (self, col, cell, model, iter):
date = model.get (iter, BuildManagerModel.COL_DATE)[0]
datestr = time.strftime("%A %d %B %Y", time.localtime(date))
cell.set_property ("text", datestr)
# This format function controls what goes in the cell. We use this to map
# the integer state to a string and also to colourise the text
def state_format_custom_cell_data_func (self, col, cell, model, iter):
state = model.get (iter, BuildManagerModel.COL_STATE)[0]
if (state == BuildResult.STATE_ONGOING):
cell.set_property ("text", "Active")
cell.set_property ("foreground", "#000000")
elif (state == BuildResult.STATE_FAILED):
cell.set_property ("text", "Failed")
cell.set_property ("foreground", "#ff0000")
elif (state == BuildResult.STATE_COMPLETE):
cell.set_property ("text", "Complete")
cell.set_property ("foreground", "#00ff00")
else:
cell.set_property ("text", "")
def __init__ (self):
gtk.TreeView.__init__(self)
# Description column (shows the image name for the build)
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn (None, renderer,
text=BuildManagerModel.COL_DESC)
self.append_column (col)
# Machine
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Machine", renderer,
text=BuildManagerModel.COL_MACHINE)
self.append_column (col)
# distro
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Distribution", renderer,
text=BuildManagerModel.COL_DISTRO)
self.append_column (col)
# Date (using a custom cell data function that formats the epoch
# timestamp as a human-readable string)
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Date", renderer,
text=BuildManagerModel.COL_DATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.date_format_custom_cell_data_func)
# For status.
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Status", renderer,
text = BuildManagerModel.COL_STATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.state_format_custom_cell_data_func)


@@ -0,0 +1,341 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import hashlib
from bb.ui.crumbs.hobwidget import HobInfoButton, HobButton
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.proxydetailsdialog import ProxyDetailsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
def details_cb(self, button, parent, protocol):
dialog = ProxyDetailsDialog(title = protocol.upper() + " Proxy Details",
user = self.configuration.proxies[protocol][1],
passwd = self.configuration.proxies[protocol][2],
parent = parent,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
if response == gtk.RESPONSE_OK:
self.configuration.proxies[protocol][1] = dialog.user
self.configuration.proxies[protocol][2] = dialog.passwd
self.refresh_proxy_components()
dialog.destroy()
def set_save_button(self, button):
self.save_button = button
def rootfs_combo_changed_cb(self, rootfs_combo, all_package_format, check_hbox):
combo_item = self.rootfs_combo.get_active_text()
modified = False
for child in check_hbox.get_children():
if isinstance(child, gtk.CheckButton):
check_hbox.remove(child)
modified = True
for format in all_package_format:
if format != combo_item:
check_button = gtk.CheckButton(format)
check_hbox.pack_start(check_button, expand=False, fill=False)
modified = True
if modified:
check_hbox.remove(self.pkgfmt_info)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
check_hbox.show_all()
def gen_pkgfmt_widget(self, curr_package_format, all_package_format, tooltip_combo="", tooltip_extra=""):
pkgfmt_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Root file system package format")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
rootfs_format = ""
if curr_package_format:
rootfs_format = curr_package_format.split()[0]
rootfs_format_widget, rootfs_combo = self.gen_combo_widget(rootfs_format, all_package_format, tooltip_combo)
pkgfmt_vbox.pack_start(rootfs_format_widget, expand=False, fill=False)
label = self.gen_label_widget("Additional package formats")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
check_hbox = gtk.HBox(False, 12)
pkgfmt_vbox.pack_start(check_hbox, expand=False, fill=False)
for format in all_package_format:
if format != rootfs_format:
check_button = gtk.CheckButton(format)
is_active = (format in curr_package_format.split())
check_button.set_active(is_active)
check_hbox.pack_start(check_button, expand=False, fill=False)
self.pkgfmt_info = HobInfoButton(tooltip_extra, self)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
rootfs_combo.connect("changed", self.rootfs_combo_changed_cb, all_package_format, check_hbox)
pkgfmt_vbox.show_all()
return pkgfmt_vbox, rootfs_combo, check_hbox
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, parent, flags, buttons=None):
super(AdvancedSettingsDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros[:]
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
# class members for internal use
self.distro_combo = None
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirror_text = None
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.sdk_checkbox = None
self.image_types_checkbuttons = {}
self.md5 = self.config_md5()
self.settings_changed = False
# create visual elements on the dialog
self.save_button = None
self.create_visual_elements()
self.connect("response", self.response_cb)
def _get_sorted_value(self, var):
return " ".join(sorted(str(var).split())) + "\n"
def config_md5(self):
data = ""
data += ("PACKAGE_CLASSES: " + self.configuration.curr_package_format + '\n')
data += ("DISTRO: " + self._get_sorted_value(self.configuration.curr_distro))
data += ("IMAGE_ROOTFS_SIZE: " + self._get_sorted_value(self.configuration.image_rootfs_size))
data += ("IMAGE_EXTRA_SIZE: " + self._get_sorted_value(self.configuration.image_extra_size))
data += ("INCOMPATIBLE_LICENSE: " + self._get_sorted_value(self.configuration.incompat_license))
data += ("SDK_MACHINE: " + self._get_sorted_value(self.configuration.curr_sdk_machine))
data += ("TOOLCHAIN_BUILD: " + self._get_sorted_value(self.configuration.toolchain_build))
data += ("IMAGE_FSTYPES: " + self._get_sorted_value(self.configuration.image_fstypes))
return hashlib.md5(data).hexdigest()
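# Note: the digest computed in __init__ is compared against a fresh one
# in response_cb() to decide whether any setting actually changed. Most
# values are passed through _get_sorted_value(), which sorts their
# space-separated parts, so merely reordering items does not register
# as a change.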
def create_visual_elements(self):
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_image_types_page(), gtk.Label("Image types"))
self.nb.append_page(self.create_output_page(), gtk.Label("Output"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def get_num_checked_image_types(self):
total = 0
for b in self.image_types_checkbuttons.values():
if b.get_active():
total = total + 1
return total
def set_save_button_state(self):
if self.save_button:
self.save_button.set_sensitive(self.get_num_checked_image_types() > 0)
def image_type_checkbutton_clicked_cb(self, button):
self.set_save_button_state()
if self.get_num_checked_image_types() == 0:
# Show an error dialog
lbl = "<b>Select an image type</b>"
msg = "You need to select at least one image type."
dialog = CrumbsMessageDialog(self, lbl, gtk.MESSAGE_WARNING, msg)
button = dialog.add_button("OK", gtk.RESPONSE_OK)
HobButton.style_button(button)
response = dialog.run()
dialog.destroy()
def create_image_types_page(self):
main_vbox = gtk.VBox(False, 16)
main_vbox.set_border_width(6)
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
distro_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Distro:")
tooltip = "Selects the Yocto Project distribution you want"
try:
i = self.all_distros.index( "defaultsetup" )
except ValueError:
i = -1
if i != -1:
self.all_distros[ i ] = "Default"
if self.configuration.curr_distro == "defaultsetup":
self.configuration.curr_distro = "Default"
distro_widget, self.distro_combo = self.gen_combo_widget(self.configuration.curr_distro, self.all_distros,"<b>Distro</b>" + "*" + tooltip)
distro_vbox.pack_start(label, expand=False, fill=False)
distro_vbox.pack_start(distro_widget, expand=False, fill=False)
main_vbox.pack_start(distro_vbox, expand=False, fill=False)
rows = (len(self.image_types)+1)/3
table = gtk.Table(rows + 1, 10, True)
advanced_vbox.pack_start(table, expand=False, fill=False)
tooltip = "Image file system types you want."
info = HobInfoButton("<b>Image types</b>" + "*" + tooltip, self)
label = self.gen_label_widget("Image types:")
align = gtk.Alignment(0, 0.5, 0, 0)
table.attach(align, 0, 4, 0, 1)
align.add(label)
table.attach(info, 4, 5, 0, 1)
i = 1
j = 1
for image_type in sorted(self.image_types):
self.image_types_checkbuttons[image_type] = gtk.CheckButton(image_type)
self.image_types_checkbuttons[image_type].connect("toggled", self.image_type_checkbutton_clicked_cb)
article = ""
if image_type.startswith(("a", "e", "i", "o", "u")):
article = "n"
if image_type == "live":
self.image_types_checkbuttons[image_type].set_tooltip_text("Build iso and hddimg images")
else:
self.image_types_checkbuttons[image_type].set_tooltip_text("Build a%s %s image" % (article, image_type))
table.attach(self.image_types_checkbuttons[image_type], j - 1, j + 3, i, i + 1)
if image_type in self.configuration.image_fstypes.split():
self.image_types_checkbuttons[image_type].set_active(True)
i += 1
if i > rows:
i = 1
j = j + 4
main_vbox.pack_start(advanced_vbox, expand=False, fill=False)
self.set_save_button_state()
return main_vbox
def create_output_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Package format</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
tooltip_combo = "Selects the package format used to generate rootfs."
tooltip_extra = "Selects extra package formats to build"
pkgfmt_widget, self.rootfs_combo, self.check_hbox = self.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats,"<b>Root file system package format</b>" + "*" + tooltip_combo,"<b>Additional package formats</b>" + "*" + tooltip_extra)
sub_vbox.pack_start(pkgfmt_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Image size</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Image basic size (in MB)")
tooltip = "Defines the size for the generated image. The OpenEmbedded build system determines the final size for the generated image using an algorithm that takes into account the initial disk space used for the generated image, the Image basic size value, and the Additional free space value.\n\nFor more information, check the <a href=\"http://www.yoctoproject.org/docs/current/poky-ref-manual/poky-ref-manual.html#var-IMAGE_ROOTFS_SIZE\">Yocto Project Reference Manual</a>."
rootfs_size_widget, self.rootfs_size_spinner = self.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 65536,"<b>Image basic size</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(rootfs_size_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Additional free space (in MB)")
tooltip = "Sets extra free disk space to be added to the generated image. Use this variable when you want to ensure that a specific amount of free disk space is available on a device after an image is installed and running."
extra_size_widget, self.extra_size_spinner = self.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 65536,"<b>Additional free space</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(extra_size_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Licensing</span>'), expand=False, fill=False)
self.gplv3_checkbox = gtk.CheckButton("Exclude GPLv3 packages")
self.gplv3_checkbox.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
if "GPLv3" in self.configuration.incompat_license.split():
self.gplv3_checkbox.set_active(True)
else:
self.gplv3_checkbox.set_active(False)
advanced_vbox.pack_start(self.gplv3_checkbox, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">SDK</span>'), expand=False, fill=False)
sub_hbox = gtk.HBox(False, 6)
advanced_vbox.pack_start(sub_hbox, expand=False, fill=False)
self.sdk_checkbox = gtk.CheckButton("Populate SDK")
tooltip = "Check this box to generate an SDK tarball that consists of the cross-toolchain and a sysroot that contains development packages for your image."
self.sdk_checkbox.set_tooltip_text(tooltip)
self.sdk_checkbox.set_active(self.configuration.toolchain_build)
sub_hbox.pack_start(self.sdk_checkbox, expand=False, fill=False)
tooltip = "Select the host platform for which you want to run the toolchain contained in the SDK tarball."
sdk_machine_widget, self.sdk_machine_combo = self.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines,"<b>Populate SDK</b>" + "*" + tooltip)
sub_hbox.pack_start(sdk_machine_widget, expand=False, fill=False)
return advanced_vbox
def response_cb(self, dialog, response_id):
package_format = []
package_format.append(self.rootfs_combo.get_active_text())
for child in self.check_hbox:
if isinstance(child, gtk.CheckButton) and child.get_active():
package_format.append(child.get_label())
self.configuration.curr_package_format = " ".join(package_format)
distro = self.distro_combo.get_active_text()
if distro == "Default":
distro = "defaultsetup"
self.configuration.curr_distro = distro
self.configuration.image_rootfs_size = self.rootfs_size_spinner.get_value_as_int() * 1024
self.configuration.image_extra_size = self.extra_size_spinner.get_value_as_int() * 1024
self.configuration.image_fstypes = ""
for image_type in self.image_types:
if self.image_types_checkbuttons[image_type].get_active():
self.configuration.image_fstypes += (" " + image_type)
# strip() returns a new string rather than modifying in place, so
# assign the result back
self.configuration.image_fstypes = self.configuration.image_fstypes.strip()
if self.gplv3_checkbox.get_active():
if "GPLv3" not in self.configuration.incompat_license.split():
self.configuration.incompat_license += " GPLv3"
else:
if "GPLv3" in self.configuration.incompat_license.split():
# list.remove() mutates the list and returns None, so remove the
# entry first and then rejoin what is left
licenses = self.configuration.incompat_license.split()
licenses.remove("GPLv3")
self.configuration.incompat_license = " ".join(licenses)
self.configuration.incompat_license = self.configuration.incompat_license.strip()
self.configuration.toolchain_build = self.sdk_checkbox.get_active()
self.configuration.curr_sdk_machine = self.sdk_machine_combo.get_active_text()
md5 = self.config_md5()
self.settings_changed = (self.md5 != md5)


@@ -0,0 +1,44 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsDialog(gtk.Dialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, title="", parent=None, flags=0, buttons=None):
super(CrumbsDialog, self).__init__(title, parent, flags, buttons)
self.set_property("has-separator", False) # note: deprecated in 2.22
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)
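For reference, a caller would drive this dialog like any gtk.Dialog, adding response buttons as the docstring suggests; a minimal sketch, where main_window is an illustrative parent window:

    dialog = CrumbsDialog(title="Advanced configuration",
                          parent=main_window,
                          flags=gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
    dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
    response = dialog.run()
    dialog.destroy()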


@@ -0,0 +1,70 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glib
import gtk
from bb.ui.crumbs.hobwidget import HobIconChecker
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsMessageDialog(gtk.MessageDialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, parent = None, label="", dialog_type = gtk.MESSAGE_QUESTION, msg=""):
super(CrumbsMessageDialog, self).__init__(None,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
dialog_type,
gtk.BUTTONS_NONE,
None)
self.set_skip_taskbar_hint(False)
self.set_markup(label)
if 0 <= len(msg) < 300:
self.format_secondary_markup(msg)
else:
vbox = self.get_message_area()
vbox.set_border_width(1)
vbox.set_property("spacing", 12)
self.textWindow = gtk.ScrolledWindow()
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.msgView = gtk.TextView()
self.msgView.set_editable(False)
self.msgView.set_wrap_mode(gtk.WRAP_WORD)
self.msgView.set_cursor_visible(False)
self.msgView.set_size_request(300, 300)
self.buf = gtk.TextBuffer()
self.buf.set_text(msg)
self.msgView.set_buffer(self.buf)
self.textWindow.add(self.msgView)
self.msgView.show()
vbox.add(self.textWindow)
self.textWindow.show()
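This is the same pattern the other dialogs in this series use (compare AdvancedSettingsDialog.image_type_checkbutton_clicked_cb); a minimal sketch, with illustrative strings, main_window as an assumed parent, and HobButton imported from bb.ui.crumbs.hobwidget:

    dialog = CrumbsMessageDialog(main_window, "<b>Select an image type</b>",
                                 gtk.MESSAGE_WARNING,
                                 "You need to select at least one image type.")
    button = dialog.add_button("OK", gtk.RESPONSE_OK)
    HobButton.style_button(button)
    dialog.run()
    dialog.destroy()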


@@ -0,0 +1,219 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glob
import gtk
import gobject
import os
import re
import shlex
import subprocess
import tempfile
from bb.ui.crumbs.hobwidget import hic, HobButton
from bb.ui.crumbs.progressbar import HobProgressBar
import bb.ui.crumbs.utils
import bb.process
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class DeployImageDialog (CrumbsDialog):
__dummy_usb__ = "--select a usb drive--"
def __init__(self, title, image_path, parent, flags, buttons=None, standalone=False):
super(DeployImageDialog, self).__init__(title, parent, flags, buttons)
self.image_path = image_path
self.standalone = standalone
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
self.set_size_request(600, 400)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>The image to be written into usb drive:</span>"
label.set_markup(markup)
self.vbox.pack_start(label, expand=False, fill=False, padding=2)
table = gtk.Table(2, 10, False)
table.set_col_spacings(5)
table.set_row_spacings(5)
self.vbox.pack_start(table, expand=True, fill=True)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
tv = gtk.TextView()
tv.set_editable(False)
tv.set_wrap_mode(gtk.WRAP_WORD)
tv.set_cursor_visible(False)
self.buf = gtk.TextBuffer()
self.buf.set_text(self.image_path)
tv.set_buffer(self.buf)
scroll.add(tv)
table.attach(scroll, 0, 10, 0, 1)
# There are two ways to use DeployImageDialog:
# it is either invoked by Hob when the 'Deploy Image' button is clicked,
# or by a standalone script. The following block handles the latter: it
# adds a 'Select Image' button and emits a signal when that button is
# clicked.
if self.standalone:
gobject.signal_new("select_image_clicked", self, gobject.SIGNAL_RUN_FIRST,
gobject.TYPE_NONE, ())
icon = gtk.Image()
pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_IMAGES_DISPLAY_FILE)
icon.set_from_pixbuf(pix_buffer)
button = gtk.Button("Select Image")
button.set_image(icon)
#button.set_size_request(140, 50)
table.attach(button, 9, 10, 1, 2, gtk.FILL, 0, 0, 0)
button.connect("clicked", self.select_image_button_clicked_cb)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, expand=False, fill=False, padding=10)
self.usb_desc = gtk.Label()
self.usb_desc.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
self.usb_combo = gtk.combo_box_new_text()
self.usb_combo.connect("changed", self.usb_combo_changed_cb)
model = self.usb_combo.get_model()
model.clear()
self.usb_combo.append_text(self.__dummy_usb__)
for usb in self.find_all_usb_devices():
self.usb_combo.append_text("/dev/" + usb)
self.usb_combo.set_active(0)
self.vbox.pack_start(self.usb_combo, expand=False, fill=False)
self.vbox.pack_start(self.usb_desc, expand=False, fill=False, padding=2)
self.progress_bar = HobProgressBar()
self.vbox.pack_start(self.progress_bar, expand=False, fill=False)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, expand=False, fill=True, padding=10)
self.vbox.show_all()
self.progress_bar.hide()
def set_image_text_buffer(self, image_path):
self.buf.set_text(image_path)
def set_image_path(self, image_path):
self.image_path = image_path
def popen_read(self, cmd):
tmpout, errors = bb.process.run("%s" % cmd)
return tmpout.strip()
def find_all_usb_devices(self):
usb_devs = [ os.readlink(u)
for u in glob.glob('/dev/disk/by-id/usb*')
if not re.search(r'part\d+', u) ]
return [ '%s' % u[u.rfind('/')+1:] for u in usb_devs ]
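# The /dev/disk/by-id/usb* entries are symlinks such as "../../sdb";
# os.readlink() resolves each one and only the trailing component
# (e.g. "sdb") is kept, with "partN" entries filtered out so that only
# whole devices are offered.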
def get_usb_info(self, dev):
return "%s %s" % \
(self.popen_read('cat /sys/class/block/%s/device/vendor' % dev),
self.popen_read('cat /sys/class/block/%s/device/model' % dev))
def select_image_button_clicked_cb(self, button):
self.emit('select_image_clicked')
def usb_combo_changed_cb(self, usb_combo):
combo_item = self.usb_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_usb__:
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
else:
markup = "<span font_desc='12'>" + self.get_usb_info(combo_item.lstrip("/dev/")) + "</span>"
self.usb_desc.set_markup(markup)
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_YES:
lbl = ''
msg = ''
combo_item = self.usb_combo.get_active_text()
if combo_item and combo_item != self.__dummy_usb__ and self.image_path:
cmdline = bb.ui.crumbs.utils.which_terminal()
if cmdline:
tmpfile = tempfile.NamedTemporaryFile()
cmdline += "\"sudo dd if=" + self.image_path + \
" of=" + combo_item + " && sync; echo $? > " + tmpfile.name + "\""
subprocess.call(shlex.split(cmdline))
if int(tmpfile.readline().strip()) == 0:
lbl = "<b>Deploy image successfully.</b>"
else:
lbl = "<b>Failed to deploy image.</b>"
msg = "Please check image <b>%s</b> exists and USB device <b>%s</b> is writable." % (self.image_path, combo_item)
tmpfile.close()
else:
if not self.image_path:
lbl = "<b>No selection made.</b>"
msg = "You have not selected an image to deploy."
else:
lbl = "<b>No selection made.</b>"
msg = "You have not selected a USB device."
if len(lbl):
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.MESSAGE_INFO, msg)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
def update_progress_bar(self, title, fraction, status=None):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def write_file(self, ifile, ofile):
self.progress_bar.reset()
self.progress_bar.show()
f_from = os.open(ifile, os.O_RDONLY)
f_to = os.open(ofile, os.O_WRONLY)
total_size = os.stat(ifile).st_size
written_size = 0
while True:
buf = os.read(f_from, 1024*1024)
if not buf:
break
os.write(f_to, buf)
# count what was actually read so the final partial chunk does not
# push the fraction past 1.0
written_size += len(buf)
self.update_progress_bar("Writing to USB:", written_size * 1.0/total_size)
self.update_progress_bar("Writing completed:", 1.0)
os.close(f_from)
os.close(f_to)
self.progress_bar.hide()
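In the standalone mode described above, a driving script would create the dialog roughly as follows; a sketch under stated assumptions (the image path and button labels are illustrative):

    dialog = DeployImageDialog("USB Image Writer", "/path/to/image.hddimg",
                               None, gtk.DIALOG_MODAL, standalone=True)
    button = dialog.add_button("Close", gtk.RESPONSE_NO)
    HobButton.style_button(button)
    button = dialog.add_button("Write USB image", gtk.RESPONSE_YES)
    HobButton.style_button(button)
    dialog.run()
    dialog.destroy()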


@@ -0,0 +1,172 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import os
from bb.ui.crumbs.hobwidget import HobViewTable, HobInfoButton, HobButton, HobAltButton
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class ImageSelectionDialog (CrumbsDialog):
__columns__ = [{
'col_name' : 'Image name',
'col_id' : 0,
'col_style': 'text',
'col_min' : 400,
'col_max' : 400
}, {
'col_name' : 'Select',
'col_id' : 1,
'col_style': 'radio toggle',
'col_min' : 160,
'col_max' : 160
}]
# use None rather than a mutable default argument for image_extension
def __init__(self, image_folder, image_types, title, parent, flags, buttons=None, image_extension=None):
super(ImageSelectionDialog, self).__init__(title, parent, flags, buttons)
self.connect("response", self.response_cb)
self.image_folder = image_folder
self.image_types = image_types
self.image_list = []
self.image_names = []
self.image_extension = image_extension or {}
# create visual elements on the dialog
self.create_visual_elements()
self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.fill_image_store()
def create_visual_elements(self):
hbox = gtk.HBox(False, 6)
self.vbox.pack_start(hbox, expand=False, fill=False)
entry = gtk.Entry()
entry.set_text(self.image_folder)
table = gtk.Table(1, 10, True)
table.set_size_request(560, -1)
hbox.pack_start(table, expand=False, fill=False)
table.attach(entry, 0, 9, 0, 1)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_OPEN, gtk.ICON_SIZE_BUTTON)
open_button = gtk.Button()
open_button.set_image(image)
open_button.connect("clicked", self.select_path_cb, self, entry)
table.attach(open_button, 9, 10, 0, 1)
self.image_table = HobViewTable(self.__columns__, "Images")
self.image_table.set_size_request(-1, 300)
self.image_table.connect("toggled", self.toggled_cb)
self.image_table.connect_group_selection(self.table_selected_cb)
self.image_table.connect("row-activated", self.row_actived_cb)
self.vbox.pack_start(self.image_table, expand=True, fill=True)
self.show_all()
def change_image_cb(self, model, path, columnid):
if not model:
return
iter = model.get_iter_first()
while iter:
rowpath = model.get_path(iter)
model[rowpath][columnid] = False
iter = model.iter_next(iter)
model[path][columnid] = True
def toggled_cb(self, table, cell, path, columnid, tree):
model = tree.get_model()
self.change_image_cb(model, path, columnid)
def table_selected_cb(self, selection):
model, paths = selection.get_selected_rows()
if paths:
self.change_image_cb(model, paths[0], 1)
def row_activated_cb(self, tab, model, path):
self.change_image_cb(model, path, 1)
self.emit('response', gtk.RESPONSE_YES)
def select_path_cb(self, action, parent, entry):
dialog = gtk.FileChooserDialog("", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER)
text = entry.get_text()
dialog.set_current_folder(text if len(text) > 0 else os.getcwd())
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
entry.set_text(path)
self.image_folder = path
self.fill_image_store()
dialog.destroy()
def fill_image_store(self):
self.image_list = []
self.image_store.clear()
imageset = set()
for root, dirs, files in os.walk(self.image_folder):
# ignore the subdirectories
dirs[:] = []
for f in files:
for image_type in self.image_types:
if image_type in self.image_extension:
real_types = self.image_extension[image_type]
else:
real_types = [image_type]
for real_image_type in real_types:
if f.endswith('.' + real_image_type):
imageset.add(f.rsplit('.' + real_image_type)[0].rsplit('.rootfs')[0])
self.image_list.append(f)
for image in imageset:
self.image_store.set(self.image_store.append(), 0, image, 1, False)
self.image_table.set_model(self.image_store)
def response_cb(self, dialog, response_id):
self.image_names = []
if response_id == gtk.RESPONSE_YES:
iter = self.image_store.get_iter_first()
while iter:
path = self.image_store.get_path(iter)
if self.image_store[path][1]:
for f in self.image_list:
if f.startswith(self.image_store[path][0] + '.'):
self.image_names.append(f)
break
iter = self.image_store.iter_next(iter)
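A usage sketch (folder, types, and titles are illustrative; image_extension maps a logical image type such as "live" to the real file extensions it produces, matching the lookup in fill_image_store):

    dialog = ImageSelectionDialog("/path/to/deploy/images", ["live", "ext3"],
                                  "Select from my image files", main_window,
                                  gtk.DIALOG_MODAL,
                                  image_extension={"live": ["hddimg", "iso"]})
    button = dialog.add_button("Select", gtk.RESPONSE_YES)
    HobButton.style_button(button)
    response = dialog.run()
    if response == gtk.RESPONSE_YES:
        chosen_files = dialog.image_names
    dialog.destroy()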

Some files were not shown because too many files have changed in this diff.