Compare commits


346 Commits

Author SHA1 Message Date
Richard Purdie
6caa7d1d6c bitbake: command: Fix getCmdLineAction bugs
Executing "bitbake" doesn't get a sane message since the None return value
wasn't being handled correctly. Also fix msg -> cmd_action['msg'] as
otherwise an invalid variable is accessed which then crashes the server
due to the previous bug.

(Bitbake rev: c6211291ae07410832031a5274690437cc2b09a6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:14 +00:00
Richard Purdie
2a69358610 bitbake: command: Add missing import traceback
Without this, if an exception occurs the server will silently crash
with no feedback to the user about why (since traceback isn't imported).

(Bitbake rev: e637a635bf7b5a9a2e9dc20afc18aceec98d578f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:06 +00:00
Christopher Larson
f3acb135e7 bitbake: command: add error to return of runCommand
Currently, command.py can return an error message from runCommand, due to
being unable to run the command, yet few of our UIs (just hob) can handle it
today. This can result in seeing a TypeError with traceback in certain rare
circumstances.

To resolve this, we need a clean way to get errors back from runCommand,
without having to isinstance() the return value. This implements such a thing
by making runCommand also return an error (or None if no error occurred).

As runCommand now has a method of returning errors, we can also alter the
getCmdLineAction bits such that the returned value is just the action, not an
additional message. If a sync command wants to return an error, it raises
CommandError(message), and the message will be passed to the caller
appropriately.

Example Usage:

    result, error = server.runCommand(...)
    if error:
        log.error('Unable to run command: %s' % error)
        return 1
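
A minimal sketch of the corresponding command-side pattern (the function
body and key names here are illustrative, not the actual command.py code):

    from bb.command import CommandError  # assumed import path

    def getCmdLineAction(self, params):
        # A sync command now returns just the action and raises
        # CommandError to report a problem back to runCommand().
        cmd_action = self.cooker.getCmdLineAction(params)  # hypothetical call
        if cmd_action is None:
            raise CommandError("No command line action given")
        if cmd_action.get('msg'):
            raise CommandError(cmd_action['msg'])
        return cmd_action['action']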

(Bitbake rev: 717831b8315cb3904d9b590e633000bc897e8fb6)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:58 +00:00
Saul Wold
878ef6a31c libtasn1: Upgrade to 2.13
(From OE-Core rev: 94c375a281378413d24a402ec6a59762d0eb5b85)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Nitin A Kamble
f49e7629df libtasn1: fix build with automake 1.12
(From OE-Core rev: 6fb4913eb237062303bcda50e9910f53dc95d0dd)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Saul Wold
e124ac50dc libtasn1: Update to 2.12
Use GNU_MIRROR correctly

(From OE-Core rev: d02a682360d803f9b5f033ddc5d0f43020eebd13)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Martin Jansa
498951a247 bison: move remove-gets.patch to BASE_SRC_URI, it's needed for bison-native too if host has (e)glibc-2.16
(From OE-Core rev: cae95a527c1e9faefc0c051254e67dad7fad4197)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Elizabeth Flanagan
3456295898 build-appliance-image: Bump SRCREV
With the pending point release for denzil we need to point
to the release revision and the correct branch.

(From OE-Core rev: 0a9e8bf35afd5990c1b586bba5eb68f643458a4b)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:23:43 +00:00
Elizabeth Flanagan
c917351c9b DISTRO var bump for pending release
With the pending 1.2.2 release we need to bump distro vars.

(From meta-yocto rev: f9b4864a7fb4f25df74f1bf3dc1d55e72bd27fc1)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:21:17 +00:00
Tom Zanussi
f3db1cd8ff yocto-bsp: set branches_base for list_property_values()
yocto_bsp_list_property_values() is missing the context it needs to
properly filter choicelists, so add it to the context object.

Fixes [YOCTO #3233]

(From meta-yocto rev: 064b15f76c5b52899f4c3fdef06412c3063062a5)

(From meta-yocto rev: 601b6227908f3dd7972ad62c53d1041f4429aeb2)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:37:08 +00:00
Scott Rifenbark
b0689417df documentation: Formats. Also, the January 2013 date summary
I went into the chapters and did some formatting in order
to generate a new commit.  The commit summary message for
the previous commit was wrong and I pushed it.  The date
for the 1.2.2 release is January 2013.

(From yocto-docs rev: 457549a44cb7871d5c645f5aab02350cf76b6f1f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
270402dd38 documentation: Updated manual history tables for Feb 2013
The release date for the five manuals was updated to
Feb 2013 for the 1.2.2 release.

(From yocto-docs rev: 2110815be55bddbfd24495aad7b8d5e2b69f3475)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
5d30ffd8f8 documentation: Added February 2013 as release date for 1.3.1
I added this date as the release date for the five manuals
that have a manual history table.

(From yocto-docs rev: 5f107aab8bd2de0be78163eaf356656ddae4bf5f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
c2c8bb40e8 poky.ent: Updated to remove "current" and replace with 1.2.2
(From yocto-docs rev: ad61c64d5b33ca3b7aa02f67934c7c2317d8cbe1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
625044e9a8 documentation: Updated Manual History table for 1.2.2 release
Involved putting in a placeholder date, bumping the copyright
date to 2013, and updating the poky.ent file as appropriate
for 1.2.2 and 7.0.2.

(From yocto-docs rev: 0a76733066b3440809ecafce756c5fdb4eafaae6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
d571e5a68a Documentation: ref-manual - Updated LIC_FILES_CHKSUM example.
One of the examples used "startline" instead of "beginline".
Correction made.

(From yocto-docs rev: 59345ad197619280bef7a469d671feae80f0c4e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
8e078932c3 documentation: poky-ref-manual - Fixed grammar typo.
(From yocto-docs rev: 2660f17b79a36772081e37117be85ee304b78f20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Garman
151d4fbc4e cups: patch for CVE-2011-2896
Patch from: http://cups.org/strfiles/3867/str3867.patch

The LZW decompressor in the LWZReadByte function in giftoppm.c in the
David Koblas GIF decoder in PBMPLUS, as used in the gif_read_lzw
function in filter/image-gif.c in CUPS before 1.4.7, the LZWReadByte
function in plug-ins/common/file-gif-load.c in GIMP 2.6.11 and earlier,
the LZWReadByte function in img/gifread.c in XPCE in SWI-Prolog 5.10.4
and earlier, and other products, does not properly handle code words
that are absent from the decompression table when encountered, which
allows remote attackers to trigger an infinite loop or a heap-based
buffer overflow, and possibly execute arbitrary code, via a crafted
compressed stream, a related issue to CVE-2006-1168 and CVE-2011-2895.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2896

[YOCTO #3582]
[ CQID: WIND00299595 ]

(From OE-Core rev: f4aca76c7933abf2771999c309d49ab91a3d9480)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Merged with denzil branch, partial fix for denzil bug [YOCTO #3652]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Li Wang
caa1d03089 librsvg: CVE-2011-3146
Store node type separately in RsvgNode

commit 34c95743ca692ea0e44778e41a7c0a129363de84 upstream

The node name (formerly RsvgNode:type) cannot be used to infer
the sub-type of RsvgNode that we're dealing with, since for unknown
elements we put type = node-name. This led to a (potentially exploitable)
crash e.g. when the element name started with "fe" which tricked
the old code into considering it as a RsvgFilterPrimitive.
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3146

https://bugzilla.gnome.org/show_bug.cgi?id=658014

[YOCTO #3581]
[ CQID: WIND00376773 ]
Upstream-Status: Backport

(From OE-Core rev: 6d030fcb69221da073ce413049deb8447934bed5)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts with denzil branch.

Fixes denzil bug [YOCTO #3651].

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
yanjun.zhu
a86e32a18b squashfs: fix CVE-2012-4025
CQID:WIND00366813

Reference: http://squashfs.git.sourceforge.net/git/gitweb.cgi?
p=squashfs/squashfs;a=patch;h=8515b3d420f502c5c0236b86e2d6d7e3b23c190e

Integer overflow in the queue_init function in unsquashfs.c in
unsquashfs in Squashfs 4.2 and earlier allows remote attackers
to execute arbitrary code via a crafted block_log field in the
superblock of a .sqsh file, leading to a heap-based buffer overflow.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4025

(From OE-Core rev: e6fddd1961061895e9335fa94b636163efdc9caa)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>

[YOCTO #3564]
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
156c2554b7 freetype: patches for CVE-2012-5668, 5669, and 5670
For details of these security issues, please see:

http://www.openwall.com/lists/oss-security/2012/12/25/1

Thanks to Eren Turkay <eren@hambedded.org> for submitting source
patches that apply cleanly to freetype 2.4.9.

This fixes denzil bug [YOCTO #3649]

(From OE-Core rev: be34916d81b71385a560a6990c7b30eba243b356)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
b6037b6d2f libxml2: patch for CVE-2012-2871
The patch comes from:
http://src.chromium.org/viewvc/chrome/trunk/src/third_party/libxml/ \
src/include/libxml/tree.h?r1=56276&r2=149930

libxml2 2.9.0-rc1 and earlier, as used in Google Chrome before
21.0.1180.89, does not properly support a cast of an unspecified
variable during handling of XSL transforms, which allows remote
attackers to cause a denial of service or possibly have unknown other
impact via a crafted document, related to the _xmlNs data structure in
include/libxml/tree.h.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-2871

[YOCTO #3580]
[ CQID: WIND00376779 ]

(From OE-Core rev: fa3d44594360786b2526d64f0ea5bc26b44a1fa8)

Signed-off-by: Li Wang <li.wang at windriver.com>

This fixes denzil bug [YOCTO #3648]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
cf77faba91 gitignore: add generated doc files to ignore list
(From OE-Core rev: c067fbcb910f888cc6328d725a395ce681862377)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Richard Purdie
88b65c79ac boot-directdisk: Fix kernel location after STAGING_KERNEL_DIR change
This catches up with the STAGING_KERNEL_DIR location change
and uses the correct variable to future proof this issue.

[YOCTO #2783]

(From OE-Core rev: 28715eff6dff3415b1d7b0be8cbb465c417e307f)

(From OE-Core rev: f02a7341e37aec155772e1546d8b21ef2c9f5e9d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
bfae0622b5 build-appliance-image: Allow SRCREV to be overridden
This will allow us to automagically set the SRCREV for builds on the
autobuilder. It will still require manual updating for releases.

(From OE-Core rev: 1b4781e5c6eee234fcf57dd53d5167b31d81a482)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
01c1421270 psplash: new patch to fix segfault
This fixes a segmentation fault when passing -a without
an argument.

Fixes [YOCTO #2903]

(From OE-Core rev: f5b8ba5e51ac41cf375119a88083617f667a85d5)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Mihai Lindner
2ccb03f9b7 sysklogd: removed tabs from syslog.conf
Yocto #2926: syslog.conf should not have tabs within the selector field.
Removed tabs from the selector field of syslog rules. Tabs or spaces
should be used, in syslog.conf, only when separating selectors from
actions.

(From OE-Core rev: 1316be4e597332a629842b3f5a7dde8e45dd057d)

(From OE-Core rev: c806466c8d4a9d0d4a66d34d3565d5879c2f2b0f)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
07f304f4e2 grub,guile,cpio,tar,wget: Fix gnulib for absence of gets in eglibc
eglibc 2.16 does not export gets anymore

(From OE-Core rev: 043d67c6677fa87496c4c441e9d366e2003ab9aa)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported guile
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
db915496d3 bison: Fix for gets being removed from eglibc 2.16
(From OE-Core rev: bc91a267d097c100480ea02ece7fb372167eaf7f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
8fe4344e74 gettext,m4,augeas,gnutls: Account for removal of gets in eglibc 2.16
These recipes use gnulib, which needs this change to use gets when it's
defined and not otherwise. Until that change goes into gnulib and all of
these packages upgrade gnulib in their source base, we patch them.

(From OE-Core rev: b955f1a7bc716055c78ed575eccac6f611dc2395)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported gnutls
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
3c57cb356e diffutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: b6bcd4e26e94364939c8874db90e64fbb242e841)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
38d6972032 coreutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: 965243ab5b5d992977193c444dbbbf09701467c2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Scott Garman
ef745cb34f poky.conf: Add Ubuntu 12.04.1 LTS to SANITY_TESTED_DISTROS
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 16:02:04 +00:00
yanjun.zhu
a6f0dcbe7d squashfs: fix for CVE-2012-4024
Reference:http://squashfs.git.sourceforge.net/git/gitweb.cgi?p=
squashfs/squashfs;a=commit;h=19c38fba0be1ce949ab44310d7f49887576cc123

Fix potential stack overflow in get_component() where an individual
pathname component in an extract file (specified on the command line
or in an extract file) could exceed the 1024 byte sized targname
allocated on the stack.

Fix by dynamically allocating targname rather than storing it as
a fixed size on the stack.

[YOCTO #3513]

Fixes denzil [YOCTO #3520]

(From OE-Core rev: d35560f33f257bd12a07c7c0be770319086d6ad9)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Marcin Juszkiewicz
cf796f8908 libxml: disable lzma
On my system libxml-native got linked with the host copy of liblzma and as a
result libxslt-native was not linkable:

| x86_64-linux-libtool: link: gcc -isystem/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/include -O2 -pipe -Wall -Wl,-rpath-link -Wl,/home/hrw
/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath-link -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-
linux/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/buil
d/tmp-eglibc/sysroots/x86_64-linux/lib -Wl,-O1 -o .libs/xsltproc xsltproc.o  -L/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -L/home/hrw/
HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/lib ../libxslt/.libs/libxslt.so ../libexslt/.libs/libexslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/
build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/libxslt/.libs/libxslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux
/usr/lib/libxml2.so -ldl /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/liblzma.so -lrt -lz -lm -pthread -Wl,-rpath -Wl,/home/hrw/HDD/deve
l/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_code@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_auto_decoder@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_end@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_properties_decode@XZ_5.0'
| collect2: error: ld returned 1 exit status
| make[2]: *** [xsltproc] Error 1
| make[2]: Leaving directory `/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/xsltproc'

(From OE-Core rev: 42e03215cc494f1508b96c2bb63243a02e5ef812)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Saul Wold
e416fb6920 libxml2: Update to 2.8.0
removed 2 patches that are now fixed upstream
updated hash.c LIC_FILES_CHKSUM due to updating the date to 2012

(From OE-Core rev: c74ed920d3a9a0e379f8fd450e2841628ee0beb2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts in denzil branch.

Addresses CVE-2011-1944.

Fixes denzil [YOCTO #2703]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Richard Purdie
aa66dc91d3 libxml2/libxslt: Don't depend on ansidecl.h header
We don't DEPEND on binutils for ansidecl.h, so ensure we never use the
header. This makes builds deterministic and means something like:

bitbake binutils
bitbake libxml2 -c configure
bitbake binutils -c clean
bitbake libxml2

doesn't fail to build.

(From OE-Core rev: 54d27bbc26d1e45e51ee8ef0f051a2bd8f627cc0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Nitin A Kamble
b45184aef3 libxml2: fix build with automake 1.12
(From OE-Core rev: dd1b77c473ee92608ad0a5bdbea0880d2f613c2c)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Phil Blundell
0e43be806d openssl: Use ${CFLAGS} not ${FULL_OPTIMIZATION}
The latter variable is only applicable for target builds and could
result in passing incompatible options (and/or failing to pass
required options) to ${BUILD_CC} for a virtclass-native build.

(From OE-Core rev: d5a99f3dab07fa676788b434e18174c0798d4460)

Signed-off-by: Phil Blundell <philb@gnu.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Scott Garman
d6935247dd openssl: upgrade to 1.0.0j
Addresses CVE-2012-2333

Fixes [YOCTO #2682]

Fixes denzil [YOCTO #2701]

(From OE-Core rev: cf84ebac391b243099fe0d05223433ecb8e71641)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
yanjun.zhu
b8f6d9cbfd libproxy: Fix for CVE-2012-4504
Reference:https://code.google.com/p/libproxy/source/detail?r=853

Stack-based buffer overflow in the url::get_pac function in url.cpp
in libproxy 0.4.x before 0.4.9 allows remote servers to have an
unspecified impact via a large proxy.pac file.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4504

[YOCTO #3487]

Fixes denzil [YOCTO #3511]

(From OE-Core rev: 543d608ae6251956b84e6423ec66f146f926d4b8)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Martin Jansa
2288ff099c opkg-utils: bump SRCREV to latest
(From OE-Core rev: 119215fee75a64de49d498c3d57446783722a292)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Andrei Gherzan
93cc23571e opkg-utils: Add needed python modules as RDEPENDS
(From OE-Core rev: dadfb4914b25a970c61e7f2354c01086d4823fd6)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
95f2a5b635 rootfs_rpm.bbclass: save rpmlib rather than remove it
The rpmlib was removed for images that add "remove_packaging_data_files"
to ROOTFS_POSTPROCESS_COMMAND, which broke incremental rpm image
generation on the second build, since list_installed_packages would then
return incorrect values. Move the rpmlib to ${T} rather than removing it,
and move it back when INC_RPM_IMAGE_GEN = "1".
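
A rough Python sketch of the save/restore behaviour described above (the
class itself does this in shell; the rpm database path and variable
handling here are assumptions, not the actual rootfs_rpm.bbclass code):

    import os
    import shutil

    def save_or_restore_rpmlib(d, saving):
        rootfs = d.getVar("IMAGE_ROOTFS", True)
        rpmlib = os.path.join(rootfs, "var/lib/rpm")  # assumed location
        saved = os.path.join(d.getVar("T", True), "saved_rpmlib")
        if saving:
            # Stash the rpm database under ${T} instead of deleting it.
            shutil.move(rpmlib, saved)
        elif d.getVar("INC_RPM_IMAGE_GEN", True) == "1" and os.path.exists(saved):
            # Restore it so the second incremental build sees correct data.
            shutil.move(saved, rpmlib)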

[YOCTO #2690]

(From OE-Core rev: c30e79510c06701f10f659eedaa0fe785538ac17)

(From OE-Core rev: 15e13ea1fc8a0c29b4ca68c31c83ca013c89c36e)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
75997b4565 package_rpm.bbclass: Fix incremental rpm image generation
Fix the incremental rpm image generation; it didn't work since the code
had been changed.

The btmanifest should have a ".manifest" suffix, so that it can be moved
to ${T} by rootfs_rpm.bbclass:
mv ${IMAGE_ROOTFS}/install/*.manifest ${T}/

Note: The locale pkgs would always be re-installed.

[YOCTO #2690]

(From OE-Core rev: 5149630746626c6d416f26ab9dd1c7213fcd8c50)

(From OE-Core rev: 1f5113ae91ed639cf06fcbb9431b460d7a06bbbc)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Roy.Li
970d279774 bitbake: compile tar-replacement first
Whether tar-replacement is compiled is decided by the version of the host
tar: if the host tar version is lower than 1.23, compiling tar-replacement
is needed.

While populating the tar-replacement sysroot, the tar binary is written
into the sysroot; before that write has finished, other packages may use
the partially written tar to unpack files, which leads to failures such as:
"bitbake_build/tmp/sysroots/x86_64-linux/usr/bin/tar: Text file busy"

Now we compile tar-replacement first to ensure that a partially written
tar command will never be used.

(From OE-Core rev: 3c1c4719fc96f6f1fbb257413d6baf3d91fdf4e8)

(From OE-Core rev: 1a6f61d9493bdbade256dc6c19bbffe75a2684a4)

Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Joe Slater
40e6fc6a65 gettext: install libgettextlib.a before removing it
In a multiple-job build, the Makefile can simultaneously
be installing and removing libgettextlib.a.  We serialize
the operations.

(From OE-Core rev: 2750546b2152eecdbb37e963a2495383f6944184)

(From OE-Core rev: 500c9c1e0047ba9f35e3591f4252fe2dd38bc4f1)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Paul Eggleton
e9c2218231 classes/qmake_base: support linux-gnuspe/linux-uclibcspe TARGET_OS
Fix borrowed from OE-Classic. This should fix build failures during
do_configure of Qt applications with the p1022ds machine from
meta-fsl-ppc, for example.

(From OE-Core rev: a19fc8e19a6cc6885a1e0616b1f42cc49c8f2c9f)

(From OE-Core rev: 0baef81f0ebf854b3e3e78b0d3745cc8ad41491e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Ross Burton
84399e189b gst-plugins-good: disable (uninstalled) examples
The examples pull in a GTK+ build dependency, so remove that too.

(From OE-Core rev: f6975629fd5aa34bf423415bf2328e2146a6e675)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:15 +00:00
Paul Eggleton
846b7c3887 bitbake: lib/bb/siggen.py: log when tainting the signature of a task
Log a note when applying a taint to a task signature (e.g. when using
the -f or -C command line options) so that the user knows this has been
done.

(Bitbake rev: 0fd960fdea83874eedb541cbc2920257e0f3fb81)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:48 +01:00
Paul Eggleton
732007cbb6 bitbake: bitbake: ensure -f causes dependent tasks to be re-run
If -f is specified, force dependent tasks to be re-run next time. This
works by changing the force behaviour so that instead of deleting the
task's stamp, we write a "taint" file into the stamps directory, which
will alter the taskhash randomly and thus trigger the task to re-run
next time we evaluate whether or not that should be done as well as
influencing the taskhashes of any dependent tasks so that they are
similarly re-triggered. As a bonus because we write this file as
<stamp file name>.taskname.taint, the existing code which deletes the
stamp files in OE's do_clean will already handle removing it.

This means you can now do the following:

bitbake somepackage
[ change the source code in the package's WORKDIR ]
bitbake -c compile -f somepackage
bitbake somepackage

and the result will be that all of the tasks that depend on do_compile
(do_install, do_package, etc.) will be re-run in the last step.

Note that to operate in the manner described above you need full hashing
enabled (i.e. BB_SIGNATURE_HANDLER must be set to a signature handler
that inherits from BasicHash). If this is not the case, -f will just
delete the stamp for the specified task as it did before.
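
A minimal sketch of what writing such a taint file might look like (names
are illustrative, not the actual BitBake implementation):

    import uuid

    def taint_task(stampbase, taskname):
        # Write a random value next to the stamp; because it feeds into
        # the taskhash, the task and everything depending on it re-runs.
        taintfn = "%s.%s.taint" % (stampbase, taskname)
        with open(taintfn, "w") as f:
            f.write(str(uuid.uuid4()))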

This fix is required for [YOCTO #2615] and [YOCTO #2256].

(Bitbake rev: f7b55a94226f9acd985f87946e26d01bd86a35bb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:34 +01:00
Martin Jansa
9945923c8b openssl: add deprecated and unmaintained find.pl from perl-5.14 to fix perlpath.pl
* openembedded-core/meta/recipes-connectivity/openssl/openssl.inc
*
* is using perlpath.pl:
*
*   do_configure () {
*           cd util
*           perl perlpath.pl ${STAGING_BINDIR_NATIVE}
*   ...
*
* and perlpath.pl is using find.pl:
* openssl-1.0.0i/util/perlpath.pl:
*   #!/usr/local/bin/perl
*   #
*   # modify the '#!/usr/local/bin/perl'
*   # line in all scripts that rely on perl.
*   #
*
*   require "find.pl";
*   ...
*
* which was removed in perl-5.16.0 and marked as deprecated and
* unmaintained in 5.14 and older:
* /tmp/usr/lib/perl5/5.14.2/find.pl:
*   warn "Legacy library @{[(caller(0))[6]]} will be removed from the Perl
*   core distribution in the next major release. Please install it from the
*   CPAN distribution Perl4::CoreLibs. It is being used at @{[(caller)[1]]},
*   line @{[(caller)[2]]}.\n";
*
*   # This library is deprecated and unmaintained. It is included for
*   # compatibility with Perl 4 scripts which may use it, but it will be
*   # removed in a future version of Perl. Please use the File::Find module
*   # instead.

(from OE-Core rev c09bf5d177a7ecd2045ef7e13fff4528137a9775)

(From OE-Core rev: c15fae372cf75403facc28cf76f973b1279425dd)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:09 +01:00
Dennis Lan
9bf253f467 openjade-native: fix undefined Getopts error, use std namespace
Using Gentoo Linux as the build host, the build fails without this patch.
Use Getopt::Std in place of getopts.pl.

https://bugs.gentoo.org/show_bug.cgi?id=420083

It fails with the following error:
/usr/bin/perl -w ./../msggen.pl -l jstyleModule InterpreterMessages.msg
/usr/bin/perl -w ./../msggen.pl -l jstyleModule DssslAppMessages.msg
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [InterpreterMessages.h] Error 2
make[2]: *** Waiting for unfinished jobs....
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [DssslAppMessages.h] Error 2

(from OE-Core rev 169a89b10817b742c063fcd76721e4dbbcca6199)

(From OE-Core rev: 7c7dcb05685d840c70474d409f6a58ae459c46f0)

Signed-off-by: Dennis Lan <dennis.yxun@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Richard Purdie
d156f75fea siteconfig: Clear cache before rebuilding
This ensures consistent build results and avoids build failures when, for
example, compiler flags change.

(From OE-Core rev: a5ff8396cad130f809f8f8da49bb38e6f80f923c)

(From OE-Core rev: b21d1daf709ddce14c93a5f16c91ff702e1cb7ff)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Darren Hart
809d97b938 gnutls: Update SRC_URI to use GNU_MIRROR
The current SRC_URI fails. Update it with the GNU_MIRROR SRC_URI from
upstream commit 753b22012f10c393c191d3116b9d38ee4be6d112.

(From OE-Core rev: 8430748e838872b22fe0e83a7dbf3a2a5b1faba2)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: John Howard <john.howard@intel.com>
CC: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Paul Eggleton
e5a85ac2e4 classes/cml1: ensure -c menuconfig forces a rebuild next time
Ensure the following results in the kernel being rebuilt, repackaged and
re-deployed in the final step:

bitbake virtual/kernel
bitbake -c menuconfig virtual/kernel
[ make changes to the kernel configuration and save ]
bitbake virtual/kernel

If there are no changes to the configuration saved, the rebuild will not
be triggered.

Note that this relies on a function recently added to BitBake and
requires full hashing (i.e. BB_SIGNATURE_HANDLER must be set to a
signature handler that inherits from BasicHash) - if this is not the
case or the function is not available in the version of BitBake being
used this change will do nothing.
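
A sketch of the idea, assuming a BitBake helper along the lines of
bb.build.write_taint(); everything else here is illustrative rather than
the actual cml1.bbclass code:

    import os
    import bb

    def force_rebuild_if_config_changed(d, config_path, old_mtime):
        # If the configuration was saved after menuconfig started, taint
        # do_compile so it and its dependent tasks re-run next build.
        if os.path.getmtime(config_path) > old_mtime:
            bb.note("Configuration changed, recompile will be forced")
            bb.build.write_taint("do_compile", d)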

Fixes [YOCTO #2256].

(From OE-Core rev: 9bf6b60e1599cf5dd87089d42584583cdfd6807a)

(From OE-Core rev: a9600e68e64a111be4cb934e14b914fa553b5654)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Jesse Zhang
b4bb378261 udev: don't mount with -o sync
mount.sh mounts all partitions with -o sync, which is bad for system
performance.

(From OE-Core rev: d49cf73754150b50a911d326aaa666f5da78855c)

(From OE-Core rev: 44c102386c9bca17743d2edd1f94d4071974204d)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:07 +01:00
Bogdan Marinescu
e7d4eba0a9 Save proxy settings
Proxy settings were not properly saved between Hob runs. This
fix is mostly a backport of code in master.

[YOCTO #3024]

Signed-off-by: Bogdan Marinescu <bogdan.a.marinescu@intel.com>
2012-10-01 18:13:15 +01:00
Paul Eggleton
aad5c9f699 bitbake: hob: format error messages properly
Error messages that use arguments need to be formatted properly, or we
don't get the full message. Use a formatter to do this when an error
occurs.
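
One illustrative way to do this with the standard logging machinery (a
sketch, not the actual Hob code):

    import logging

    formatter = logging.Formatter("%(levelname)s: %(message)s")

    def full_error_text(record):
        # format() expands record.msg % record.args, so the arguments are
        # included instead of being silently dropped.
        return formatter.format(record)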

Partial fix for [YOCTO #2983].

(Bitbake rev: 6783538884adecd914909a9ab4ca73c27575f3ad)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:13:05 +01:00
Matthew McClintock
42d9652fb1 poky.conf: add distros that work correctly
All the builds pass for these distros as well

atom-pc, beagleboard, mpc8513e-rdb, routerstationpro

As well as other powerpc targets:

p1022ds, p4080ds, p5020ds, p5020ds-64b

Signed-off-by: Matthew McClintock <msm@freescale.com>
2012-10-01 18:12:48 +01:00
Paul Eggleton
4e09d164d9 bitbake: hob: ensure error message text is properly escaped
Our lack of markup escaping was causing invalid markup, leading to the
error dialog being blank. Use the glib markup escaping function provided
by PyGTK to do this properly and avoid the blank error dialogs.
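
A small sketch of the pattern, assuming PyGTK's glib.markup_escape_text();
the dialog construction is illustrative, not the actual Hob code:

    import glib
    import gtk

    def show_error_dialog(parent, message):
        dialog = gtk.MessageDialog(parent, gtk.DIALOG_MODAL,
                                   gtk.MESSAGE_ERROR, gtk.BUTTONS_CLOSE)
        # Escape '<', '>' and '&' so the text cannot form invalid Pango
        # markup, which previously left the dialog blank.
        dialog.set_markup(glib.markup_escape_text(message))
        dialog.run()
        dialog.destroy()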

Partial fix for [YOCTO #2983].

(Bitbake rev: 563ea5233a5ab1629c51e802d04280692f96c596)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:12:32 +01:00
Darren Hart
d567e770c3 bootimg: Use STAGING_KERNEL_DIR
bootimg.bbclass was using STAGING_DIR_HOST/kernel instead of
STAGING_KERNEL_DIR, resulting in build failures of live images.

| install: cannot stat `/usr/local/dev/yocto/fishriver-test/build/tmp/sysroots/fishriver/kernel/bzImage': No such file or directory

Replace it with STAGING_KERNEL_DIR.

(From OE-Core rev: 8f16811a8d51982a8b3d70e6087aef4a41926840)

(From OE-Core rev: 032bd9a856f9ca0b43dff272bd4f95481aa46597)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Tested-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
a5f5b1e80a texi2html: Fix perl location on recent distros
This fixes errors like:
| error: Failed dependencies:
|       /bin/perl is needed by texi2html-5.0-r1.i586

(From OE-Core rev: d4c27021ffc813732526ab9ae6969e5ae0bdf7e8)

(From OE-Core rev: f28dcaf565050d5c857c3d09164104410a2e4173)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
d47234d4c9 dbus: Ensure dbus-nativesdk doesn't RPROVIDE dbus-x11
dbus-nativesdk should not RPROVIDE dbus-x11, as this is incorrect and
confuses builds. This fixes the nativesdk case.

(From OE-Core rev: f4cc32585f9ac392460991b46b8cfa7a347a27e6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Radu Moisan
3cdd930eeb dbus: include dbus-launch in the main dbus package
Followed suggestions from Bugz 2261:

2) make the virtual/libx11 DEPENDS conditional based on the x11 distro feature.
This makes the build dependencies reflect the feature list.

3) remove dbus-x11, meaning that dbus-launch with its potential X11 dependency
is now back in dbus where it belongs.

4) make dbus provide dbus-x11, for compatibility.

Fixes [Yocto #2261]

(From OE-Core rev: 9bf6d834312581e8b8741fb9d1621e4c40de5687)

Signed-off-by: Radu Moisan <radu.moisan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Franklin S. Cooper Jr
cfd36177f7 u-boot: Use fw_env.config if available
* Add support for a board-specific fw_env.config file if available

(From OE-Core rev: 4bc2151d6bb500b0489bc00bce7574dc24f41b90)

Signed-off-by: Franklin S. Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Ross Burton
1ecc27c20c pulseaudio: remove ConsoleKit dependency
ConsoleKit is a runtime dependency for the ConsoleKit module, but there isn't a
build-time dependency.

(From OE-Core rev: ebfc81f57bbc60e958472d9a1257e6a19f60adbb)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Martin Jansa
82114edb4a pulseaudio: fix pulseaudio-server RDEPENDS
* module-cork-music-on-phone was renamed to module-role-cork
  http://cgit.freedesktop.org/pulseaudio/pulseaudio/commit/?id=3c5cc345472302b9511c19244b3eceb4a3674d8c

(From OE-Core rev: b102849a145ca6602ac8e499b1420672d290c11b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Khem Raj
a62752bb4f pulseaudio: Always enable NLS
When NLS is disabled, e.g. on uclibc, the build fails.
The actual problem is that the pulseaudio build system
should cater for this, but it does not.

(From OE-Core rev: b7d10637059b2352bcca45bc15b26d0dd056e78f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Constantin Musca
7ffad91e79 pulseaudio: upgrade to 2.1
(From OE-Core rev: 540093fd9f52c86e6803554e2796668227bb89b5)

Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Xin Ouyang
1e2d6bffc5 libatomics-ops: update to the latest version 7.2
All old patches are dropped because:

Merged into 7.2 by upstream:
* fedora/libatomic_ops-1.2-ppclwzfix.patch
* gentoo/libatomic_ops-1.2-mips.patch
* gentoo/sh4-atomic-ops.patch
* libatomics-ops_fix_for_x32.patch

Obsolete:
* doublefix.patch

(From OE-Core rev: 59afdbbddbacf5d9c668bb8f011c8f150421d498)

Signed-off-by: Xin Ouyang <Xin.Ouyang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
cb1c31939d libcanberra: upgrade to 0.29
(From OE-Core rev: 074c34617a361a589665774ac8d9060a4ef4ef82)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
bec76a3813 pulseaudio: upgrade to 2.0
(From OE-Core rev: 961872787ac2c2b18d4589967e68a60ab3a4cc86)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	meta/recipes-multimedia/pulseaudio/pulseaudio_2.0.bb
2012-09-28 16:53:16 +01:00
Denys Dmytriyenko
ead623b2a0 pulseaudio: fix typo in the patch name, pulseaudo -> pulseaudio
No PR bump is needed.

(From OE-Core rev: ac3edffa2586064ff480b89e80b608f14e566fa7)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Khem Raj
b973813796 libatomics-ops: Make it build for SH4
(From OE-Core rev: fc47820982aea41f2b0fdd4d87fb0242bf7346dd)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
8941d5aa3b pulseaudio: disable tcpwrap by default
This ensures that tcpwrapper usage is always disabled; previously this was
inconsistent because configure would test for libwrap and sometimes enable
it and sometimes not.

This ensures consistent build reproducibility.

(From OE-Core rev: 5b3d18d12fff156d4d360b779eb4ae78a480ce10)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
f6876adb6a bluez4: fix packaging issue after update
WARNING: QA Issue: bluez4: Files/directories were installed but not shipped
  /usr/share/dbus-1
  /usr/share/dbus-1/system-services
  /usr/share/dbus-1/system-services/org.bluez.service

(From OE-Core rev: 5c52472af73ed91970096eed3d216fdbc7ff42d2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
3c3614c562 bluez4: update to ver. 4.101
(From OE-Core rev: 6e6407e9a8c59cb51685c4b767b62eacb8dbf852)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
1c71304f81 gst-plugin-bluetooth: update to ver. 4.101
(From OE-Core rev: 6d1bbc26d27506e5dd5c32013ea5074c5c5a342d)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Dongxiao Xu
5be28549ae bluez-hcidump: upgrade to version 2.4
(From OE-Core rev: 4934e54821ed19fb98ea7691af6a2d3bcb1922f7)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Jonas Danielsson
bbe83b55cd bluez4: make alsa support conditional upon DISTRO_FEATURES
Do not enable alsa in bluez4 unless it's included in DISTRO_FEATURES.

(From OE-Core rev: f6297d648b1464719d1e1e42e99d473b69a13e56)

Signed-off-by: Jonas Danielsson <jonas.danielsson@lundinova.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
3c363a70aa dhcp: remove dependency of dev/staticdev packages on main package
The main package is empty and is not produced, which leaves the dev
and staticdev packages broken. Remove the dependencies (added in
bitbake.conf by default) to fix this.

(From OE-Core rev: 5380c65e819d82f783cb75aa21db7c73bb445189)

(From OE-Core rev: 02dc5c9b7b1f21c9f8d9a9299933fa88dc16c542)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
2fab410f2e classes/mirrors: remove bogus gnutls mirror
If matched, this mirror entry, which maps to itself plus a slash, puts the
fetcher into a circular loop until the stack space is exhausted. A patch
has been sent to fix this issue in BitBake, but we should remove the
bogus entry as well.

(Note that this entry does not actually trigger the issue with current
master because the gnutls recipe now uses GNU_MIRROR instead of
ftp.gnutls.org, thus the bogus mirror entry is not matched.)

(from OE-Core rev 0de1827a9601143b090f751ea702fdb65a936b77)

(From OE-Core rev: 31ec9690c37c3a57e557684cbf5e5a4069bd57b7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
4b8d430c1f sysvinit-inittab_2.88dsf.bb: only run serial checks at boot if we have items to check
Right now, we delay running the serial console checks until we boot up. This causes
issues for read-only file systems. So, if we have not configured any serial ports to
check via SERIAL_CONSOLES_CHECK, we can skip the check at boot. This fixes any
issues with read-only file systems and ipk packaging.

(From OE-Core rev: 2136030c1d240d9b8f123e3c8af5dacf66e86ab4)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Saul Wold
b0f05958fc kernel: Fix packaging issue
Remove /etc since it is empty. When building for a machine that does not
deliver any module config files, /etc is empty and then triggers a warning
about not being shipped, so we remove it.

This occurs in the routerstationpro with the following warning:
WARNING: For recipe linux-yocto, the following files/directories were installed but not shipped in any package:
WARNING:   /etc

(From OE-Core rev: 961498e3b4c4a93070bf278e67fc48c02333cd63)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
57cdce3108 valgrind_3.7.0.bb: fix missing leading space on _append
(From OE-Core rev: a175e09d1b0be85d8cbc58672485ec5ee5475ae2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Richard Purdie
241653a01b autotools.bbclass: Add functionality to force a clean of ${B} when reconfiguring (and ${S} != ${B})
Unfortunately, whilst rerunning configure and make against a project will
mostly work, there are situations where it does not do the right thing.
In particular, eglibc and gcc will fail out with errors where settings
do not match a previously built configuration. It could be argued they are
broken but the situation is what it is. There is the possibility of more subtle
errors too.

This patch adds removal of the build directory (${B}) when configure is
rerunning, the sstate checksum for do_configure has changed and ${S} != ${B}.
We could simply use a stamp but saving out the previous configuration checksum
adds some data at no real overhead.

If we find there are cases where we want to disable this behaviour, it can
be done with CONFIGURESTAMPFILE = "" in the recipe, or users could disable
it globally.
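
A rough Python sketch of the mechanism (S, B and CONFIGURESTAMPFILE are
real variables, but the logic is illustrative rather than the actual
autotools.bbclass code):

    import os
    import shutil

    def maybe_clean_builddir(d, current_sig):
        # current_sig is the do_configure signature for this run.
        stampfile = d.getVar("CONFIGURESTAMPFILE", True)
        if not stampfile:
            return
        previous = None
        if os.path.exists(stampfile):
            with open(stampfile) as f:
                previous = f.read().strip()
        s, b = d.getVar("S", True), d.getVar("B", True)
        if previous and previous != current_sig and s != b and os.path.isdir(b):
            # Configuration changed and we build out of tree: start clean.
            shutil.rmtree(b)
            os.makedirs(b)
        with open(stampfile, "w") as f:
            f.write(current_sig)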

[YOCTO #2774]
[YOCTO #2848]

This is particularly helpful for eglibc and gcc which use split builds by default and
are a particular source of reconfigure type problems.

(From OE-Core rev: f15f61af77cc4e52a037f509f8e49e1ea530cf35)

(From OE-Core rev: 14fc04e480aaf1cb5cd9d3a04a5b38d2fda115b1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Matthew McClintock
f6fb4890df eglibc_2.15: make patch only for Freescale machines
It's only Freescale machines that don't implement fsqrt; we don't want
this to affect others.

This patch was only added after the last release of denzil, so it's not
present in the release yet. Also, 2.15 is removed from master so it
should only apply to denzil branch

(From OE-Core rev: c541f746253fdb6d11cd961c7dff1aca8c2d2703)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Zhenhua Luo
b4de44f425 valgrind: fix debug info reading error when do memcheck on ppc targets
following is the error message:
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/ld-2.13.so:
        --2263-- Can't make sense of .got section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /home/root/lzh:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_core-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_memcheck-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/libc-2.13.so:
        --2263-- Can't make sense of .data section mapping

(From OE-Core rev: 14626cc76210ed6fe40316a311f24147ed8de8be)

(From OE-Core rev: a76be502fbb9517c38cd716fa1f21a238b314162)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b32c50e016 python-pygtk: Upgrade to 2.24
This is needed for the build appliance and Hob also

(From OE-Core rev: a0abfd60e8cb78b40278eec85a8d0c722f8ef1e4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b5013e9e9e build-appliance: add zip-native, which is needed to build the final zip bundle
(From OE-Core rev: 8aeceab5d03fa3c88f0128ce1ac6bfde0d88e1b6)

(From OE-Core rev: 9ab2613327fcd4cf811afc52eff92903644e9b11)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
0cd0a3b475 kernel.bbclass: put perf .debug dir in -dbg package
This is needed to avoid the following QA error:

ERROR: QA Issue: non debug package contains .debug directory:
kernel-dev path

Patch proposed by Matthew McClintock <msm@freescale.com>

(From OE-Core rev: df879f191d1e86596444cb30a0a77a785958520c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
825c647f65 relocatable.bbclass: Account for case when ORIGIN is in RPATH
This patch was backported from OE-Core rev:
43600df0d4efc976a9451163dd334b4763937932

This fixes a case where the RPATH embedded in a program already has one of
its paths relative to $ORIGIN. We were losing that path if such a path
existed. This patch appends it to the newly edited rpath being created
when we see it.

So an RPATH like the one below

(RPATH) Library rpath:
[$ORIGIN/../lib/amd64/jli:$ORIGIN/../jre/lib/amd64/jli]

would end up being empty

but after this patch it is kept intact
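
A small Python sketch of the behaviour described above (function and
argument names are illustrative, not the actual relocatable.bbclass code):

    def rewrite_rpath(old_rpath, new_entries):
        # Keep any pre-existing $ORIGIN-relative entries instead of
        # dropping them when the new rpath is assembled.
        entries = list(new_entries)
        for entry in old_rpath.split(":"):
            if entry.startswith("$ORIGIN") and entry not in entries:
                entries.append(entry)
        return ":".join(entries)

    # e.g. rewrite_rpath("$ORIGIN/../lib/amd64/jli:/usr/lib", ["$ORIGIN/../lib"])
    # returns "$ORIGIN/../lib:$ORIGIN/../lib/amd64/jli"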

(From OE-Core rev: 9ebb327ae17d1a765fd1499546ccf9076bb93234)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Khem Raj
16b6bc4288 kernel.bbclass: Don't package kxgettext.o
kxgettext.o is generated when building ppc kernels
so we end up with packaging errors like

> ERROR: QA Issue: Architecture did not match (20 to 62) on
> /work/virtex5-poky-linux/linux-xilinx-2.6.38-r00/packages-split/kernel-dev/usr/src/kernel/scripts/kconfig/kxgettext.o

(From OE-Core rev: 21952b62e3fca6c9fe750db62ca2b0587912be8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
4ef357f033 kernel.bbclass: add non-sanitized kernel provides
If the kernel version string uses characters or symbols that
need to be sanitized for the package name, we can end up with a
mismatch between module requirements and what the kernel
provides.

The kernel version is pulled from utsrelease.h, which contains
the exact string that was passed to the kernel build, not
one that is sanitized; this can result in:

 echo "CONFIG_LOCALVERSION="\"MYVER+snapshot_standard\" >> ${B}/.config

 <build>

 % rpm -qp kernel-module-uvesafb-3.4-r0.qemux86.rpm --requires
update-modules
kernel-3.4.3-MYVER+snapshot_standard
 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-myver+snapshot-standard = 3.4-r0

At rootfs assembly time, we'll have a dependency issue with the kernel
providing the sanitized string and the modules requiring the utsrelease.h
string.

To not break existing use cases, we can add a second provides to the
kernel packaging with the unsanitized version string, allowing the
kernel module packaging to remain unchanged.

   RPROVIDES_kernel-base += "kernel-${KERNEL_VERSION}"

 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-MYVER+snapshot_standard
kernel-3.4.3-myver+snapshot-standard = 3.4-r0

(From OE-Core rev: 0af1d1412add1baf3f6c1a5cfb2e4f92fb6a85dc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Darren Hart
91714ea8ae kernel: Add kernel headers to kernel-dev package
[YOCTO #1614]

Add the kernel headers to the kernel-dev package. This packages what was
already built and kept in sysroots for building modules with bitbake.
Making this available on the target requires removing some additional
host binaries.

Move the location to /usr/src/kernel

Before use on the target, the user will need to:

    # cd /usr/src/kernel
    # make scripts

This renders the kernel-misc recipe empty, so remove it.

As we use /usr/src/kernel in several places (and I missed one in the
previous version), add a KERNEL_SRC_DIR variable and use that throughout
the class to avoid update errors in the future.

Now that we package the kernel headers, drop the
kernel_package_preprocess function which removed them from PKGD.

All *-sdk image recipes include dev-pkgs, so the kernel-dev package will
be installed by default on all such images.

(From OE-Core rev: 0e3e88f9f87d1083ddd7dcaa526b3cd7a1cd53ff)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Tom Zanussi <tom.zanussi@intel.com>
CC: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
fbb459aab9 kernel: save $kerndir/tools and $kerndir/lib from pruning
The kernel source tree in the sysroot has all unnecessary source
code removed. The existing use case is to support module building
out of the sysroot, but as more tools are moved into the kernel
tree itself there are new use cases for the kernel sysroot source.

To avoid putting dependencies on the kernel, and to be able to
individually build and package these tools out of the source tree,
we can save $kerndir/tools and $kerndir/lib from being removed.
This enables tools like perf to be built out of the kernel source
in the sysroot, without significantly increasing the amount of
source in the sysroot.

(From OE-Core rev: 456f97c25488c2f6f6810b1a32781513cc719d8e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Martin Jansa
7b7f3cff44 kernel.bbclass: pass KERNEL_VERSION to depmod calls in postinst
* without this, kernel upgrades where KERNEL_VERSION is changed
  (e.g. 3.4.2 -> 3.4.3) generate .dep files for the running 3.4.2, and after
  reboot the user ends up without any modules loaded; to make it worse, after
  reboot nothing is upgraded to trigger another kernel(-module) postinst to
  generate .dep files for the now-running 3.4.3

(From OE-Core rev: 2a7cdf088e484bb123a72826a02c3169a418ed0a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
a512e68202 man: make man actually work by installing custom man.config
The default man.conf is named incorrectly and doesn't work: it references gtbl, while groff installs tbl and other tools. This man.config is imported from OE classic and runtime tested on Angstrom.

Before:

root@beaglebone:~# man man
sh: /usr/bin/gtbl: No such file or directory
sh: line 0: echo: write error: Broken pipe
gunzip: write: Broken pipe
gunzip: error inflating
sh: line 0: echo: write error: Broken pipe
sh: line 0: echo: write error: Broken pipe

After:

root@beaglebone:~# man man
MAN(1)                        Manual pager utils                        MAN(1)

NAME
       man - an interface to the on-line reference manuals

SYNOPSIS
       man  [-C  file]  [-d]  [-D]  [--warnings[=warnings]]  [-R encoding] [-L
       locale] [-m system[,...]] [-M path] [-S list]  [-e  extension]  [-i|-I]
       [--regex|--wildcard]   [--names-only]  [-a]  [-u]  [--no-subpages]  [-P
       pager] [-r prompt] [-7] [-E encoding] [--no-hyphenation] [--no-justifi-
       cation]  [-p  string]  [-t]  [-T[device]]  [-H[browser]] [-X[dpi]] [-Z]
       [[section] page ...] ...
       man -k [apropos options] regexp ...
       man -K [-w|-W] [-S list] [-i|-I] [--regex] [section] term ...
       man -f [whatis options] page ...
       man -l [-C file] [-d] [-D] [--warnings[=warnings]]  [-R  encoding]  [-L
       locale]  [-P  pager]  [-r  prompt]  [-7] [-E encoding] [-p string] [-t]
       [-T[device]] [-H[browser]] [-X[dpi]] [-Z] file ...
       man -w|-W [-C file] [-d] [-D] page ...
       man -c [-C file] [-d] [-D] page ...
       man [-hV]

Check for config name:

root@beaglebone:~# rm /etc/man.config
root@beaglebone:~# man man
Warning: cannot open configuration file /etc/man.config
No manual entry for man

As a bonus a bunch of references to the buildhost get removed from the config file.

(From OE-Core rev: 13d82ecd6b25ff4c34b3639e10113d7ebb33dc88)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
ef789c1327 man: fix RDEPENDS and reformat recipe
(From OE-Core rev: f9aba0793123dafffc305c028f10e8f595c5a4ee)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Richard Purdie
2011408939 opkg-utils: Update to version with python 2.6 fix
(From OE-Core rev: ba2058aa74eb6cd263bd19a8338eeeced734f55c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
9520635a00 opkg-utils: bump SRCREV
* there are 2 small fixes
  python-2.6 compatibility
  missing C option for opkg-build

(From OE-Core rev: 825a992af39d4eb75f105241e4cd94624b1dea43)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
a259fdb296 opkg-utils: bump SRCREV for Packages cache fix and other fixes
(From OE-Core rev: ce5b46980f35097bd5fcc8195c5d5be1b980c870)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Valentin Popa
55997cbf3d xz: updated to version 5.1.1alpha
The licenses are the same; only some whitespace was added/removed.

(From OE-Core rev: dbfc3d05e49b46ec033623d66e64cf781df4f632)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
988318927a package.bbclass: fix TypeError in runstrip
* some packages have .ko files which are not ELF; without this change
  it fails with a TypeError, with this change only runstrip fails and
  reports where:
  ERROR: runstrip: ''arm-oe-linux-gnueabi-strip'  '/OE/shr-core/tmp-eglibc/work/armv4t-oe-linux-gnueabi/emacs-23.4-r0/package/usr/share/emacs/23.4/etc/tutorials/TUTORIAL.ko'' strip command failed

(From OE-Core rev: 12e40ca7317289fec126d9f30b28a717fe72d274)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Richard Purdie
bf9a5226fc distutils/setuptools: Fix files layout and unbreak builds
The last two distutils changes progressively broke the builds. Firstly they
moved things from the site_packages directory to being higher up the tree,
which introduced package QA warnings as a side effect. Secondly, it interacts
badly with setuptools, which passes in --root=${D} itself.

This patch restores the original directory layout, hence fixing the QA
warnings and also passes extra options to setuptools to deal with the
--root option it passes.

(From OE-Core rev: bed18d5df7915e4127a538be9c7550e185c8c850)

(From OE-Core rev: f60a04ccbf9ed614b5b5b9b02c8a24980bf17d3d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
d72eea5a5d distutils.bbclass: change order of args to install step
This lets the user override the install-lib argument again if it needs
to be something else, otherwise things like python-setuptools
won't be able to modify the install-lib dir

This fixes a new issue exposed by my previous distutils patch
that fixed the default install location of Python modules

(From OE-Core rev: 0fd7b7dd361e59c687366480cd9f89c81c2e5bce)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
a99b03e3ed distutils.bbclass: fix libdir for 64-bit python modules built with distutils
Without this some modules will be installed in /usr/lib/python2.6/
instead of /usr/${libdir}/python2.6

(From OE-Core rev: bc6bd774aa8a3e085e9cabcefb11c3fc537139d5)

(From OE-Core rev: fc513eda1cfccc583f49847c3c04b5c781585e15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Saul Wold
1773d1da62 build-appliance-image: Add vmx* files and build zip file
This commit adds the vmx* files needed to set up a VMware image;
it also packages the vmdk along with the vmx files.

(From OE-Core rev: ed0ffc12ed6f98471dced1ce2020af4e5c99f2c7)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Saul Wold
d8ccc44114 build-appliance-image: Update SRCREV to Denzil 1.2.1
(From OE-Core rev: 242fb49ac416824e53c58a8a0cb9bb9d19a72ec4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Valentin Popa
6f3e6c75de build-appliance-image: rename from self-hosted-image
(-) renamed self-hosted-image to build-appliance-image
(-) replaced build-appliance-image description

[YOCTO #2636]

(From OE-Core rev: 04096f31778886479dac479132bded57e717653e)

(From OE-Core rev: bf133c331029ac588b27173145db5be5f6ee1ef5)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Franklin S Cooper Jr
89fa2c1fc6 psplash: LIC_CHKSUM Tweak
* Change the license checksum to use the lines in psplash.h that contain
  license information instead of doing a checksum on the entire file.

(From OE-Core rev: 2c80eb5b9c103087774f032be01f5cf6309464d7)

Signed-off-by: Franklin S Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Kang Kai
e22b4411ae ltp_20120104: add rdepends
[Yocto #2973]

Add an rdepends on libaio to fix this defect.

(From OE-Core rev: 79d12314729649add741509a46b7770e22dd23ad)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
33921847a2 kernel-yocto: set master branch to a defined SRCREV
To support custom repositories that set a SRCREV and that only have
a single master branch, do_validate_branches needs a special case
for 'master'. We can't delete and recreate the branch, since you
cannot delete the current branch; instead we must reset the branch
to the proper SRCREV.
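
Conceptually, an illustrative sketch (not the exact do_validate_branches code):

    # reset the checked-out master branch to the configured revision
    # instead of deleting and recreating it
    git checkout master
    git reset --hard ${SRCREV}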

(From OE-Core rev: 3a8dc0a01d2756bb8f51afccad772fca1dc48af3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
5427f5d70f linux-yocto: allow do_validate_branches to handle all branches
Branch validation will not restrict a branch that doesn't exist
in the tree at the time of validation (since you can't reset a
SRCREV on a non-existent branch). This restriction can be removed
by looking for all branches that contain the specified SRCREV
and forcing them to that value.

(From OE-Core rev: 790f6441851fd4b2b84129340c438092f058135b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Jeff Polk
4624b5eefb recipes-core/eglibc-2.13: Patch for locale-base-tt-ru packaging
The eglibc-2.13 build can fail because locale-base-tt-ru is in
PACKAGES twice. This is because the SUPPORTED list and the i18n
directories are out of sync with each other; the SUPPORTED list
expects a directory named "tt_RU.UTF8", but the directory is
actually named "tt_RU", and likewise for the @iqtelif variants.

(From OE-Core rev: 280886bb865efde6bda327a1c821220d64c893ba)

Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Matthew McClintock
e7376bb459 eglibc/gcc: add patches to fix eglibc 2.15 build
This drops one patch against eglibc for 2.15 and adds two new ones;
it also adds a gcc patch. We use all of these internally and they
are tested quite well.

(From OE-Core rev: a7014c446b0d2f3b40c4b058c64bb61c8720d799)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Marcin Juszkiewicz
c2bbe5f5d3 libpam: disable NIS to not link with libtirpc when it is available
I was checking ways to make incremental builds faster so I started using
sstate-cache and SSTATE_MIRRORS. But this gave me a nasty bug:

| Collected errors:
|  * satisfy_dependencies_for: Cannot satisfy the following dependencies
for php-cgi:
|  *    libtirpc1 (>= 0.2.2) *
|  * opkg_install_cmd: Cannot install package php-cgi.

I checked details:

In my previous build libtirpc got built before libpam, so libpam found it
and linked against it. As a result packages depend on libtirpc1, but as there
is no such build dependency the sstate handling code did not use the libtirpc copy...

(From OE-Core rev: e629bdcd1bcb51f2d2101fb53daeac0bd29ab637)

(From OE-Core rev: 8ff92269cd63e153892d129e6e2255812a454a99)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
758677e212 glib.inc: disable selinux for native builds
In addition to dbus, we also need to disable selinux for glib,
otherwise we will get the same link error

(Note: Upstream master has disabled selinux AFAICT)

(From OE-Core rev: 318bc896b1bd5399807a417865b8e088d9d9eb15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
b96bd8465a dbus.inc: disable selinux for native builds
(Note: Upstream master has disabled selinux for this AFAICT)

Fixes issues such as:

| /usr/bin/ld: cannot find -lselinux
| collect2: ld returned 1 exit status
| make[3]: *** [libdbus-glib-1.la] Error 1

(From OE-Core rev: 7ee10f25430421dc6e9552ffe15a6a5acbd4cb51)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:02 +01:00
Tom Zanussi
65ffa73950 yocto-bsp: use base branches for qemu 'newbranch' case
The branch updating for the [YOCTO #2587] fix inadvertently changed
some of the qemu branch names; fix them.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
a76fc366ce yocto-bsp: generate default properties even if json specified
Users seem to want to specify incomplete property sets when using json
input.  Allow this by generating default properties before the
user-specified properties are applied; the user will then get the
defaults for any unspecified values, and avoid cryptic backtraces.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
759237f721 yocto-bsp: use emgd 1.10 for i386 template
Make i386 template use emgd 1.10 for denzil, along with associated
changes.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
3e632506c2 yocto-bsp: add i586 option for i386
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
1b9306705c yocto-bsp: add some standard policy
Add some useful default options to the i386 and x86_64 templates.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
65706548f6 yocto-bsp: remove 'branch' statements in .scc if reusing branch
If reusing a branch (need_new_branch == 'n') we don't need to branch
in the .scc, so make it conditional on need_new_branch.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
bc6b04fe22 yocto-bsp: use rstrip() for assignment lines
strip() isn't necessary and causes unintended formatting changes in
the output; rstrip() removes the trailing newlines as intended while
leaving indenting whitespace intact.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
8a51a8afe8 yocto-bsp: use standard branch mapping in bsp templates
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
2517376ee8 yocto-bsp: add standard branch mapping
Add a mechanism to distinguish common-pc variants of standard
branches.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
9078e985ac yocto-bsp: use branches_base
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
b123553c1a yocto-bsp: allow branch display filtering
Add a "branches_base" property that can be used to allow only matching
branches to be returned from all_branches().

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
717704d12c yocto-bsp: update default branch names
Make sure the default branch names match branch names found in the
kernel branch listing.

Fixes [YOCTO #2587].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
e5c5e20a5e yocto-bsp: strip '/base' from kernel branches in templates
For new branches, users can specify /base branches, but we don't want
the '/base' in the resultant branch name, so remove it.

Fixes [YOCTO #2693].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Tom Zanussi
96631f080b yocto-bsp: add new strip_base() function
Add a strip_base() function to remove '/base' from the branch names
presented to the user.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Paul Eggleton
78d15a8015 cooker: fix UnboundLocalError when exception occurs during parsing
Fix a recent regression where we see the following additional error
after an error occurs during parsing:

ERROR: Command execution failed: Traceback (most recent call last):
  File "/home/paul/poky/poky/bitbake/lib/bb/command.py", line 84, in runAsyncCommand
    self.cooker.updateCache()
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1202, in updateCache
    if not self.parser.parse_next():
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1672, in parse_next
    self.virtuals += len(result)
UnboundLocalError: local variable 'result' referenced before assignment

(Bitbake rev: 1ae0181ba49ccfcb2d889de5dd1d8912b9e49157)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:20 +01:00
Matthew McClintock
ffff5fa2d7 bitbake/fetch2: remove references to ChecksumError class
From: Matthew McClintock <msm@freescale.com>

When merging fetch2 improvements from master into denzil, there
were too many dependencies to pull in the entire ChecksumError
class, so this patch removes references to ChecksumError for
compatibility.

Fixes this issue:

NameError: global name 'ChecksumError' is not defined

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2012-08-21 11:35:08 +01:00
Richard Purdie
119b7ff164 bitbake: fetch2: Handle errors occurring when building mirror urls
When we build the mirror urls, it's possible an error will occur. If it
does, it should just mean we don't attempt this mirror url. The current
code actually aborts *all* the mirrors, not just the failed url.

This patch catches and logs the exception allowing things to continue.

(Bitbake rev: c35cbd1a1403865cf4f59ec88e1881669868103c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
1162729c79 bitbake: fetch2: Improve mirror looping to consider more cases
Currently we only consider one pass through the mirror list. This doesn't
catch cases where, for example, you might want to set up a mirror of a mirror
and allow multiple redirections. There is no reason we can't support this
and the patch loops through the list recursively now.

As a safeguard, it will stop if any duplicate urls are found, hence
avoiding circular dependency looping.

(From Poky rev: 0ec0a4412865e54495c07beea1ced8355da58073)

(Bitbake rev: e585730e931e6abdb15ba8a3849c5fd22845b891)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
4591963649 bitbake: fetch2: Explicitly check for mirror tarballs in mirror handling code
With support for things like git:// -> git:// urls, we need to be
more explicit about the mirrortarball check since we need to fall
through to the following code in other cases.

(From Poky rev: 28e858cd6f7509468ef3e527a86820b9e06044db)

(Bitbake rev: a2459f5ca2f517964287f9a7c666a6856434e631)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
3f30bf4eab bitbake: fetch2: Split try_mirrors into two parts
There are no functional changes in this commit

(From Poky rev: d222ebb7c75d74fde4fd04ea6feb27e10a862bae)

(Bitbake rev: db62e109cc36380ff8b8918628c9dea14ac9afbc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	bitbake/lib/bb/fetch2/__init__.py

Signed-off-by: Khem Raj <kraj@juniper.net>
2012-08-21 11:35:07 +01:00
Richard Purdie
ea032bb2c3 bitbake: fetch2: Ensure when downloading we are consistently in the same directory
This assists with build reproducibility. It also avoids errors if cwd
happens not to exist when we call into the fetcher. That situation
would be unusual but I hit it with the unit tests.

(From Poky rev: 86517af9e066c2da1d580fa66b7c7f0340f3403e)

(Bitbake rev: b886c6c15a58643e06ca5ad7a3ff1f7766e4f48c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
4bfb54e0ba bitbake: fetch2: Only cache data if fn is set, it's pointless caching it against a None value
(From Poky rev: c2df30bf6d1f8c263a38c45866936c1bf496ece5)

(Bitbake rev: f4b59cc6e1c3ddc168a1678ce39ff402ea1ff4cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
729b396b21 bitbake: fetch2: Fix error handling in uri_replace()
(From Poky rev: 1bfba28a583cb167f60e05ecdf34d0786dc1eec5)

(Bitbake rev: aa7467a764ddcbc7d65af99e88cf093b6ec6d24e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
bdb918f808 bitbake: fetch2/__init__: Make it clearer when uri_replace doesn't return a match
(From Poky rev: dc9976331c5cbb0983adb54f6deb97b9203bacbc)

(Bitbake rev: eb96609864dec95a516e6e687dd6a2f31d523acf)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Zhenhua Luo
e4f8c7f693 valgrind: fix default.supp missing issue
When running valgrind, the following error appears:
    ==2254== FATAL: can't open suppressions file "/usr/lib/valgrind/default.supp"

(From OE-Core rev: 0b3261d513cdad80174a9b9e804981c50bcb7ca2)

(From OE-Core rev: 95756cfbb7a9348b23cb46a49a5509e57e973faf)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:33:31 +01:00
Paul Eggleton
4539cf1936 classes/license: fix manifest to work with deb
Prepend the license manifest creation call to ROOTFS_POSTPROCESS_COMMAND
instead of appending to ROOTFS_POSTINSTALL_COMMAND. The latter is not
implemented for the deb backend (and probably ought to just be removed
completely), and by using _prepend we can still ensure it occurs before
package info is removed (and before buildhistory in case it is needed
there in future).
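
A minimal sketch of the resulting hook-up, assuming the manifest is written by a license_create_manifest function in license.bbclass (illustrative; the name rests on that assumption):

    ROOTFS_POSTPROCESS_COMMAND_prepend = "license_create_manifest; "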

(From OE-Core rev: 6ffd958ff2f7f1d07ab9da5ca8db1727dd074980)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
6631752c25 scripts/buildhistory-diff: add GitPython version check
Display an error if the user does not have at least version 0.3.1 of
GitPython installed.

(From OE-Core rev: 07b9c3bc67439d47627fe256796465520b533753)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
75b7901f22 buildhistory_analysis: fix error when version specifier missing
Passing None to split_versions() will raise an exception, so check that
the version is specified before passing it in.

(From OE-Core rev: a530aee6d9b2b63ab5fa780b1761eac759e8c833)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
c520132cb8 classes/rootfs_*: fix splitting package dependency strings
If a + character appears in a version specification within the list of
package dependencies, the version will not be removed from the list in
list_package_depends/recommends leading to garbage appearing in the
dependency graphs generated by buildhistory. To avoid any future
problems due to unusual characters appearing in versions, change the
regex to match almost any character.
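
An illustrative sketch of the kind of change, assuming the shell/sed form used by the rootfs_* classes (the exact expressions may differ):

    # strip version specifiers such as "(>= 1.2+gitr0+abc-r1)" from a
    # dependency list, accepting almost any character inside the parentheses
    echo "$depends" | sed -e 's/ ([^)]*)//g'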

Fixes [YOCTO #2451].

(From OE-Core rev: d592c3a26c630d5f3bfba4804a93766447bf72c9)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Saul Wold
a8d4eb449f foomatic: fix perl path for target
This problem appears on F17 when configure finds /bin/perl. Since the beh
script is a target-side script, we need to set PERL in do_configure_prepend
in order for the correct perl to be used

(From OE-Core rev: f189ee78bed0920cfd33689ebb9aad45fded2c4d)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
4253c34000 shadow: use 'users' group by default
The rootfs has a 'users' group with gid 100; without this fix users
would be assigned to a non-existent group, and if a group with gid 1000
were created later it would own all files for the users created.
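
A minimal illustration of the intended default, assuming shadow's standard /etc/default/useradd mechanism (the actual patch may differ):

    # /etc/default/useradd: make new accounts default to the existing
    # 'users' group (gid 100) instead of a freshly numbered group
    GROUP=100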

(From OE-Core rev: c2bd2936907ea8b776d58e8cc58a8359a6e7e9b9)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
1a70ddc4e8 shadow-native: use 'users' group by default
The rootfs has a 'users' group with gid 100; without this fix users
would be assigned to a non-existent group, and if a group with gid 1000
were created later it would own all files for the users created.

(From OE-Core rev: 42e9f988bc691ca763d5eda3537d6281b7902794)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Matthew McClintock
6352d0a9a1 u-boot.inc: update linker arguments to pass --sysroot arg
If we are building from sstate-cache it's possible to be building
from another folder on another machine, therefore the linker requires
that a proper --sysroot is passed to it so it can find things like
libgcc.a and avoid errors such as:

| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o hello_world.o hello_world.c -c
| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o stubs.o stubs.c -c
| arm-poky-linux-gnueabi-ld  -r -o libstubs.o  stubs.o
| arm-poky-linux-gnueabi-ld -g -Ttext 0x80300000 \
| 			-o hello_world -e hello_world hello_world.o libstubs.o \
| 			-L. -lgcc
| arm-poky-linux-gnueabi-ld: cannot find -lgcc
| make[1]: *** [hello_world] Error 1

(From OE-Core rev: ad78441045183277a7e77341f4af6d9d65a4a3c8)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Nitin A Kamble
0a6f9a5f5a tcl: fix target recipe build issue on older distros
The builddir is put in front of LD_LIBRARY_PATH, causing the target
library to be dynamically linked with the native tclsh.

Fix this behavior to cross-build tcl correctly.

This issue got exposed when eglibc-2.15 was configured for the target.

(From OE-Core rev: 8e25fe0ecc3d6fe2d5456b525c5014554bc70cfe)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Richard Purdie
7c5318e1a0 utils.bbclass: add helper function to add all multilib variants of a specific package
This is useful for the scenario where we want to add 'gcc' to
the root file system for all multilib variants

(From OE-Core rev: e82c2f0b91611f3e755985bb8d1608ca5792e825)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Enrico Scholz
88927f4f1f libtool: fixed parallel build related race
While building libtool, the libtool script itself will be regenerated
because OE modifies a dependency[1]. With -jX, this operation (-->
removal, creation of non-x file, 'chmod a+x') can happen at a time when
the script is going to be executed.  This can cause errors like:

| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| ...
| /bin/sh ./config.status libtool
| ...
| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| /bin/sh: ./arm-linux-gnueabi-libtool: Permission denied
| make[2]: *** [libltdl/libltdl_libltdl_la-lt__alloc.lo] Error 126

I am not sure whether the custom do_compile_prepend() is still needed.
For now only the issue above will be fixed by executing ./config.status
yet again.

[1] see 648290d5bf

(From OE-Core rev: 15204a6cbcdbbb84e02da05b1fb15644fe7df332)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Ting Liu
68888987ce image_types.bbclass: redefine EXTRA_IMAGECMD_jffs2 to leverage siteinfo
(From OE-Core rev: 3bff2398cd2d730111faa182d16356e189a36353)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Matthew McClintock
3129f4ff0e task-core-tools-testapps.bb: kexec-tools does not work on e5500-64b parts
This prevents kexec from building for this part since it does not work

(From OE-Core rev: d9bf008b36e8b2211624705d8ee4e90d94463dd5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Matthew McClintock
09ae715bb1 gcc-package-runtime.inc: Fix QA warning
> ERROR: QA Issue: gcc-runtime: Files/directories were installed but not shipped
>   /usr/lib/libgomp.so.1.0.0
>   /usr/lib/libgomp.so.1

(From OE-Core rev: 4ec107f822453bd9468009d7a2124a3d592610b5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Leon Woestenberg
c26cfe7738 kernel.bbclass: Copy bounds.h only if it exists, needed for 2.6.x.
Linux 2.6.x kernels did not (all) have the bounds.h file, so copy it
only if it exists.

(See OE-Core 02ac0d1b65389e1779d5f95047f761d7a82ef7a4)

(From OE-Core rev: 6e9cfa4ba34d8899dfb271818ef30730de8353fa)

Signed-off-by: Leon Woestenberg <leon@sidebranch.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Khem Raj
81aac104fa xserver-xorg: Fix build on powerpc
(From OE-Core rev: 5e141b2a7331f7ee8d9eedf02c4fc2ae5ed8d5ec)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
bc754beb3a curl: Use gnutls for target and openssl for native
Since gnutls is available on the target use it, but we do not build gnutls for
the native side as it adds too many dependencies, so use openssl.

(From OE-Core rev: 0dc6543a2d898d381c287d6b7becfc8fb8f279c0)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
e20af93811 curl: enable ssl support
This patch enables ssl support for curl to allow git to clone from
https / ssl sites. We do not want to enable gnutls for native or
nativesdk, as it adds additional dependencies and increases build time

[YOCTO #2532]

(From OE-Core rev: 9f7e9fb6cd08b3048e97dd1011f0510416beb103)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Martin Donnelly
235667966c augeas: Add libxml2 dependency
This patch fixes the following Augeas configure error.

| checking for LIBXML... no
| configure: error: Package requirements (libxml-2.0) were not met:
|
| No package 'libxml-2.0' found
|
| Consider adjusting the PKG_CONFIG_PATH environment variable if you
| installed software in a non-standard prefix.
|
| Alternatively, you may set the environment variables LIBXML_CFLAGS
| and LIBXML_LIBS to avoid the need to call pkg-config.
| See the pkg-config man page for more details.
| ERROR: oe_runconf failed

(From OE-Core rev: 1d55679821003ac4d652b08f2eebab1636505042)

Signed-off-by: Martin Donnelly <martin.donnelly@ge.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Zhenhua Luo
258bbaa1d2 task-core-sdk.bb: add libgomp and libgomp-dev by RECOMMENDS
(From OE-Core rev: e072dc29c6f4dd3c429e2f0e07da3c29bda36023)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Ting Liu
f44faa8757 lsof: define linux C library type when using eglibc
lsof tries to compile a temporary C source file and execute the binary to
determine the Linux C library type (file Configure, lines 2689-2717).
This is impracticable for cross-compilation and may cause build issues on
some distros since it depends on host settings.

Fix below error when building for 64bit target on 64bit host:
[...]
| dsock.c:481:44: error: 'TCP_LISTEN' undeclared (first use in this function)
| dsock.c:482:45: error: 'TCP_CLOSING' undeclared (first use in this function)
[...]
| make: *** [dsock.o] Error 1

The actual issue exists in do_configure:
[...]
Testing C library type with cc ... done
Cannot determine C library type; assuming it is not glibc.

This is in turn caused by a missing "gnu/stubs-32.h" when compiling
the temp C source file on the host:
[...]
fatal error: gnu/stubs-32.h: No such file or directory compilation terminated.

The file gnu/stubs-32.h is provided by 32-bit glibc.

(From OE-Core rev: 8c38bc022de209187f31952ae02313dd3104f4c6)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
3cf5fc2272 sysvinit-inittab_2.88dsf.bb: Allow multiple serial port consoles to be defined
Set SERIAL_CONSOLES if you want to define multiple serial consoles. If you
need to check for the presence of the serial consoles you can also define
SERIAL_CONSOLES_CHECK to determine whether they are present when you boot. This
will prevent error messages that pop up when a serial port is not present.

SERIAL_CONSOLES = "115200;ttyS0 115200;ttyS1 115200;ttyEHV0"
SERIAL_CONSOLES_CHECK = "${SERIAL_CONSOLES}"

The above lines in machine.conf or elsewhere will have the effect of defining
three serial consoles and removing any that are not present at boot

(From OE-Core rev: 2e7dddfce4a40a56f671116a2001b13c57667c70)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
ffd554d2ff libgomp: add libgomp (openmp) library, and build for powerpc targets by default
(From OE-Core rev: d58668c6770f519199192c7e3817fbc7d6576af3)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
4899d07aa7 gcc: gcc-cross-canadian: use correct location for libraries for powerpc64
This fixes the issue where gcc invokes the linker with an incorrect -L
library location and gives up because it can't find libraries. It was
looking in a /lib folder instead of /lib64

(From OE-Core rev: aa010039a38188f1b1b38a978287d1597138b8b9)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
0b71ac7a99 gcc-configure-common.inc: use --with-long-double-128 on powerpc to comply with ABI
(From OE-Core rev: a2e00d2cae8e4b58fc3b9fc7853da519a615aa31)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
bdebedb6cf openjade-native_1.3.2.bb: fix typo and change the deps exclusion to correct var
(From OE-Core rev: 6ef3b77ba8baddb5748f2ee27d39a5a0d32e3bfb)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
b2b365e8f0 dtc.inc: fix for libdir == /usr/lib64
On 64bit systems dtc will still install libraries in /usr/lib
unless we have this override

(From OE-Core rev: 679e04a33b6e4569e7a95758ccb10d50931f5d67)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
738f163797 packagedata.py: Fix get_subpkgedata_fn for multilib
This happens when trying to add libgcc-dev as a multilib package
(e.g. IMAGE_INSTALL_append = " lib32-libgcc-dev")

| Processing task-core-boot...
| Processing fman-ucode...
| Processing dosfstools...
| Processing lib32-libgcc-dev...
| Unable to find package lib32-libgcc-dev (libgcc-dev)!
NOTE: package fsl-image-full-1.0-r1.1.3.6: task do_rootfs: Failed

RPM (or bitbake?) is looking in tmp/pkgdata, however some of these file
paths are munged for the multilib scenario:

$ find tmp/pkgdata/ | grep libgcc-dev$
tmp/pkgdata/ppce5500-fsl-linux/runtime/lib32-libgcc-dev
tmp/pkgdata/ppc64e5500-fsl-linux/runtime/libgcc-dev

This patch fixes where we look for these files so they can be found and
properly installed for the multilib root file system

(From OE-Core rev: 24e8399aeccf4b0742acd986bb506ff6f388b4a2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
1ddd5378ec qemu-0.15.1: add patch to fix compilation problems on powerpc
ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
ERROR: Logfile of failure stored in: /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447
Log data follows:
| DEBUG: SITE files ['endian-big', 'bit-64', 'powerpc-common', 'common-linux', 'common-glibc', 'powerpc-linux', 'powerpc64-linux', 'common']
| ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
| NOTE: make -j 24
|   LINK  ppc-linux-user/qemu-ppc
| /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/sysroots/x86_64-linux/usr/libexec/ppc64e5500-fsl-linux/gcc/powerpc64-fsl-linux/4.6.4/ld:/opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/qemu-0.15.1/ppc64.ld:84: syntax error
| collect2: ld returned 1 exit status
| make[1]: *** [qemu-ppc] Error 1
| make: *** [subdir-ppc-linux-user] Error 2
| make: *** Waiting for unfinished jobs....
| ERROR: oe_runmake failed

(From OE-Core rev: 2a1f7a8be5170cdb85f9faae81d94ac2ca8b6566)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
ebbd09e21d libxml-parser-perl_2.41.bb: fix MakeMaker issues with using wrong CC/LD/etc
MakeMaker has a bug where it does not propagate CC/LD/etc information
down to the subprojects it generates Makefiles for... this recipe has an
Expat subproject which has issues building if we are using sstate-cache:
it will reference the old sysroots and be unable to build properly.
There is an upstream MakeMaker bug for this issue but we can work around
it by fixing up the Makefiles for now

See:
https://rt.cpan.org/Public/Bug/Display.html?id=28632

(From OE-Core rev: e1609123a6ca6aef18e48afe0ce61325da910fc1)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Zhenhua Luo
09c83947da linux-dtb: add multi-dtb build support
including following enhancement:
    * support multi-dtb build
    * skip dtb build and install when KERNEL_DEVICETREE is empty
    * print a warning message when the specified dts file is not available

(From OE-Core rev: 59b149e466c9fc81ec94a740e805339db97fc3ac)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Saul Wold
8c6455b6db xinetd: Update to 2.3.15
(From OE-Core rev: 48f93e0ade1c534b9af2b84874f9b17e3107c724)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Scott Rifenbark
73cdebf60d documentation/dev-manual/dev-manual-kernel-appendix.xml: Add note about conflict
Added a note to the part of the example where you bitbake the kernel
after turning off CONFIG_SMP.  The warnings you get can cause confusion.
the note explains they are normal.

(From yocto-docs rev: 08ed090f0b8b6970832242a52827ae2957918cf3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Scott Rifenbark
6b06a4fa1b documentation/dev-manual/dev-manual-kernel-appendix.xml: Added branch step
The example did not specify to switch to the "denzil" branch after
establishing the local repo of poky-extras.  The example will not
work without this step.

(From yocto-docs rev: 69b99a77f1f8247c217e77af89ecec3982adc264)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Ross Burton
2dcbb48df9 gconf.bbclass: don't register schemas in the install stage
Previously this was installing schemas in the sysroot, which is wrong for native
packages as nothing should touch the sysroot directly, and even more wrong for
non-native packages as the sysroot is irrelevant.

So, export the environment variable that stops the registration happening at
install time. The postinst script will handle the non-native case, and for the
sysroot I've opened #2648.  This isn't a massive problem as nothing to my
knowledge actually installs schemas to the sysroot.
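
A minimal sketch of the mechanism described, assuming GConf's standard GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL switch (illustrative; the actual class code may differ):

    # in gconf.bbclass (sketch): stop "make install" from registering
    # schemas at install time; the postinst handles the target case
    export GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL = "1"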

[YOCTO #2245]

(From OE-Core rev: 741146fa90f28f7ce8d82ee7f7e254872d519724)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:49:34 +01:00
Elizabeth Flanagan
fb2335fa2b distro.conf: Flipping for pending point release
7.0.1/1.2.1 release. Flipping distro.conf values

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-06-21 14:22:32 -07:00
Richard Purdie
e972d78009 poky-tiny: eliminate mtrace rdepends
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:52 +01:00
Scott Rifenbark
91d6344765 documentation: Updated Manual revision history table
Using July 2012 for the release date.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:39 +01:00
Anders Darander
a707b3269c qt4-embedded: fix QT_ARCH usage in QT_CONFIG_FLAGS
After the change to shell style functions (from python style), the
ability to use oe_filter_out on QT_CONFIG_FLAGS got broken.

This patch solves that by referring to QT_ARCH in a more correct way.

(From OE-Core rev: 8394dda5f12157c88005a788cd35421f498c9b82)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:35 +01:00
Bruce Ashfield
0d3748ca5d linux-yocto/3.0: update to v3.0.32
Updating the 3.0 kernel SRCREVs to integrate the v3.0.32 -stable
release.

(From OE-Core rev: 6d97c94d25713b47417e184308ab43947c7f243d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5f2b526109 linux-yocto/3.2: update to v3.2.18
Updating the 3.2 kernel SRCREVs to pickup the -stable update
to v3.2.18.

(From OE-Core rev: 0308f91b17b052902a01c98afdd5619cd0c617e5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
6bb0fdda40 linux-yocto: policy cleanups
Updating the meta SRCREVs to pickup configuration policy cleanups:

  49f931b meta/fishriver: remove redundant features and options
  51a6d3f meta/emenlow: remove redundant features and options
  101dd7f meta/crownbay: remove redundant features and options
  4110ecd meta/sugarbay: remove redundant features and options
  0f1304a meta/jasperforest: remove redundant features and options
  0a56a3b meta/common-pc-64: factor out SCSI CDROM option
  b71938a meta/common-pc-64: use usb-mass-storage feature
  0724f40 meta: add scsi cdrom feature
  438bca8 meta/common-pc: use usb-mass-storage feature
  c970881 meta: factor out SCSI options from the usb-mass-storage feature
  4c8135e meta: add scsi disk feature
  6872a81 meta: add scsi feature
  e706ec5 meta/sugarbay: factor out policy-related options
  8b7fbc2 meta/jasperforest: factor out policy-related options
  fea1b0e meta/fishriver: factor out policy-related options
  13bf9ab meta/emenlow: factor out policy-related options
  4748d50 meta/crownbay: factor out policy-related options
  44f592f meta/common-pc-64: factor out policy-related options
  5a3f5c7 meta/common-pc: factor out policy-related options
  1f5a10b meta/common-pc-64: use usb features
  4b87723 meta/common-pc: use usb features
  594ba05 meta: add ROOT_HUB_TT config option to the usb/ehci-hcd feature

(From OE-Core rev: db35cd40c7abe13a9701eb74099d69d461cadb0a)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5ad28e97e6 linux-yocto: intel BSP config changes
Updating the meta SRCREV for the following fixes:

   1dfd60f meta/fishriver: move smp options from recipe-space
   012780a meta/emenlow: move smp options from recipe-space
   b59b1a5 meta/crownbay: move smp options from recipe-space
   74dc6ac meta/sugarbay: remove boot-live options
   a4bedcb meta/jasperforest: remove boot-live options
   4ae7b81 meta/sugarbay: use usb features
   30e7e8c meta/jasperforest: use usb features
   22d0c5d meta/fishriver: use usb features
   e262965 meta/emenlow: use usb features

(From OE-Core rev: bde50853658bab563a888b82278a6acfdce6305b)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
495ea21c8b linux-yocto/3.2: configuration and pch merge
Updating the 3.2 SRCREVs to import the following meta/config
changes:

   6b3d4e0 meta: add mei feature
   519abac meta: add usb/uhci-hcd feature
   a67c5a3 meta/crownbay: use usb features
   0855066 meta: add usb/ohci-hcd feature
   15f1a99 meta: add usb/ehci-hcd feature
   8fa6408 meta: add usb/xhci-hcd feature
   c724a55 meta: add usb/base feature
   b55b3a1 sys940x: Cleanup sys940x.scc
   93f2e97 sys940x: Use PHYSICAL_START of 0x200000 to boot
   aaa034b sys940x: Add common standard and preempt-rt features
   e2b1286 sys940x: Add efi-ext to standard and preempt-rt configs
   d188c21 sys940x: Move emgd-1.10 data to the standard scc file
   72d9369 fri2: Cleanup fri2-$KTYPE.scc files re efi-ext.scc
   dbcb120 fri2: Use emgd-1.10 feature and branch

And the following driver fix:

   f39a0a9 pch_gbe: Do not abort probe on bad MAC

(From OE-Core rev: 0609299880ad0aca121e7192d84f85d913c40c62)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
94e3e894d0 eglibc: added ac_cv_path_ to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds
it there on the host machine; this ensures that it is set correctly for the target.

[YOCTO #2363]

(From OE-Core rev: 0d957dd0604230bef1d01ee9992c56d2aca62ec1)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
8389decfe6 quilt: added ac_cv_path_BASH to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds
it there on the host machine; this ensures that it is set correctly for the target.

[YOCTO #2363]

(From OE-Core rev: d54ff1f79f05ba5bd0e1006545e7f1e699998668)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Nitin A Kamble
7412611252 eglibc: package mtrace separately
Add libc-mtrace as a dependency for task-core-tools-debug.

Now eglibc-mtrace gets included in an SDK image and not in a non-SDK image.

This does not affect builds with uclibc.

This fixes bug [YOCTO #2374]

(From OE-Core rev: 6f78625dbab5c81ef20b197aee5206f63611b673)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Gary Thomas
026d502b2a webkit-gtk: Apply work around for all PowerPC targets
The current patch for bug #1570 only applies to qemuppc but should be
applicable for all PowerPC targets.  Also update the patch so that
only one language backend, either ICU or PANGO, is built.

Also remove some old customizations (dependencies on darwin) as these
should now be handled in a layer specific .bbappend file.

(From OE-Core rev: 87eae0851e5334734df40a833596c6cbc6715f7f)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Richard Purdie
e81f7c6152 openjade-native: Ensure we reautoconf the package
Currently, since configure.in is in a subdirectory, we don't reautoconf the
recipe. We really need to do this, to update things like the libtool script used
and fix various issues such as those that could creep in if a reautoconf is
triggered for some reason. Since this source only calls AM_INIT_AUTOMAKE to gain the
PACKAGE and VERSION definitions and that macro now errors if Makefile.am doesn't
exist, we need to add these definitions manually.

These changes avoid failures like:

----
| ...
| DssslApp.cxx:117:36: error: 'PACKAGE' was not declared in this scope
| DssslApp.cxx:118:36: error: 'VERSION' was not declared in this scope
| make[2]: *** [DssslApp.lo] Error 1
----

(From OE-Core rev: 87753615435c8aec7df5964045e24f13877cd7cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Paul Eggleton
119e1b7dc9 poky.conf: use correct version string for Ubuntu 12.04
Since it is an LTS release, the final version string was not
"Ubuntu 12.04" but "Ubuntu 12.04 LTS", so use this when doing the tested
host distribution check.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:19 +01:00
Paul Eggleton
5b3a0eac61 hob: handle sanity check failures as a separate event
In order to show a friendlier error message that does not bury the
actual sanity error in our typical preamble about disabling sanity
checks, use a separate event to indicate that sanity checks failed.

This change is intended to work together with the related change to
sanity.bbclass in OE-Core.

Fixes [YOCTO #2336].

(Bitbake rev: 24b631acdaa143a4de39c6e1328849660c66f219)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:11 +01:00
Kang Kai
4c4924ad1b cooker.py: terminate the Parser processes
[Yocto 2142]

When forcing Hob to exit while it is parsing recipes, bitbake doesn't stop.
It hangs in BitBakeServerConnection::terminate in file
server/process.py:
    else:
        self.procserver.join()
It is waiting for the child processes to quit.

In the parse recipes stage, BBCooker spawns as many Parser processes as there
are CPUs. When quitting, the Parser processes make their internal Queue call
cancel_join_thread() to avoid blocking, but that doesn't work at this point.
So force-terminate the Parser processes.

(Bitbake rev: bebef58b21bdff7a3ee1fa2449b7df19144f26fd)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:05 +01:00
Shane Wang
9e6d1101b4 Hob: Adjust the progress bar and set 100% only when all is done.
After parsing recipes, Hob will populate recipes and packages, which can take
a long time. So this patch adjusts the progress bar and ensures 100% is
set if and only if all populations are done.

The patch also fixes the "weird 18 second delay when parsing recipes" on the
build appliance: Hob is still doing work, but the progress bar shows 100% and waits there.

[Yocto #2341]

(Bitbake rev: 2c4a21dc8a588c8cf05549ddd9734731a46bea10)

Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:55 +01:00
Scott Rifenbark
2bddf70a84 documentation/poky.ent: Updated variables for correct 1.2.1 build.
Key variables are DISTRO at "1.2.1", YOCTO_DOC_VERSION at "current",
and POKYVERSION at "7.0.1".  Note that I have to change "current"
to "1.2.1" before publishing any manuals prior to the official release
of 1.2.1.

(From yocto-docs rev: e62e0baec71c9d39473a9c67caf17f26346539d5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:23 +01:00
Scott Rifenbark
6698060d8e documentation/bsp-guide/bsp.xml: Review comments to recommendations
I added a small review comment to the section based on reviewer
feedback.

(From yocto-docs rev: 206d43c23efa114b57a1e75e469a6f5bdaf94715)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
bd3cd64da3 documentation/bsp-guide/bsp.xml: Updates to requirements section
Implemented review feedback from Dave Stewart and Tom Zanussi.

(From yocto-docs rev: 774e00d34d2abd466a6d64b4b91f60d87203add4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
c88f25ddb4 documentation/bsp-guide/bsp.xml: BSP recommendations section added
Added the "Requirements and Recommendations for Released BSPs"
section.  This section was requested by Dave Stewart based on
community input for direction on how to create a BSP that was
compliant with the Yocto Project.  The input for the section came
from Tom Zanussi.

A spell-check was performed also prior to this commit that addressed
a few spelling issues across the file.

(From yocto-docs rev: 6357eb7a26abb3dca14daf5d9b9a4e245dd0827b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Laurentiu Palcu
ef215694de freetype: upgrade to 2.4.9
(From OE-Core rev: 7c767d3723e0b55d3bcd3864a9cdbce6d11d5b35)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:50 +01:00
Paul Gortmaker
71a6fb605a gitignore: add wildcard to match toplevel patch files
To support the basic workflow of trivial patches:

 git format-patch HEAD~.. ; git send-email --to foo@bar.com 0001-foo.patch

We don't want git status reporting on patches lying in the top
level dir in this case.

Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
(From OE-Core rev: 7e32cbf30352e12c55c3c378631f4e238cf682c5)

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Joe Slater
bfc8589048 shared-mime-info: fix build race condition
The definition of install-data-hook in Makefile.am leads
to multiple, overlapping executions of the install-binPROGRAMS
target.  We modify the definition to avoid that.

(From OE-Core rev: d8a09cb17f2f3b43718ba354da7368a2ed793766)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Song.Li
6514e193ac groff: Fix build on Fedora 17
Distros generally keep perl at /usr/bin/perl, but Fedora 17 also has
/bin/perl. This causes the groff_1.20.1 build to record the perl
interpreter path as /bin/perl, while we set the perl location for the
target as /usr/bin/perl.

This mismatch of perl paths causes rootfs image creation to fail
like this:

| error: Failed dependencies:
|       bin/perl is needed by groff-1.20.1-r1.ppc603e

(From OE-Core rev: 75824ff13f43b330b11cf9a130f061baee785e1a)

Signed-off-by: Song.Li <song.li@windriver.com>

Sync up with the do_install_append_virtclass-native chunk.

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Jesse Zhang
44fb9daa81 beecrypt: disable java
If java is installed on the host, beecrypt will attempt to use it.

(From OE-Core rev: 4d2ff0a69692f54313ffa9dc83d0e4a2ddba47c3)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Scott Garman
309b2c090e runqemu-ifup: enable arp proxying
This allows core-image-sato to access the WAN.

Thanks to Dexuan Cui for proposing this fix.

Fixes [YOCTO #2329]

(From OE-Core rev: 680a94c378f20c00e8bee0575b8922bccc008fec)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
67c7bc5e6c gnupg: disable CCID driver
The CCID driver is apparently unnecessary, so disable it.

Also remove the associated libusb dependency, since that won't be
needed either.

According to Scott Garman <scott.a.garman@intel.com>:

I'd just note that the CCID smartcard reader is a specific piece of
hardware that is unlikely to be used in a majority of our use cases.

(From OE-Core rev: 2fcd564b5395950f480a288d434c64c8fee65ece)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
e06e502bbb gnupg: add libusb to DEPENDS
gnupg apparently depends on libusb:

| error: Failed dependencies:
| 	 libusb-0.1-4 >= 0.1.3 is needed by gnupg-2.0.18-r1.core2

So add libusb to gnupg DEPENDS.

(From OE-Core rev: 1a76f50c1f159477a86dc7a6cb95873cee05d9e6)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Zhai Edwin
89c0e81273 webkit-gtk: Use glib as unicode backend to avoid browser crash
webkit-gtk depends on ICU for unicode support, but ICU is not safe when the
build and target systems have different endianness. ICU's community has not
been responsive about a patch for this, so glib is used as a workaround here.

This fixes [YOCTO #1570].

(From OE-Core rev: df83a9480ba7b2fd2bcc0a92932d51434d7795a0)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Paul Eggleton
78a5471a29 classes/sanity: send sanity check failure as a separate event for Hob
In order to show a friendlier error message within Hob that does not
bury the actual sanity error in our typical preamble about disabling
sanity checks, use a separate event to indicate that sanity checks
failed.

This change is intended to work together with the related change to
BitBake; however, it includes a check to ensure that it does not fail with
older versions that do not include that change.
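
A minimal sketch of the backward-compatible guard (illustrative only; the
SanityCheckFailed event name and helper are assumptions, not the actual
patch):

    import bb
    import bb.event

    def report_sanity_failure(msg, d):
        # Fire the dedicated event only if the running BitBake provides it;
        # otherwise fall back to a plain fatal error.
        if hasattr(bb.event, 'SanityCheckFailed'):
            bb.event.fire(bb.event.SanityCheckFailed(msg), d)
        else:
            bb.fatal(msg)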

Fixes [YOCTO #2336].

(From OE-Core rev: 3788f9bcb36cca90ca8cf650c9d33f5485e3087b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Joshua Lock
b30a243f3f sanity.bbclass: copy the data store and finalise before running checks
At the ConfigParsed event the datastore has yet to be finalised, and thus
appends and overrides have not been set.
To ensure the sanity checks are run against the configuration values the
user has set, call finalize() on a copy of the datastore and pass that copy
to all sanity checks.
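
A minimal sketch of the idea (illustrative only; check_sanity stands in for
the class's existing check entry point):

    import bb.data

    def config_parsed_handler(e):
        # Work on a copy so the live datastore is untouched, then finalize
        # it so appends and overrides are applied before checking.
        sanity_data = bb.data.createCopy(e.data)
        sanity_data.finalize()
        check_sanity(sanity_data)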

(From OE-Core rev: 527e26ea1e44f114fc9fcec1bc7d83156dba1a70)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Richard Purdie
20657c1fa0 image.bbclass: Ensure ${S} is cleaned at the start of rootfs generation
Some image classes such as bootimg save files into ${S} as part of rootfs
generation. For correctness we should therefore clean this at the start of
image generation to ensure reproducibility.

I found this issue when some files I thought should disappear from my rootfs
would not disappear.

(From OE-Core rev: 23b7d7dab475caca4558e3b20db534122bee1525)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Mihai Lindner
3029a08744 sudo: fixed wrong chmod path
Placed $D between braces, ${D}, so that it is correctly expanded to the
workdir path instead of a path relative to the host rootfs.
Currently, "bitbake sudo" fails on host systems where sudo is not
installed.

(From OE-Core rev: 83c5acfe4731990c296be1bf67059452a72f9584)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
d376a4e8f1 bitbake.conf: Improve wget timeouts
The wget default is a 900 second timeout and 20 retries. This is way too long
for most of our use cases, so this patch changes it to a 30 second timeout and
reduces the retries from 5 to 2. We have good mirror infrastructure, so this
will let us fall back to it more easily.

(From OE-Core rev: dbb88617576ea9bbeec08f5e5e15c26c4c18347f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Xiaofeng Yan
0d0846e06f ncurses: Avoid occasional build failure with parallel tasks
ncurses fails in the non-gplv3 build (a race issue) with error output like
the following:

| tic: error while loading shared libraries: /srv/home/pokybuild \
/yocto-autobuilder/yocto-slave/nightly-non-gpl3/build/build/tmp/\
work/x86_64-linux/ncurses-native-5.9-r8.1/ncurses-5.9/narrowc/lib\
/libtinfo.so.5: file too short
| ? tic could not build /srv/home/pokybuild/yocto-autobuilder/\
yocto-slave/nightly-non-gpl3/build/build/tmp/work/x86_64-linux/\
ncurses-native-5.9-r8.1/image/srv/home/pokybuild/yocto-autobuilder\
/yocto-slave/nightly-non-gpl3/build/build/tmp/sysroots/x86_64-linux\
/usr/share/terminfo
| make[1]: *** [install.data] Error 1

This is a race between install.libs and install.data:

1) install.data needs to run tic
2) tic needs libtinfo.so
3) install.libs regenerates libtinfo.so
4) but install.data doesn't depend on install.libs, so they can run
   in parallel

So an error occurs in a very narrow window: tic starts running at the same
time that install.libs is generating libtinfo.so, the libtinfo.so is not yet
complete, and the above error results.

Make the install.libs task run before install.data to fix this bug.

[YOCTO #2298]

(From OE-Core rev: 6993570787a97fbca5ea81513b0120c6d7563484)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Robert Yang
59ac33c77f rpm 5.4.0: respect the arch when choosing the alternatives
There is a bug if we:
1) bitbake diffutils with MACHINE=crownbay
2) bitbake diffutils with MACHINE=qemux86
3) bitbake core-image-sato with MACHINE=crownbay

Then diffutils.i586 would be installed into the crownbay image. This is
because diffutils.i586 is newer than diffutils.core2, and rpm doesn't
respect the arch priorities.

We have put the archs in order in _solve_dbpath:

crownbay/solvedb:core2/solvedb:i586/solvedb:all/solvedb

Fix rpm to respect that order: for example, if it finds a pkg in both
core2/ and i586/, and core2/ comes first, it should not use the one
in i586/ even if its build time is newer.

Note: Don't worry about the _free(*ptr); it checks whether ptr is
NULL or not.

This is for the denzil branch, and the master branch also needs it.

[YOCTO #2360]

(From OE-Core rev: 2199e6b9c82bb2b6738e87903f30329586db20e2)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
3cb36a5ed9 Update version to 1.15.2 (corresponding to the Yocto 1.2 release)
(Bitbake rev: 270a05b0b4ba0959fe0624d2a4885d7b70426da5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:10:07 +01:00
Liming An
75e32007ef Hob: show the original url in the hyperlink tooltip for the user
When no browser is available, such as when running in the 'Build Appliance',
the user can't open the hyperlink, so add this workaround. (Checking whether
a browser is available is hard across different systems and browser types.)

[YOCTO #2340]

(Bitbake rev: 02cc701869bceb2d0e11fe3cf51fb0582cda01b0)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:59 +01:00
Liming An
d52e74cee9 Hob: change the refresh icon speed to make it easier to view
The arrow icon refreshes so fast that it creates the illusion of spinning
backwards, so slow it down.

[YOCTO #2335]

(Bitbake rev: ac4a8885fafdc0d1e79831334ead9a8ddb6e2472)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:52 +01:00
Dongxiao Xu
f1630d3cd4 Hob: Clear the building status if command failed
We may hit a command failure during build time, for example due to running
out of memory. In this case, we need to clear the "building" status.

This fixes [YOCTO #2371]

(Bitbake rev: 283dbbbf5d34adb4c9e3aa87e3925fdebe21ff42)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:42 +01:00
Tom Zanussi
b7f1a8f870 yocto-bsp: clarify help with reference to meta-intel
The current yocto-bsp help assumes knowledge that the meta-intel layer
needs to be cloned before it's put into the BBLAYERS.  Avoid the
guesswork and state the details explicitly in the help.

Also, the shorter 'usage' string doesn't mention it at all; it would
help to at minimum mention it and refer the user to the detailed help.

Fixes [YOCTO #2330].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:31 +01:00
Tom Zanussi
015f117d85 yocto-kernel: use BUILDDIR to find bblayers.conf
The current code assumes that builddir == srcdir/build, which is not
always the case.  Use BUILDDIR to get the actual builddir being used.
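
A minimal sketch of the lookup (illustrative only; the helper name is
assumed, not taken from the actual yocto-kernel code):

    import os

    def find_bblayers_conf():
        # BUILDDIR is exported by the build environment setup; fall back to
        # the old srcdir/build assumption only if it is unset.
        builddir = os.environ.get('BUILDDIR',
                                  os.path.join(os.getcwd(), 'build'))
        return os.path.join(builddir, 'conf', 'bblayers.conf')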

Fixes [YOCTO #2219].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:24 +01:00
Richard Purdie
b8338046ba netbase: Correctly set FILESEXTRAPATHS to include the version
[YOCTO #2366]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:10 +01:00
Scott Rifenbark
7552ccd06c documentation/yocto-project-qs/yocto-project-qs.xml: pre-built example fix
The example showing how to use pre-built images, the toolchain, and
filesystem was off a bit.  I changed some wording to indicate using the
.ext3 filetype of the filesystem.  Previously it talked about expanding
the tarball version but the example has been changed to use .ext3.
Also, the environment setup file has been mis-named forever.  It should
have i586 in it and not i686.  And, finally, the image name does not
have a release number as part of the name.

(From yocto-docs rev: 97ed79993dd3e2eede4807482e15633b66b99f49)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:29 +01:00
Scott Rifenbark
e24d5cc2cd documentation/dev-manual/dev-manual-bsp-appendix.xml: .bbappend example
The linux-yocto_3.2.bbappend example was out of date.  There is no
longer a kernel features statement in the last part of the section.
Only COMPATIBLE_MACHINE, KMACHINE, and KBRANCH remain.  I removed
the fourth one from the text description and the example code.

(From yocto-docs rev: 89a11ce3c2a43e2d7c26599976d906011130131f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
1f2fc974df documentation/dev-manual/dev-manual-bsp-appendix.xml: Added note
Added a note telling the user that the commit ID strings in the
example might not match the actual commit ID strings found in
the .bbappend file.

(From yocto-docs rev: 0477122c42eaf6d5e18e28a2356fe58c1070c608)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a473ba170d documentation/yocto-project-qs/yocto-project-qs.xml: Minor wording changes
(From yocto-docs rev: 528e34b1694739396295b769cc6f83d58dd3bf59)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a5fe09c6aa documentation/dev-manual/dev-manual-kernel-appendix.xml: Kernel example fixed
Due to a bug (2256) the example that changes the kernel configuration
through menuconfig did not work.  I have re-written the section
to now start with the default behavior of CONFIG_SMP=y and then
have the user change the configuration to where it is not set.

The changes include the reversing of the flow and the work-around
needed due to bug 2256.

(From yocto-docs rev: 2eaaafab0390d1108b212b9cfb7ca8365e0f39a9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
2072256b05 documentation: Added 1.2.1 manual entry information.
Added 1.2.1 manual history entry to five manuals.  The date
is to be determined.

(From yocto-docs rev: bb920814d5adaa24d37fbcefd85de2ba93ddf604)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Paul Eggleton
b623203ac9 documentation/yocto-project-qs/yocto-project-qs.xml: Setup changes
Remove mercurial as this is no longer needed.
linuxdoc-tools was mentioned twice in the CentOS list.
We no longer support Fedora versions older than 15 so remove this note.

This commit applies to 1.2, 1.2.1, and 1.3.

(From yocto-docs rev: 1347f92c49e61a42aa51e5c1ffccde88a449a4fb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Scott Rifenbark
c7e4a6ae2c documentation/Makefile: Fixed figures publishing bug
I discovered a bug when publishing documents.  There are two scp
commands that copy a document's files and figures to the appropriate
directory in the srifenbark@yocto-www:~/www.yoctoproject.or-docs
server where the manuals are published.  The second scp command
had a "/figures" at the end.  This was causing a new "figures"
directory to be created within the "figures" directory.  This
redundancy shows up as missing figures in the manuals if a new figure
or changed figure is ever added to the book after initial
publishing.  I removed the extra "/figures" at the end of the scp
command.

(From yocto-docs rev: 5ab530f998427405a0486b94ca76cff58a4cf463)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Andrei Gherzan
1628159028 fotowall: Add #include ui_wizard.h to ExportWizard.cpp
App/ExportWizard.cpp depends on wizard.h, which depends on ui_wizard. The
latter must already be generated before ExportWizard.cpp is compiled.

[YOCTO #2297]

(From OE-Core rev: d7bf94647f17c0382caad8af0bdda837b14b22dc)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
b477f676e3 classes/mirrors.bbclass: Point snapshot.debian.org mirror to working location
If you point to snapshot.debian.net/archive/pool then it will fetch
an html page, which will end up as a corrupt download. The locations
of the archives have changed, so here we point the mirror to the right
location.

(From OE-Core rev: d5574749b2272357f6bdad04c37ec0657b391cca)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
9d2534ab24 uclibc.inc: uclibc rtld does support GNU_HASH
(From OE-Core rev: 090d8a687517c2d4deb33295a3cceb5175aa28f3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Khem Raj
df815f20c8 openssl: Fix build for mips64(el)
(From OE-Core rev: 8c74ddf5fd5502fd759f310096e9013fad0ca4db)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Zhai Edwin
b64eefe2bb pango: Fix modules load failure in multilib environment
The multilib variants of Pango need different modules, and thus different
config files and utils. This patch separates the config file and utils with
different MLPREFIX values to avoid conflicts.

This fixes [YOCTO #2356].

(From OE-Core rev: 535e298b98182d95c3280d2d46aa6388e27aac40)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Richard Purdie
4abd299bf0 sed: Explicitly disable acl for deterministic builds
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Dongxiao Xu
30c3c8420e qt4: move functions from python to shell style
qt4's do_configure refers to some variables that are derived from 'd';
however, these variable values may not be correct in the multilib case,
since the extraction of these variables happens before the multilib
handler runs.

The fix is to move these python-style functions back to shell style.

This fixes [YOCTO #2355]

[RP: Fix whitepace]
(From OE-Core rev: 98cb2efe4e9f3092d531c9fc809406c3ef559725)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
[SG: Resolve merge conflicts for 1.2.1]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Darren Hart
a74fb01b6b initrdscripts: Update install.sh to work with mmc devices
Fixes [YOCTO #2385]

The installer only searches for hd[ab] sd[ab]. Some newer BSPs have mmcblk
devices that should be used as the install target. These devices also have a
partition prefix (mmcblk0p1 instead of mmcblk01). As they are detected
asynchronously, it is necessary to add the rootwait kernel parameter to avoid
a race condition trying to mount the root device.

As BSPs like the FRI2 and the sys940x have mmc devices and will have a 1.2
release, we should push this to 1.2.1. The changes are perfectly contained and
easily verified.

Test for an mmcblk device and add the p partition prefix if necessary. Add the
rootwait kernel parameter when an mmcblk device is detected.  Replace the series
of explicit umount commands with a single umount using a wildcard. This will
find all the partitions and will not try to unmount non-existent devices. Avoid
copy and paste errors by replacing /dev/${device}${pX} references with the
previously assigned rootfs, bootfs, and swap variables.

These changes have been tested on the FRI2 Sato image which installed to
/dev/mmcblk0 as well as the N450 Sato image which installed to /dev/sda. Both
were successful.

(From OE-Core rev: 36634e16c0a0c80674bacf20f9841e3b042bd5fd)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Paul Eggleton
c003c04590 buildhistory: fix multiple commit of images and packages at the same time
The echo line here was merging multiple lines into one, and the result
was that if both image and package changes had to be comitted then only
the image changes were being committed and the package changes could
potentially be merged into the next package change. Quoting the variable
reference fixes this.

Fixes [YOCTO #2411]

(From OE-Core rev: 540cd9d42a4db562e5eca431cec89ac5a6a05cab)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Saul Wold
35cc0b023f builder: Add Please Wait Dialog Box
Add a dialog box while bitbake starts Hob to inform the user to please wait
for the Hob screen to become visible.

(From OE-Core rev: e9239e4250ef140920847f78625cc7206763c32c)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Nitin A Kamble
c2826b50ce quilt: fix perl path in target perl scripts
While building on distros like Fedora 17, which have /bin/perl,
the target perl scripts also get the perl path as /bin/perl.
That is not the correct path of perl on the target.

This commit avoids this error:

| error: Failed dependencies:
|       /bin/perl is needed by quilt-0.51-r2.i586
NOTE: package core-image-sato-sdk-1.0-r0: task do_rootfs: Failed
ERROR: Task 8
(/home/nitin/prj/poky.git/meta/recipes-sato/images/core-image-sato-sdk.bb,
do_rootfs) failed with exit code '1'

(From OE-Core rev: c8c394bd806978c867f2fe82e4bde65c98764880)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Saul Wold
75225bcc84 boost: Ensure we use our user-config.jam file
This change ensures we use the user-config.jam configuration
that we created and will not use anything from the user's home
directory.

[YOCTO #2302]

(From OE-Core rev: f246e467b8513f1c1c33b5e7462ae6478754d531)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Mark Norman
d3204ddc12 uclibc SDK not including libpthread_nonshared.a
Modified the uclibc PACKAGES list order to ensure the uclibc-dev package is
processed before uclibc-staticdev to allow *_nonshared.a libraries to be
packaged in the uclibc-dev package.  The *_nonshared.a libraries are required
by the SDK.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
7c7ac8548d runqemu-ifup: enable ip masquerading for QEMU NAT addresses
Fix the IP masquerading settings so that networked QEMU sessions can
reach external networks.

This is a partial fix for [YOCTO #2329].

(From OE-Core rev: 78c7a82a2e3214eaec3c559269e3cc6c219759c0)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
bbf95cae4c openssl: upgrade to 1.0.0i
Addresses CVE-2012-2110

Fixes bug [YOCTO #2368]

(From OE-Core rev: 51a122a5593c62d7ffd07f860e54a2fb0327959c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Scott Garman
3df821277d libpng: upgrade to 1.2.49
License hasn't changed, just updated the md5 checksums due to trivial
date changes within the text (and the position of the license text
within png.h).

Addresses CVE-2011-3045

Fixes [YOCTO #2352]

(From OE-Core rev: 6e2235a4d769b16ebf68d6bbed56d8bcc0e0c83f)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
4f4685469a python: Add patch to search for db.h in inc_dirs and remove warning
python should search for db.h in inc_dirs and not in a hardcoded path.
If db.h is found but HASHVERSION is not 2, we avoid a warning by not
adding this module to the missing variable.

[YOCTO #1937]

(From OE-Core rev: 8eb3e52d39147f8cb98ec95857be17db0444098e)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
d1bc1191d6 python: Add patch for 64bit platform
This patch was added for 64-bit host machines. During compilation, python
checks whether the platform is 64-bit using sys.maxint, which is the host's
value. The patch fixes this so that python checks whether the TARGET machine
is 64-bit rather than the HOST machine. This way the "dl" and "imageop"
modules are built when the host machine is 64-bit but the target machine is
32-bit.
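
A minimal sketch of the idea (illustrative only; TARGET_WORDSIZE is an
assumed stand-in for however the build exports the target word size, not
part of the actual patch):

    import os
    import sys

    def target_is_64bit():
        # Prefer the target's word size over the build host's sys.maxint
        # (Python 2), which only describes the host.
        wordsize = os.environ.get('TARGET_WORDSIZE')
        if wordsize is not None:
            return wordsize == '64'
        return sys.maxint > 2 ** 32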

[YOCTO #1937]

(From OE-Core rev: 22ae3959f40845ebcc00413ccf733539472a1a81)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andreas Oberritter
5c507a2fd7 {kernel, module}.bbclass: don't run depmod for module packages during do_rootfs
* depmod already gets executed by pkg_postinst_kernel-image.

* If you build a module using module.bbclass, pkg_postinst returns 1 in
  do_rootfs, causing pkg_postinst to run again on first boot. To improve
  this situation, I copied pkg_postinst from kernel.bbclass to module.bbclass.
  This was rejected by Koen, because he doesn't like the code from
  kernel.bbclass, which uses ${STAGING_DIR_KERNEL}. Richard then suggested
  that calling depmod during do_rootfs wasn't necessary at all, because
  it already gets done by kernel-image.

(From OE-Core rev: 663b4be025283a30adb823760ce9d9a056106bcf)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Richard Purdie
bf4740cf66 utils.bbclass: Testing via env in create_wrapper is a nice idea but breaks things
For example, pseudo-native wants to set LD_LIBRARY_PATH, but setting this
into the environment here causes the existing pseudo (running during do_install)
to poke into paths in /opt and this breaks builds.

The simplest fix is simply not to do this. The comments are tweaked to match
the code.

(From OE-Core rev: 915769c405e24751eae613e9ef55f05490a726de)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Nitin A Kamble
24ffb5c0b1 libproxy: fix compilation with gcc 4.7
(From OE-Core rev: 6689c52eb13430593d6afe48dba3973467cd2404)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Paul Eggleton
a92fed4fe5 dpkg-native: fix deb-based rootfs construction failure on Fedora 16
Backport a fix from 1.16.x upstream to use fd instead of stream-based
I/O in dpkg-deb, which avoids the use of fflush() on an input stream
(the behaviour of which is undefined by POSIX, and appears to have
changed in the version of glibc introduced in Fedora 16 and presumably
other systems).

Fixes [YOCTO #1858].

(From OE-Core rev: b1c28667592e736115ab5e603a12c2723b939cf2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Saul Wold
a518e1e3b1 quilt: move empty quiltrc to native sysconfdir
patch.bbclass originally pointed at /usr/bin/quiltrc for an empty
version to ensure that no user settings were picked up; change this
to /etc/quiltrc in the Native sysroot since we now have a native
sysconfdir.

Make sure that the quiltrc is actually installed in the Native
sysconfdir, not the target one, fixing this after the recipe split.

(From OE-Core rev: aec4cdc6efda430a0965d6b3b4f84c7943390273)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
fc9716930a gcc: Add plugins package for ARM, fix /usr/include packaging
WARNING: For recipe gcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/include
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm/bpabi-lib.h
(From OE-Core rev: cf49cf3958b24fdb89d57abbf1f1b30c07a06030)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
f99ced96cf xserver-kdrive: Add xkb to existing docs list
WARNING: For recipe xserver-kdrive, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/share/man
WARNING:   /usr/share/man/man5
WARNING:   /usr/share/man/man1
WARNING:   /usr/share/man/man1/Xephyr.1
WARNING:   /usr/share/man/man1/Xserver.1
(From OE-Core rev: 515cbe565b684359ac9d8bd0fb523aa3d2f810e2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
d0f0d1b41d libgcc: Package additional *crt*.o files for PPC
WARNING: For recipe libgcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrtn.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrtn.o
(From OE-Core rev: 580d734ddc928aaaac9acaa248427b01731074f2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
77203b75f5 binutils: add embedspu for ppc builds
WARNING: For recipe binutils, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/bin/embedspu
(From OE-Core rev: 15c8ea4d35edbcaf03c94aba06ded85851679157)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Ken Werner
a0f1aca7a0 bdwgc: Set ARM_INSTRUCTION_SET to "arm"
The bdwgc recipe uses a version of libatomic that fails when building in Thumb
mode. This has been fixed upstream already. The
pulseaudio/libatomics-ops_1.2.bb has the same issue and sets the
ARM_INSTRUCTION_SET to "arm" (probably until a new version gets pulled in).
This patch applies the same workaround to the bdwgc/bdwgc_20110107.bb recipe.

(From OE-Core rev: 544fe63b6a861129ea15f4cd37952e513ab0013e)

Signed-off-by: Ken Werner <ken.werner@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Darren Hart
b3de1f1140 gthumb: Disable parallel make for gthumb install
With PARALLEL_MAKE set to 14, I frequently see the gthumb do_install
task hang. Make is spinning at 100% CPU and the build makes no
more progress.

The following work-around proposed by Richard Purdie allows progress
to be made.

(From OE-Core rev: e933129ddb8ae55d618b5875fca26bc46fcb100b)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Joshua Lock <josh@linux.intel.com>
CC: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Andreas Oberritter
6e93ac2581 python: use PKGSUFFIX for libpython2
* python-nativesdk shouldn't provide libpython2, but
  libpython2-nativesdk.

(From OE-Core rev: 260dfd9ccbf7d1e0ed60256aaf80fed5bf0c24e2)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>

[PR Bump - sgw]

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Otavio Salvador
c1bfbf7168 connman: backport test script fixes
Those fixes are required to get the test scripts to work with current
0.79 DBus API.

(From OE-Core rev: aadeb3199d1b34369b63810696b9d61a86afb31d)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Khem Raj
5a7d852a94 connman: Fix linking with gold linker
Fixes errors like below

/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__start___debug' is not defined locally
/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__stop___debug' is not defined locally
collect2: ld returned 1 exit status
make[1]: *** [plugins/loopback.la] Error 1

(From OE-Core rev: 3e6e97b40f8cb9568993c5cc65d73189ec6b7b8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Scott Rifenbark
3bf8069100 documentation/yocto-project-qs/yocto-project-qs.xml: added quotes
Need quotes around the INHERIT statement.

Reported-by: Frans Meulenbroeks <fransmeulenbroeks@gmail.com>
(From yocto-docs rev: b040ab0cf8e776c04fc787ba09722327b60913f2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
cbd192a6c5 documentation/dev-manual/dev-manual-kernel-appendix.xml: grammar fix
(From yocto-docs rev: 8f62155b56f82c705f05585d2ab68d4a4af5a501)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
6d22ae627b documentation/dev-manual/dev-manual-kernel-appendix.xml: Removed KMACHINE
The example that compiles the altered code does not run when the
KMACHINE statement is in the linux-yocto_3.2.bbappend file.  I have
commented it out of the book.

(From yocto-docs rev: 112a10a32ba3d7b24f22e25e39202b717571cbf0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
49a58c65b6 documentation/dev-manual/dev-manual-kernel-appendix.xml: updated KSRC
The KSRC example needs "_3_2" at the end of the variable.

(From yocto-docs rev: 99bf77dd648b28c2d425d23215383b7c733b054d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
06cde35657 documentation/dev-manual/dev-manual-kernel-appendix.xml: added quotes
Turns out the KSRC_linux_yocto ?= /home/scottrif/linux-yocto-3.2.git
statement in the linux-yocto_3.2.bbappend file in poky-extras needs
quote characters around the pathname.  I updated the example
statement.

(From yocto-docs rev: bcdb8d230f20bf69567380d562c991ff6eeb41cd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
196a62b50c documentation/dev-manual/dev-manual-kernel-appendix.xml: kernel machine update
Found two instances of "yocto/standard/common-pc/base".  This should
now be "standard/default/common-pc/base".

(From yocto-docs rev: d710bc868409ad21bdf9e63c042ec40b0d305ad0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
8ddaa3ede8 documentation/dev-manual/dev-manual-kernel-appendix.xml: 3.0 to 3.2
The kernel used for the example is now linux-yocto_3.2.  I changed
all occurrences of yocto_3.0 over to yocto_3.2.

(From yocto-docs rev: c3585a0fec0381a88071004660ab96016f9674e2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
52ccf5a9eb documentation/dev-manual/dev-manual-kernel-appendix.xml: formatting fix
(From yocto-docs rev: 1d1a5059163749b5adecf9432ffc5e2f2207acf4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
b2c9b25f97 documentation/dev-manual/dev-manual-kernel-appendix.xml: altered example
The example code with the printk statements needed to be altered,
and the wording supporting the example was modified to be more
generic.

(From yocto-docs rev: 4d03fe2e08dbdcab438aae551e9696e11a3e4477)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
5a1fb95a8d documentation/dev-manual/dev-manual-kernel-appendix.xml: updated cpuinit
Looks like calibrate_delay(void) changed in the example.  Updated
to the most recent code.

(From yocto-docs rev: 402af7d379b0df5e97b1863aa627aad98ceb5e6f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
22b9983cc7 documentation/dev-manual/dev-manual-kernel-appendix.xml: output updated
Updated the example that sets up the bare clone.  The shell output changed
because the upstream repo changed to
"origin/standard/default/common-pc/base".

(From yocto-docs rev: 72077ca9e7db747cbccc4d9d8deabfa424c6147c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
7f5e6a1959 documentation/Makefile: Added denzil specific .PNG file logic
A new figure was needed in the Kernel appendix.  I added that
figure to the block of code that creates the tarball for the
denzil branch.

(From yocto-docs rev: cb3997cad15f7bf6f812f939a3fe4dcac955376d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
01025ad2c4 documentation/dev-manual: New figure just for denzil
New image needed for Denzil.  I created a new file named
"kernel-example-repos-denzil.png" and copied it to the
Figures folder.  I also deleted the
"kernel-example-repos.png" image.

(From yocto-docs rev: 150b4c01cce283ae8de29f51a2e4e7dcb60281ca)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
71580376c9 documentation/dev-manual/dev-manual-bsp-appendix.xml: updated hddimg example
In the "Building and Booting the Image" section there is an example
.hddimg file.  I updated the file to be the actual file used during
the BSP example build.

(From yocto-docs rev: ce759fb3d4e5e22f0928cdd03c17c0b5d9f4167b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
0774a11505 documentation/dev-manual/dev-manual-bsp-appendix.xml: BBLAYER update
For 1.2 you evidently need to add $HOME/poky/meta-intel to your
bblayers.conf file.

(From yocto-docs rev: 05bb85dd133d8da0697cd4414b05dde2a636b737)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
4d2d5abd8b documentation/dev-manual/dev-manual-bsp-appendix.xml: wording updated
Wording for the linux-yocto_3.2.bbappend file updated to support the
addition of the KBRANCH variable.

(From yocto-docs rev: 6bd32650f1004055ac67157f96ab62abf5883047)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
884034b256 documentation/dev-manual/dev-manual-bsp-appendix.xml: mymachine.conf
Edited it to now include the PREFERRED_VERSION variable.

(From yocto-docs rev: 6ddd56cbcec752e27b2bbf0fc687af79b2249377)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
ee98021efe documentation/dev-manual/dev-manual-bsp-appendix.xml: recipes-kernel update
The section on changing recipes-kernel was way out of date.
I updated all relevant changes.

(From yocto-docs rev: b9f954983447e45766a0bf785285c0591fe9d340)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
f52747d7a2 documentation/dev-manual/dev-manual-bsp-appendix.xml: 3.2 for 3.0
The kernel used is now 3.2 and not 3.0.

(From yocto-docs rev: 8ee757e0d4f97f7652de2c9ee1556c142920596a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
1bf998fe41 documentation/dev-manual/dev-manual-bsp-appendix.xml: added layerdepends
The layer.conf file now uses a LAYERDEPENDS variable.  I added that
to the example.

(From yocto-docs rev: 09f4d9e74ceccb3053a36d2a3deed5cc3d3be157)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
1b3c00a34f documentation/dev-manual/dev-manual-bsp-appendix.xml: changed kernel
The kernel in mymachine.conf had to be changed from 3.0 to 3.2.

(From yocto-docs rev: 8a385bfa11298251fd80445d6fd2da6034d6b9dc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
b3f870297e documentation/dev-manual/dev-manual-bsp-appendix.xml: Output updated
The output for creating and switching to the denzil branch
for meta-intel needed to be updated.

(From yocto-docs rev: 54602beb1aa56521c7f5812803724ff53bf11bf1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
93deb57c91 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
The variable substitution had to be changed from
"&DISTRO_NAME;-6.0.0.tar.bz2" to "&DISTRO_NAME;-&POKYVERSION;.tar.bz2".

(From yocto-docs rev: 8ed6cb5e2b56dee3fa8d127b449183ae141a9153)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
2883b754a1 documentation/dev-manual/dev-manual-bsp-appendix.xml: Added link
Created a link to the Yocto Project Files term.

(From yocto-docs rev: 32d7d7008ebcb0b25f77b855025c7059526b9694)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
86325bbc5d documentation/dev-manual/dev-manual-bsp-appendix.xml: typo corrected.
(From yocto-docs rev: 73eba4180162fcd6570ae90c6cac1b16088d4a01)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
746d718f53 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
Had to remove "poky-" from the front of this variable that resolves
to a YP Files top-level name from the tarball.

(From yocto-docs rev: d01d5bd6c4d1fd754d4fccc087d557058d6a5733)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
c498338197 documentation/dev-manual/dev-manual-model.xml: Fixed a bad link title.
(From yocto-docs rev: 0b59afe539b2adc3459c1e22404136d81250d292)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
ba554bd865 documentation/dev-manual/dev-manual-model.xml: Better wording.
(From yocto-docs rev: bb3fa5eeed2784b415d009ae07c39149adc1a147)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
0dda5d88a5 documentation/dev-manual/dev-manual-model.xml: Added BitBake
Throughout the documentation set I have referred to the YP build
system generically in order to avoid use of the "Poky" term.  Richard
has suggested that we refer to the actual thing that does the building.
So I have added BitBake to this particular sentence to refer to the
tool.

(From yocto-docs rev: 4d52fc9c8d1e1cbfca99590fcaa09392f5d235bf)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
06f44161f1 documentation/dev-manual/dev-manual-model.xml: Fixed poor references.
(From yocto-docs rev: 91885c11cc33a10b3d65006304bf5a6ca748f13f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
4b4a018466 documentation/dev-manual/figures/kernel-overview-3.png: Removed file
This file was replaced by a release-specific file named
"kernel-overview-3-denzil.png".

(From yocto-docs rev: e9604111299d3699105225302c43a25e7b2730b1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
caf6532b51 documentation: release-specific figure needed for denzil in dev-manual
dev-manual/dev-manual-model.xml:  The Bare Clone and Copy of the Bare
Clone figures are out of date for denzil.  These needed to be
re-done so they use "linux-yocto-3.2.git" and "my-linux-yocto-3.0-work"
as the root names.  This presents a Makefile issue when making the
denzil and pre-denzil versions of the manuals.  Whenever you use a
different figure for a different release, you need to involve the
BRANCH variable in the Makefile.  This is necessary because you are
using different figures in the generated tarballs.  The set of figures
could be unique to the release.  The outdated figure is
"kernel-overview-3.png" and will eventually be removed (later commit).
I created a new figure named "kernel-overview-3-denzil.png" and used
that in the dev-manual-model.xml file.

documentation/Makefile:  I updated the Makefile to test for a
"denzil" release build and if so include the new file in the
generated tarball.  This commit adds the new .PNG file as well.
Fixed the Makefile so that if you don't supply a BRANCH value, it
uses the latest figures (denzil).

(From yocto-docs rev: 49552b12a967f97eb4d75477895bf32f61d69aa6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
a81cb954bb documentation/dev-manual/dev-manual-model.xml: Wording change
Changed the Note wording to work with the list and not be specific
to a number of supported kernels.

(From yocto-docs rev: a6ffe0834c0ed76ec09315f34c65888c20eed958)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
0089bb9ad0 documentation/dev-manual/dev-manual-model.xml: Updated wording for list
The list specifically named four kernels supported.  I changed it so
it would say "several kernels".

(From yocto-docs rev: b6c34f86c1f3724c1416b8fb7770e1c33587e065)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
ce8d4157db documentation/dev-manual/dev-manual-model.xml: BSP Layer step updated
Several things were out of date in step five of the BSP Creation overview.

(From yocto-docs rev: ec06bd4f7bb1764e4a37328a51923d7b707d19e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
66b18cb5cd documentation/dev-manual/dev-manual-model.xml: Fixed link
The link and wording for the YP Downloads page on the website were
wrong.  Fixed them up.

(From yocto-docs rev: 5baf847c9b5b8af07c8945921352d3aba2a9cfa8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bbf33914ea documentation/dev-manual/dev-manual-common-tasks.xml: Font corrected.
(From yocto-docs rev: 0fab3eecf7f67ae890ff4fc2f6c12fed4aa4d897)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
e1e12bfd0c documentation/dev-manual/dev-manual-common-tasks.xml: Section name fixed.
(From yocto-docs rev: 6c5724d8c0e75efc22dd2f4477a797afeaed5347)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bc7f18c61d documentation/dev-manual/dev-manual-common-tasks.xml: fixed path
Added more detail to the pathname for the example
formfactor_0.0.bbappend file.

(From yocto-docs rev: 32e60999494bb5b69d683008ad804613e4b99d07)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
89e2958475 documentation/dev-manual/dev-manual-common-tasks.xml: link and output fixed
Fixed a reference to Yocto Project Files and provided a link.

Put in an updated version of the meta/recipes-bsp/formfactor/
formfactor_0.0.bb file in the example.

(From yocto-docs rev: 05001174d2337a91e839e991a3e9ecd6657a56f4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
943c6917e6 documentation/dev-manual/dev-manual-start.xml: shell output examples updated
Updated various shell output examples created from cloning various
Git repositories, etc.

(From yocto-docs rev: ed167b1643a60ab30c09c2f42baebf781564ca20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
2b6e86beae documentation/dev-manual/dev-manual-start.xml: updated output for bare clone
Updated the shell output example for when the user creates a bare clone
of the kernel.  We use linux-yocto-3.2 here.

(From yocto-docs rev: e24beac8c8b6c65f94b71f36bf9f5d918ee4375e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
42a9a50771 documentation/dev-manual/dev-manual-newbie.xml: Tag example fixed
The example that creates a local branch based on a release tag
in the "Repositories, Tags, and Branches" section was not optimal.
Darren Hart informed me that naming a local branch the same name
as a tag confuses Git.  Plus, the "-b" option was misplaced.
Renamed the local branch to have "my-" in front of it and moved
the "-b" option earlier in the command.

(From yocto-docs rev: 24ab16d18fb317efb86d2c4ddb2ac1a1449df519)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
74c34c9d3c documentation/dev-manual/dev-manual-newbie.xml: Fixed branch example
The example in the "Repositories, Tags, and Branches" section that
creates a local branch that tracks the upstream branch is incorrect.
The syntax should be "git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;".

Fixed it.

(From yocto-docs rev: 7b47dd460f240a0d7f07edf2767bcad1ddc9d4c3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
e9b8cf485c documentation/dev-manual/dev-manual-newbie.xml: Link added for TOPDIR
(From yocto-docs rev: e02c1762fadd22f6ffc06e91ac82ebb59a7a7f68)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
2863d953bd documentation/dev-manual/dev-manual-intro.xml: Hob and BA added
Added Hob and Build Appliance to the list of other stuff the user
might want to reference.

(From yocto-docs rev: 74ca0a95f0ea1b2045a42f0895ba874bdfa2d46c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
716bdd4bf5 documentation/dev-manual/dev-manual-start.xml: Misc fixes
Better wording for the role of BitBake.  Updated shell output for
the clone of poky.

(From yocto-docs rev: 0f7d9557413827f82388d3fe677109074f04e30c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Paul Eggleton
dfecd3e3d7 documentation/yocto-project-qs/yocto-project-qs.xml: Package requirements
The following packages no longer need to be installed on the host
system:

* python-psyco
* help2man
* cvs
* hg

Additionally, linuxdoc-tools was mentioned twice in the Fedora list.

(From yocto-docs rev: bf7f37e040e5d5e19738f4c3a313acfd406351e3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
142de43be2 documentation/dev-manual/dev-manual-common-tasks.xml: Fix customizing example
As suggested by Paul Eggleton and Richard Purdie, the example that
describes another method for creating a custom image was modified
so that it is based on an existing recipe instead of requiring a
new image.

Reported-by: Paul Eggleton <paul.eggleton@linux.intel.com>
(From yocto-docs rev: b5b32be9087c3d1c8e8d97751ce2cce09829f23b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
752c707df3 documentation/poky.ent: Changed "latest" to "current"
Needed to change this so that the manuals will build correctly and
manual links will not point to the "latest" version of manuals on the
YP website.  This change should have been made prior to the final
1.2 build so that it would have been in the 1.2 tarball.

(From yocto-docs rev: a8615e05aef205629c832041f30c76567d8359bd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 20:57:24 +01:00
Richard Purdie
d20a24310e self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: fd989e1bceef6df36619ba8944c8141abefd282e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:21:45 +01:00
Richard Purdie
8e04664ffd self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 117ca04008415ed0e6e10dcd373ab5f685b3225a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:17:25 +01:00
Dongxiao Xu
3ab5d73f0c sanity.bbclass: Add a new case to issue sanity_check()
Judge if "SanityCheck" event is received, it will issue the
sanity_check() and send "SanityCheckPassed" back if succeeded.
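
A sketch of the kind of handler branch this adds (the surrounding event
handler and the existing check_sanity() helper are assumed from the
class, not introduced here):

    addhandler check_sanity_eventhandler
    python check_sanity_eventhandler() {
        if bb.event.getName(e) == "SanityCheck":
            check_sanity(e)
            bb.event.fire(bb.event.SanityCheckPassed(), e.data)
    }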

(From OE-Core rev: 19704f9e69ecf09531687385b478b47f49fe372d)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:47 +01:00
Dongxiao Xu
33f048240d Hob: Issue sanity check after parse is completed
In the original scheme, the sanity check is part of the parsing process.
If a sanity check fails, the parsing fails and the values shown in the
Hob GUI may not be correct.

With this commit, Hob will actively issue sanity_check() after the
parsing is completed.

This fixes [YOCTO #2361]

(Bitbake rev: 36968815dcc91759eeacb308bf4b294af416eee5)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:42 +01:00
Dongxiao Xu
0bf04aa4ad Hob: Add proxy setting into setting's md5
If the user changes the proxy setting, we reparse the configuration
because it may require a sanity check.

(Bitbake rev: 0be54917cd88ea8f110027a7840ac69a411fd589)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:36 +01:00
Dongxiao Xu
612555e6fe event.py: Add SanityCheck and SanityCheckPassed events
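
A sketch of what such event definitions typically look like in
bb/event.py (illustrative, not necessarily the exact code added):

    class SanityCheck(Event):
        """Request that the sanity checks be run"""

    class SanityCheckPassed(Event):
        """The sanity checks completed successfully"""
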
(Bitbake rev: 4d7bf9d813229b78b1cd87d06f7042e7923b7db4)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:29 +01:00
Richard Purdie
35196ff703 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 85bebd85c4f6603ac8fc1290121c34b92cc434f9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:13:00 +01:00
Lianhao Lu
0a48c697d7 pseudo: PR bump.
Bump PR value due to the commit
c6c701f424aeb502d20ff02d02712e56f4e259a5.

(From OE-Core rev: b6ee2880fccf04923ede31256ea418451cbf2e46)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:10:39 +01:00
Scott Rifenbark
7e56770a60 documentation: Updated Manual Revision Tables again.
After some discussion with Song and Richard, the dates in the
manual revision tables have been updated to "April 2012" for the
1.2 release.

(From yocto-docs rev: b3fc2ec7c5aedb8ea0a2d502bdcd7e8f4092ed96)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:16 +01:00
Scott Rifenbark
9a548f0ee4 documentation: Replacements for "1.1" and "edison", etc.
I did a quick and dirty scrub over the manuals for the strings
"1.1" and "edison".  I found some instances that were not properly
variablized.  I also discovered some references to
linux-yocto-3.0-1.1.x.  All but one instance of this needed to be
changed to linux-yocto-3.2.

(From yocto-docs rev: 620fb4b7626defcefc8a039de09ae4599ee7f454)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:09 +01:00
Scott Rifenbark
946c650a47 documentation: Manual Revision Tables updated
Five tables updated for the five manuals that have the tables.
Used "May 2012" as the date.

(From yocto-docs rev: 0d4d46ba300c07ff9c73186506be5b409bef9d1b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:02 +01:00
Scott Rifenbark
f99c947c32 documentation/yocto-project-qs/yocto-project-qs.xml: Added Build Appliance
Added a blurb about the Build Appliance to the start of the QS.

(From yocto-docs rev: b2766121c05740300fd5a6cea2f3b8a2f62db6e5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:55 +01:00
Tom Zanussi
9ffbd2ef22 documentation/dev-manual/dev-manual-common-tasks.xml: removed kernel26
kernel26 is now obsolete, so remove mention of it from the docs.

(From yocto-docs rev: 7b9da106d746192f802095584b04e3ee8347eabd)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:48 +01:00
Scott Rifenbark
66625417b4 documentation/poky-ref-manual/ref-images.xml: added link
Added the link for the Build Appliance page to the description of the
self-hosted image.

(From yocto-docs rev: 719ba4308489b29eefa7f08ddffb65bd5e41fc2c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:40 +01:00
Joshua Lock
e95ce40abd scripts/hob: disable sanity checks when launching
This enables us to use the GUI to change any settings which might cause
sanity checks to fail, such as the proxy configuration.

(From OE-Core rev: fe98d1c7159636f123b27292bbd4cc224b532bf0)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:33 +01:00
Joshua Lock
4a83ebbee0 sanity.bbclass: add variable to disable the sanity checks
It's useful for Hob to be able to disable the sanity checks completely
without marking them as passed so that the user can get into the GUI to
configure their settings, etc.

Add a variable, DISABLE_SANITY_CHECKS, to do so.
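
Hypothetical usage, e.g. set by a frontend in the environment or in
conf/local.conf (illustrative only, not part of this commit):

    DISABLE_SANITY_CHECKS = "1"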

(From OE-Core rev: b022641f939bcfcdaddddc4db3af4d2dc70de832)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:26 +01:00
Richard Purdie
90705b36ad python: Fix various contamination issues leading to broken/missing c modules
The move of libcrypto to /lib instead of /usr/lib has broken the _hashlib module
compilation. There were also a number of other failing modules which should
have been building correctly. This turned out partly to be the /lib issue
but also due to a number of native paths creeping into compiler commandlines.

These changes add /lib to the search directories and remove
a number of host contamination issues within setup.py. Post-release we
should go through this file further and just delete large sections
of it, as it's hard to be sure what strange paths python is injecting as
search paths.

This patch also fixes issues where re-execution of the compile task
would corrupt the Makefile in various ways, again leading to puzzling
paths within the configuration.

(From OE-Core rev: 20e2761e1da1cb5dcd267e161f2a6b6a429e9f39)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:19 +01:00
Richard Purdie
be5a5c7e7b bitbake.conf: Add a STAGING_BASELIBDIR variable that recipes can use to find base_libdir
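
A hypothetical recipe fragment showing the intended kind of use (the
configure option name is invented for illustration):

    EXTRA_OECONF += "--with-crypto-libdir=${STAGING_BASELIBDIR}"
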
(From OE-Core rev: 4697911991caa2f2a21dd43f54e0c4404d722873)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:12 +01:00
Joshua Lock
64471e9340 hob: enable sanity checks after launch
To ensure the user's configuration is sanity tested, enable the sanity
checks after the GUI has started but before any parsing is done.

(Bitbake rev: 244ce2b900ae6cecbeeccfe2056e61c132476261)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:05 +01:00
Richard Purdie
4becd60e65 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: b19af63)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:08:34 +01:00
Richard Purdie
9fcfda78b9 poky-tiny: Drop now unneeded DISTRO_FEATURES_LIBC_TOOLCHAIN (after gettext fix)
After the recent gettext dependency fix (commit 6e5cb40dfa
"gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values",
its no longer necessary to have to have these options to build meta-toolchain.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:05:58 +01:00
Richard Purdie
8558c3e1f4 initramfs-live-boot: Disable unionfs until its issue with the system rootdir are resolved
There are issues with the current unionfs when making a union mount over "/".
Until these are resolved we can't use unionfs for live booting so disable this
temporarily as a workaround.

unionfs is usable in other circumstances.

[YOCTO #2331 workaround]

(From OE-Core rev: 60ee26ae23132b916019d58e20b8c2e1ddd2b471)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
0bfb42dbb6 pseudo: Drop nativesdk wrapper and link against old memcpy symbol
The -nativesdk pseudo wrapper setting LD_LIBRARY_PATH turned out to be a
bad idea since it can mix up different libc and lib-dl versions, which
may or may not work depending on the phase of the moon.

As an alternative to solving the original problem, this patch drops the
symbol version requirement on memcpy, which allows pseudo to work with
libcs back to 2.7; this should be sufficient for our supported targets
using nativesdk.

[YOCTO #2299]
[YOCTO #2351]

(From OE-Core rev: c6c701f424aeb502d20ff02d02712e56f4e259a5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
37b069ea5d pseudo: Fix bashisms
(From OE-Core rev: 90e22bbb316088fa951d51e75de4e5424bd51ed6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
6d7260e8f6 package.bbclass: Ensure kernel modules get stripped
Kernel modules are not marked as executable but we do expect to strip them.
This patch adds in the missing code to ensure we do this. Without it,
images are getting significantly bloated in size.
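
The shape of the addition, as a sketch only (function and variable names
are illustrative, not the literal patch):

    # inside the loop that classifies packaged files for stripping
    elif fpath.endswith(".ko") and "/lib/modules/" in fpath:
        # kernel modules are not executable but should still be stripped
        elffiles[fpath] = elf_file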

(From OE-Core rev: 00b0a5f2f51bb3f88bbb9ae558c2859e3c1c406c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Richard Purdie
236bda9ed6 gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values
In particular, this overwrites the value from cross-canadian.bbclass in
some cases which isn't the desired behaviour and unnecessarily
complicates/breaks the dependency chain.
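
The general shape of the fix is to stop clobbering the variable and only
provide a default; a sketch (the value shown is a placeholder):

    # before: unconditional assignment overwrites any earlier value
    #DEPENDS_GETTEXT = "gettext gettext-native"
    # after: weak default assignment preserves e.g. cross-canadian's value
    DEPENDS_GETTEXT ??= "gettext gettext-native"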

(From OE-Core rev: 751ead4fa7d4120de906a1d9cb1d5a29357bebad)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Richard Purdie
375835092c qemu: Backport a patch to solve SSE2 instruction emulation issues
This fix addresses various issues seen in qemux86-64 images:
 * scroll bars in matchbox-terminal not working
 * files not appearing in pcmanfm
 * warnings on the console from glib/gobject about invalid gdouble values

It's due to an emulation issue in qemu which the backported patch fixes.

I managed to debug it to a specific function, Khem found the qemu patch
to backport, thanks Khem!

[YOCTO #1906]

(From OE-Core rev: 69d083f8b8d8f7d095ed5682d305870c4d93fe62)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Scott Rifenbark
2c3d4f5bee documentation/bsp-guide/bsp.xml: spelling corrected.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: c0ee8ce391114f7a5b4f1c59fdf997ba4f3bcf75)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:15 +01:00
Scott Rifenbark
45114a9df0 documentation/poky-ref-manual/ref-images.xml: Added self-hosted image
I added the self-hosted-image to the list of images.

(From yocto-docs rev: a8265cb523705a374d23bf60aab5b7969ad937fc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:05 +01:00
Richard Purdie
08290c6003 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 00256125873ff6f1630743a712e882e5f473a9d2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 15:59:56 +01:00
Elizabeth Flanagan
729e7f774c distro.conf: Flipping for denzil
Flipping values in distro.conf for upcoming release

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-04-18 15:55:32 +01:00
3840 changed files with 211372 additions and 196167 deletions

40
.gitignore vendored
View File

@@ -1,23 +1,41 @@
*.pyc
*.pyo
/*.patch
build*/
build*/conf/local.conf
build*/conf/bblayers.conf
build*/downloads
build*/tmp/
build*/sstate-cache
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*/
meta-*
!meta-skeleton
!meta-hob
hob-image-*.bb
!meta-demoapps
*.swp
*.orig
*.rej
*~
!meta-yocto
!meta-yocto-bsp
bitbake/doc/manual/html/
bitbake/doc/manual/pdf/
bitbake/doc/manual/txt/
bitbake/doc/manual/xhtml/
pull-*/
documentation/adt-manual/adt-manual.html
documentation/adt-manual/adt-manual.pdf
documentation/adt-manual/adt-manual.tgz
documentation/dev-manual/dev-manual.html
documentation/dev-manual/dev-manual.pdf
documentation/dev-manual/dev-manual.tgz
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf
documentation/mega-manual/mega-manual.html
documentation/mega-manual/mega-manual.pdf
documentation/mega-manual/mega-manual.tgz

22
README
View File

@@ -18,7 +18,7 @@ e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/documentation
http://yoctoproject.org/community/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
@@ -27,23 +27,3 @@ DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
Where to Send Patches
=====================
As Poky is an integration repository, patches against the various components
should be sent to their respective upstreams.
bitbake:
bitbake-devel@lists.openembedded.org
meta-yocto:
poky@yoctoproject.org
Most everything else should be sent to the OpenEmbedded Core mailing list. If
in doubt, check the oe-core git repository for the content you intend to modify.
Before sending, be sure the patches apply cleanly to the current oe-core git
repository.
openembedded-core@lists.openembedded.org
Note: The scripts directory should be treated with extra care as it is a mix
of oe-core and poky-specific files.

View File

@@ -1,34 +1,28 @@
Poky Hardware README
====================
This file gives details about using Poky with the reference machines
supported out of the box. A full list of supported reference target machines
can be found by looking in the following directories:
meta/conf/machine/
meta-yocto-bsp/conf/machine/
If you are in doubt about using Poky/OpenEmbedded with your hardware, consult
the documentation for your board/device.
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:
http://yoctoproject.org/documentation
http://yoctoproject.org/community/documentation
Support for physical reference hardware has now been split out into a
meta-yocto-bsp layer which can be removed separately from other layers if not
needed.
Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.
QEMU Emulation Targets
======================
To simplify development, the build system supports building images to
work with the QEMU emulator in system emulation mode. Several architectures
are currently supported:
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:
* ARM (qemuarm)
* x86 (qemux86)
@@ -36,33 +30,32 @@ are currently supported:
* PowerPC (qemuppc)
* MIPS (qemumips)
Use of the QEMU images is covered in the Yocto Project Reference Manual.
The appropriate MACHINE variable value corresponding to the target is given
in brackets.
Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.
Hardware Reference Boards
=========================
The following boards are supported by the meta-yocto-bsp layer:
The following boards are supported by Poky's core layer:
* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
For more information see the board's section below. The appropriate MACHINE
variable value corresponding to the board is given in brackets.
For more information see the board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Consumer Devices
================
The following consumer devices are supported by the meta-yocto-bsp layer:
The following consumer devices are supported by Poky's core layer:
* Intel Atom based PCs and devices (atom-pc)
For more information see the device's section below. The appropriate MACHINE
variable value corresponding to the device is given in brackets.
For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
@@ -85,7 +78,7 @@ supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversable.
@@ -138,7 +131,7 @@ USB Device:
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.
c. Copy the contents of the image to the USB-ZIP mode device:
c. Copy the contents of the poky image to the USB-ZIP mode device:
# mkdir /tmp/image
# mkdir /tmp/usbkey
@@ -288,8 +281,8 @@ anything here.
Load the kernel and dtb (device tree blob), and boot the system as follows:
1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
files from the tmp/deploy directory, and make them available on your TFTP
server.
files from the Poky build tmp/deploy directory, and make them available on
your TFTP server.
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
@@ -308,9 +301,9 @@ Load the kernel and dtb (device tree blob), and boot the system as follows:
5. Download the kernel and dtb, and boot:
=> tftp 1000000 uImage-mpc8315e-rdb.bin
=> tftp 2000000 uImage-mpc8315e-rdb.dtb
=> bootm 1000000 - 2000000
=> tftp 800000 uImage-mpc8315e-rdb.bin
=> tftp 780000 uImage-mpc8315e-rdb.dtb
=> bootm 800000 - 780000
Ubiquiti Networks RouterStation Pro (routerstationpro)

View File

@@ -40,17 +40,9 @@ from bb import cooker
from bb import ui
from bb import server
__version__ = "1.18.0"
__version__ = "1.15.2"
logger = logging.getLogger("BitBake")
# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
class BBConfiguration(object):
"""
@@ -64,11 +56,10 @@ class BBConfiguration(object):
def get_ui(config):
if not config.ui:
# modify 'ui' attribute because it is also read by cooker
config.ui = os.environ.get('BITBAKE_UI', 'knotty')
interface = config.ui
if config.ui:
interface = config.ui
else:
interface = 'knotty'
try:
# Dynamically load the UI based on the ui name. Although we
@@ -78,7 +69,7 @@ def get_ui(config):
return getattr(module, interface).main
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default]." % interface)
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default], knotty2." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
@@ -126,9 +117,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified cmd such as 'compile' and run the default task for the specified target(s)",
action = "store", dest = "invalidate_stamp")
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "prefile", default = [])
@@ -150,13 +138,13 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all recipes",
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax, and the pn-buildlist to show the build list",
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
@@ -182,8 +170,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to",
action = "store", dest = "bind", default = False)
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks, forces builds",
action = "store_true", dest = "nosetscene", default = False)
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)
@@ -214,10 +200,9 @@ Default BBFILES are the .bb files in the current directory.""")
if configuration.bind and configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
bb.msg.init_msgconfig(configuration.verbose, configuration.debug,
configuration.debug_domains)
@@ -229,8 +214,10 @@ Default BBFILES are the .bb files in the current directory.""")
# Before we start modifying the environment we should take a pristine
# copy for possible later use
initialenv = os.environ.copy()
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()
server = server.BitBakeServer()
if configuration.bind:
@@ -245,7 +232,7 @@ Default BBFILES are the .bb files in the current directory.""")
server.addcooker(cooker)
server.saveConnectionDetails()
server.detach()
server.detach(cooker_logfile)
# Should no longer need to ever reference cooker
del cooker
@@ -256,10 +243,6 @@ Default BBFILES are the .bb files in the current directory.""")
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:

View File

@@ -1,102 +1,12 @@
#!/usr/bin/env python
# bitbake-diffsigs
# BitBake task signature data comparison utility
#
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import warnings
import fnmatch
import optparse
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
logger = logging.getLogger('BitBake')
def find_compare_task(bbhandler, pn, taskname):
""" Find the most recent signature files for the specified PN/task and compare them """
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
else:
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
return recout
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print '\n'.join(output)
sys.exit(0)
parser = optparse.OptionParser(
usage = """
%prog -t recipename taskname
%prog sigdatafile1 sigdatafile2
%prog sigdatafile1""")
parser.add_option("-t", "--task",
help = "find the signature data files for last two runs of the specified task and compare them",
action="store_true", dest="taskmode")
options, args = parser.parse_args(sys.argv)
if len(args) == 1:
parser.print_help()
if len(sys.argv) > 2:
bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
if options.taskmode:
tinfoil = bb.tinfoil.Tinfoil()
if len(args) < 3:
logger.error("Please specify a recipe and task name")
sys.exit(1)
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, args[1], args[2])
else:
if len(args) == 2:
output = bb.siggen.dump_sigfile(sys.argv[1])
else:
output = bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
if output:
print '\n'.join(output)
bb.siggen.dump_sigfile(sys.argv[1])

View File

@@ -6,6 +6,4 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), '
import bb.siggen
output = bb.siggen.dump_sigfile(sys.argv[1])
if output:
print '\n'.join(output)
bb.siggen.dump_sigfile(sys.argv[1])

View File

@@ -6,27 +6,14 @@
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import cmd
import logging
import warnings
import os
import sys
import fnmatch
from collections import defaultdict
import re
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -36,14 +23,26 @@ import bb.cache
import bb.cooker
import bb.providers
import bb.utils
import bb.tinfoil
from bb.cooker import state
import bb.fetch2
logger = logging.getLogger('BitBake')
warnings.filterwarnings("ignore", category=DeprecationWarning)
def main(args):
cmds = Commands()
# Set up logging
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
cmds = Commands(initialenv)
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
@@ -54,11 +53,42 @@ def main(args):
class Commands(cmd.Cmd):
def __init__(self):
def __init__(self, initialenv):
cmd.Cmd.__init__(self)
self.bbhandler = bb.tinfoil.Tinfoil()
self.returncode = 0
self.bblayers = (self.bbhandler.config_data.getVar('BBLAYERS', True) or "").split()
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
self.bblayers = (self.config_data.getVar('BBLAYERS', True) or "").split()
def register_idle_function(self, function, data):
pass
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
self.cooker_data.appends = self.cooker.appendlist
def check_prepare_cooker(self):
if not self.cooker_data:
self.prepare_cooker()
def default(self, line):
"""Handle unrecognised commands"""
@@ -73,7 +103,7 @@ class Commands(cmd.Cmd):
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = list(set(self.get_names()))
procnames = self.get_names()
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
@@ -83,13 +113,14 @@ class Commands(cmd.Cmd):
def do_show_layers(self, args):
"""show current configured layers"""
self.bbhandler.prepare(config_only = True)
self.check_prepare_cooker()
logger.plain('')
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.bbhandler.cooker.status.bbfile_config_priorities:
for layer, _, regex, pri in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
@@ -107,7 +138,7 @@ class Commands(cmd.Cmd):
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer)
"""list overlayed recipes (where the same recipe exists in another layer that has a higher layer priority)
usage: show-overlayed [-f] [-s]
@@ -120,7 +151,7 @@ Options:
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.bbhandler.prepare()
self.check_prepare_cooker()
show_filenames = False
show_same_ver_only = False
@@ -152,7 +183,7 @@ Options:
# factor - however, each layer.conf is free to either prepend or append to
# BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
# not be exactly the order present in bblayers.conf either.
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
bbpath = str(self.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
@@ -203,7 +234,7 @@ Options:
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.bbhandler.prepare()
self.check_prepare_cooker()
show_filenames = False
show_multi_provider_only = False
@@ -225,15 +256,15 @@ Options:
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.bbhandler.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.bbhandler.cooker.configuration.data, self.bbhandler.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.bbhandler.cooker.status)
pkg_pn = self.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.cooker.configuration.data, self.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.cooker.status)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.bbhandler.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.bbhandler.cooker.calc_bbfile_priority(fileitem) )
skiplist = self.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.cooker.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
@@ -341,7 +372,7 @@ build results (as the layer priority order has effectively changed).
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.bbhandler.prepare()
self.check_prepare_cooker()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
@@ -371,8 +402,8 @@ build results (as the layer priority order has effectively changed).
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.bbhandler.cooker.overlayed.iterkeys():
for of in self.bbhandler.cooker.overlayed[f]:
for f in self.cooker.overlayed.iterkeys():
for of in self.cooker.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
@@ -396,8 +427,8 @@ build results (as the layer priority order has effectively changed).
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.bbhandler.cooker.appendlist:
appends = self.bbhandler.cooker.appendlist[f1]
if f1 in self.cooker_data.appends:
appends = self.cooker_data.appends[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
@@ -406,9 +437,9 @@ build results (as the layer priority order has effectively changed).
appended_recipes.append(f1)
# Take care of when some layers are excluded and yet we have included bbappends for those recipes
for recipename in self.bbhandler.cooker.appendlist.iterkeys():
for recipename in self.cooker_data.appends.iterkeys():
if recipename not in appended_recipes:
appends = self.bbhandler.cooker.appendlist[recipename]
appends = self.cooker_data.appends[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
@@ -426,14 +457,14 @@ build results (as the layer priority order has effectively changed).
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.bbhandler.config_data.getVar('BBFILES', True)).split()
bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
@@ -456,28 +487,13 @@ build results (as the layer priority order has effectively changed).
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
if regex.match(os.path.join(layerdir, 'test')):
return self.get_layer_name(layerdir)
return "?"
def get_file_layerdir(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return layerdir
return "?"
def remove_layer_prefix(self, f):
"""Remove the layer_dir prefix, e.g., f = /path/to/layer_dir/foo/blah, the
return value will be: layer_dir/foo/blah"""
f_layerdir = self.get_file_layerdir(f)
prefix = os.path.join(os.path.dirname(f_layerdir), '')
return f[len(prefix):] if f.startswith(prefix) else f
def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
@@ -497,14 +513,14 @@ usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.bbhandler.prepare()
if not self.bbhandler.cooker.appendlist:
self.check_prepare_cooker()
if not self.cooker_data.appends:
logger.plain('No append files found')
return
logger.plain('=== Appended recipes ===')
logger.plain('State of append files:')
pnlist = list(self.bbhandler.cooker_data.pkg_pn.keys())
pnlist = list(self.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
@@ -512,19 +528,19 @@ Recipes are listed with the bbappends that apply to them as subitems.
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.bbhandler.cooker_data.pkg_pn[pn]
filenames = self.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.bbhandler.cooker.configuration.data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
self.cooker.configuration.data,
self.cooker_data,
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.bbhandler.cooker.skiplist.iterkeys()]
for f in self.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
@@ -550,171 +566,30 @@ Recipes are listed with the bbappends that apply to them as subitems.
continue
basename = os.path.basename(filename)
appends = self.bbhandler.cooker.appendlist.get(basename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.append((basename, list(appends)))
else:
notappended.append(basename)
return appended, notappended
def do_show_cross_depends(self, args):
"""figure out the dependency between recipes that crosses a layer boundary.
usage: show-cross-depends [-f]
class Config(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
Figure out the dependency between recipes that crosses a layer boundary.
def __getattr__(self, attribute):
try:
return super(Config, self).__getattribute__(attribute)
except AttributeError:
return None
Options:
-f show full file path
NOTE:
The .bbappend file can impact the dependency.
"""
self.bbhandler.prepare()
show_filenames = False
for arg in args.split():
if arg == '-f':
show_filenames = True
else:
sys.stderr.write("show-cross-depends: invalid option %s\n" % arg)
self.do_help('')
return
pkg_fn = self.bbhandler.cooker_data.pkg_fn
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
self.require_re = re.compile(r"require\s+(.+)")
self.include_re = re.compile(r"include\s+(.+)")
self.inherit_re = re.compile(r"inherit\s+(.+)")
# The bb's DEPENDS and RDEPENDS
for f in pkg_fn:
f = bb.cache.Cache.virtualfn2realfn(f)[0]
# Get the layername that the file is in
layername = self.get_file_layer(f)
# The DEPENDS
deps = self.bbhandler.cooker_data.deps[f]
for pn in deps:
if pn in self.bbhandler.cooker_data.pkg_pn:
best = bb.providers.findBestProvider(pn,
self.bbhandler.cooker.configuration.data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
self.check_cross_depends("DEPENDS", layername, f, best[3], show_filenames)
# The RDPENDS
all_rdeps = self.bbhandler.cooker_data.rundeps[f].values()
# Remove the duplicated or null one.
sorted_rdeps = {}
# The all_rdeps is the list in list, so we need two for loops
for k1 in all_rdeps:
for k2 in k1:
sorted_rdeps[k2] = 1
all_rdeps = sorted_rdeps.keys()
for rdep in all_rdeps:
all_p = bb.providers.getRuntimeProviders(self.bbhandler.cooker_data, rdep)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rdep,
self.bbhandler.cooker.configuration.data,
self.bbhandler.cooker_data)[0][0]
self.check_cross_depends("RDEPENDS", layername, f, best, show_filenames)
# The inherit class
cls_re = re.compile('classes/')
if f in self.bbhandler.cooker_data.inherits:
inherits = self.bbhandler.cooker_data.inherits[f]
for cls in inherits:
# The inherits' format is [classes/cls, /path/to/classes/cls]
# ignore the classes/cls.
if not cls_re.match(cls):
inherit_layername = self.get_file_layer(cls)
if inherit_layername != layername:
if not show_filenames:
f_short = self.remove_layer_prefix(f)
cls = self.remove_layer_prefix(cls)
else:
f_short = f
logger.plain("%s inherits %s" % (f_short, cls))
# The 'require/include xxx' in the bb file
pv_re = re.compile(r"\${PV}")
fnfile = open(f, 'r')
line = fnfile.readline()
while line:
m, keyword = self.match_require_include(line)
# Found the 'require/include xxxx'
if m:
needed_file = m.group(1)
# Replace the ${PV} with the real PV
if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
needed_file = re.sub(r"\${PV}", pv, needed_file)
self.print_cross_files(bbpath, keyword, layername, f, needed_file, show_filenames)
line = fnfile.readline()
fnfile.close()
# The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
conf_re = re.compile(".*/conf/machine/[^\/]*\.conf$")
inc_re = re.compile(".*\.inc$")
# The "inherit xxx" in .bbclass
bbclass_re = re.compile(".*\.bbclass$")
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
for dirpath, dirnames, filenames in os.walk(layerdir):
for name in filenames:
f = os.path.join(dirpath, name)
s = conf_re.match(f) or inc_re.match(f) or bbclass_re.match(f)
if s:
ffile = open(f, 'r')
line = ffile.readline()
while line:
m, keyword = self.match_require_include(line)
# Only bbclass has the "inherit xxx" here.
bbclass=""
if not m and f.endswith(".bbclass"):
m, keyword = self.match_inherit(line)
bbclass=".bbclass"
# Find a 'require/include xxxx'
if m:
self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, show_filenames)
line = ffile.readline()
ffile.close()
def print_cross_files(self, bbpath, keyword, layername, f, needed_filename, show_filenames):
"""Print the depends that crosses a layer boundary"""
needed_file = bb.utils.which(bbpath, needed_filename)
if needed_file:
# Which layer is this file from
needed_layername = self.get_file_layer(needed_file)
if needed_layername != layername:
if not show_filenames:
f = self.remove_layer_prefix(f)
needed_file = self.remove_layer_prefix(needed_file)
logger.plain("%s %s %s" %(f, keyword, needed_file))
def match_inherit(self, line):
"""Match the inherit xxx line"""
return (self.inherit_re.match(line), "inherits")
def match_require_include(self, line):
"""Match the require/include xxx line"""
m = self.require_re.match(line)
keyword = "requires"
if not m:
m = self.include_re.match(line)
keyword = "includes"
return (m, keyword)
def check_cross_depends(self, keyword, layername, f, needed_file, show_filenames):
"""Print the DEPENDS/RDEPENDS file that crosses a layer boundary"""
best_realfn = bb.cache.Cache.virtualfn2realfn(needed_file)[0]
needed_layername = self.get_file_layer(best_realfn)
if needed_layername != layername:
if not show_filenames:
f = self.remove_layer_prefix(f)
best_realfn = self.remove_layer_prefix(best_realfn)
logger.plain("%s %s %s" % (f, keyword, best_realfn))
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)

View File

@@ -1,38 +0,0 @@
#!/usr/bin/env python
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.utils"]
for t in tests:
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests)

View File

@@ -1,122 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( \
os.path.abspath(__file__))), 'lib'))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
import gtk
import optparse
import pygtk
from bb.ui.crumbs.hobwidget import HobAltButton, HobButton
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog
from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog
# I put all the fs bitbake supported here. Need more test.
DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "btrfs", "squashfs", "ubi", "vmdk"]
Title = "USB Image Writer"
class DeployWindow(gtk.Window):
def __init__(self, image_path=''):
super(DeployWindow, self).__init__()
if len(image_path) > 0:
valid = True
if not os.path.exists(image_path):
valid = False
lbl = "<b>Invalid image file path: %s.</b>\nPress <b>Select Image</b> to select an image." % image_path
else:
image_path = os.path.abspath(image_path)
extend_name = os.path.splitext(image_path)[1][1:]
if extend_name not in DEPLOYABLE_IMAGE_TYPES:
valid = False
lbl = "<b>Undeployable imge type: %s</b>\nPress <b>Select Image</b> to select an image." % extend_name
if not valid:
image_path = ''
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
self.deploy_dialog = DeployImageDialog(Title, image_path, self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR, None, standalone=True)
close_button = self.deploy_dialog.add_button("Close", gtk.RESPONSE_NO)
HobAltButton.style_button(close_button)
close_button.connect('clicked', gtk.main_quit)
write_button = self.deploy_dialog.add_button("Write USB image", gtk.RESPONSE_YES)
HobAltButton.style_button(write_button)
self.deploy_dialog.connect('select_image_clicked', self.select_image_clicked_cb)
self.deploy_dialog.connect('destroy', gtk.main_quit)
response = self.deploy_dialog.show()
def select_image_clicked_cb(self, dialog):
cwd = os.getcwd()
dialog = ImageSelectionDialog(cwd, DEPLOYABLE_IMAGE_TYPES, Title, self, gtk.FILE_CHOOSER_ACTION_SAVE )
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobAltButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nClicked the radio button to select a image."
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
# get the full path of image
image_path = os.path.join(dialog.image_folder, dialog.image_names[0])
self.deploy_dialog.set_image_text_buffer(image_path)
self.deploy_dialog.set_image_path(image_path)
dialog.destroy()
def main():
parser = optparse.OptionParser(
usage = """%prog [-h] [image_file]
%prog writes bootable images to USB devices. You can
provide the image file on the command line or select it using the GUI.""")
options, args = parser.parse_args(sys.argv)
image_file = args[1] if len(args) > 1 else ''
dw = DeployWindow(image_file)
if __name__ == '__main__':
try:
main()
gtk.main()
except Exception:
import traceback
traceback.print_exc(3)

View File

@@ -1,68 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# This is used for dumping the bb_cache.dat, the output format is:
# recipe_path PN PV PACKAGES
#
import os
import sys
import warnings
# For importing bb.cache
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import cPickle as pickle
def main(argv=None):
"""
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print >>sys.stderr, "Error, need one argument!"
return 2
cachefile = argv[0]
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
val = pickled.load()
except Exception:
break
if isinstance(val, CoreRecipeInfo) and (not val.skipped):
pn = val.pn
# Filter out the native recipes.
if key.startswith('virtual:native:') or pn.endswith("-native"):
continue
# 1.0 is the default version for a no PV recipe.
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"
print("%s %s %s %s" % (key, pn, pv, ' '.join(val.packages)))
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))

View File

@@ -10,8 +10,8 @@ if &compatible || version < 600
finish
endif
" .bb, .bbappend and .bbclass
au BufNewFile,BufRead *.{bb,bbappend,bbclass} set filetype=bitbake
" .bb and .bbclass
au BufNewFile,BufRead *.b{b,bclass} set filetype=bitbake
" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake

View File

@@ -104,30 +104,6 @@ Show debug logging for the specified logging domains
.B \-P, \-\-profile
profile the command and print a report
.TP
.B \-uUI, \-\-ui=UI
User interface to use. Currently, hob, depexp, goggle or ncurses can be specified as UI.
.TP
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
Choose which server to use, none, process or xmlrpc.
.TP
.B \-\-revisions-changed
Set the exit code depending on whether upstream floating revisions have changed or not.
.TP
.B \-\-server-only
Run bitbake without UI, the frontend can connect with bitbake server itself.
.TP
.B \-BBIND, \-\-bind=BIND
The name/address for the bitbake server to bind to.
.TP
.B \-\-no\-setscene
Do not run any setscene tasks, forces builds.
.SH ENVIRONMENT VARIABLES
bitbake uses the following environment variables to control its
operation:
.TP
.B BITBAKE_UI
The bitbake user interface; overridden by the \fB-u\fP commandline option.
.SH AUTHORS
BitBake was written by

View File

@@ -228,7 +228,7 @@ addtask printdate before do_build</screen></para>
<para>'nostamp' - don't generate a stamp file for a task. This means the task is always rexecuted.</para>
<para>'fakeroot' - this task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
<para>'umask' - the umask to run the task under.</para>
<para> For the 'deptask', 'rdeptask', 'depends', 'rdepends' and 'recrdeptask' flags please see the dependencies section.</para>
<para> For the 'deptask', 'rdeptask', 'recdeptask' and 'recrdeptask' flags please see the dependencies section.</para>
</section>
<section>
@@ -308,35 +308,37 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>Dependency handling</title>
<para>BitBake handles dependencies at the task level since to allow for efficient operation with multiple processed executing in parallel. A robust method of specifying task dependencies is therefore needed. </para>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>Build Dependencies</title>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>Runtime Dependencies</title>
<para>The PACKAGES variable lists runtime packages and each of these can have RDEPENDS and RRECOMMENDS runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each runtime dependency which must have completed before that task can be executed.</para>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive Dependencies</title>
<para>These are specified with the 'recrdeptask' flag which is used to signify the task(s) of dependencies which must have completed before that task can be executed. It works by looking through the build and runtime dependencies of the current recipe as well as any inter-task dependencies the task has, then adding a dependency on the listed task. It will then recurse through the dependencies of those tasks and so on.</para>
<para>It may be desirable to recurse not just through the dependencies of those tasks but through the build and runtime dependencies of dependent tasks too. If that is the case, the taskname itself should be referenced in the task list, e.g. do_a[recrdeptask] = "do_a do_b".</para>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag, which is used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag, which is used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first.</para>
</section>
<section>
<title>Inter task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS.</para>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
<para>The 'rdepends' flag works in a similar way but takes targets in the runtime namespace instead of the build time dependency namespace.</para>
</section>
</section>
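As a rough illustration of how the varflags described in this section are consumed, the sketch below collects the dependency-related flags of a task into a dictionary. The helper function and the fake flag lookup are assumptions made for the example, not BitBake's actual API.
def collect_dep_flags(task, getVarFlag):
    # Gather the dependency-related flags documented above.
    flags = {}
    for name in ("depends", "rdepends", "deptask", "rdeptask", "recrdeptask"):
        value = getVarFlag(task, name)
        if value:
            flags[name] = value.split()
    return flags
# Example corresponding to: do_configure[deptask] = "do_populate_staging"
fake_flags = {("do_configure", "deptask"): "do_populate_staging"}
print(collect_dep_flags("do_configure", lambda t, f: fake_flags.get((t, f))))
# -> {'deptask': ['do_populate_staging']}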

View File

@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.18.0"
__version__ = "1.15.2"
import sys
if sys.version_info < (2, 6, 0):
@@ -74,6 +74,11 @@ logger.setLevel(logging.DEBUG - 2)
# can result in construction of the various loggers.
import bb.msg
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level:
bb.msg.set_debug_level(level)
from bb import fetch2 as fetch
sys.modules['bb.fetch'] = sys.modules['bb.fetch2']

View File

@@ -29,7 +29,6 @@ import os
import sys
import logging
import shlex
import glob
import bb
import bb.msg
import bb.process
@@ -73,7 +72,7 @@ class TaskBase(event.Event):
self._task = t
self._package = d.getVar("PF", True)
event.Event.__init__(self)
self._message = "recipe %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
self._message = "package %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
def getTask(self):
return self._task
@@ -136,8 +135,7 @@ class LogTee(object):
def __repr__(self):
return '<LogTee {0}>'.format(self.name)
def flush(self):
self.outfile.flush()
def exec_func(func, d, dirs = None):
"""Execute an BB 'function'"""
@@ -176,19 +174,8 @@ def exec_func(func, d, dirs = None):
lockfiles = None
tempdir = data.getVar('T', d, 1)
# or func allows items to be executed outside of the normal
# task set, such as buildhistory
task = data.getVar('BB_RUNTASK', d, 1) or func
if task == func:
taskfunc = task
else:
taskfunc = "%s.%s" % (task, func)
runfmt = data.getVar('BB_RUNFMT', d, 1) or "run.{func}.{pid}"
runfn = runfmt.format(taskfunc=taskfunc, task=task, func=func, pid=os.getpid())
runfile = os.path.join(tempdir, runfn)
bb.utils.mkdirhier(os.path.dirname(runfile))
bb.utils.mkdirhier(tempdir)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))
with bb.utils.fileslocked(lockfiles):
if ispython:
@@ -219,8 +206,6 @@ def exec_func_python(func, d, runfile, cwd=None):
olddir = None
os.chdir(cwd)
bb.debug(2, "Executing python function %s" % func)
try:
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
@@ -230,15 +215,13 @@ def exec_func_python(func, d, runfile, cwd=None):
raise FuncFailed(func, None)
finally:
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
try:
os.chdir(olddir)
except OSError:
pass
def exec_func_shell(func, d, runfile, cwd=None):
def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata
Note on directory behavior. The 'dirs' varflag should contain a list
@@ -251,18 +234,18 @@ def exec_func_shell(func, d, runfile, cwd=None):
with open(runfile, 'w') as script:
script.write('#!/bin/sh -e\n')
data.emit_func(func, script, d)
data.emit_func(function, script, d)
if bb.msg.loggerVerboseLogs:
script.write("set -x\n")
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % func)
script.write("%s\n" % function)
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot'):
if d.getVarFlag(function, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -272,15 +255,11 @@ def exec_func_shell(func, d, runfile, cwd=None):
else:
logfile = sys.stdout
bb.debug(2, "Executing shell function %s" % func)
try:
bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
bb.debug(2, "Shell function %s finished" % func)
raise FuncFailed(function, logfn)
def _task_data(fn, task, d):
localdata = data.createCopy(d)
@@ -311,23 +290,8 @@ def _exec_task(fn, task, d, quieterr):
bb.fatal("T variable not set, unable to build")
bb.utils.mkdirhier(tempdir)
# Determine the logfile to generate
logfmt = localdata.getVar('BB_LOGFMT', True) or 'log.{task}.{pid}'
logbase = logfmt.format(task=task, pid=os.getpid())
# Document the order of the tasks...
logorder = os.path.join(tempdir, 'log.task_order')
try:
logorderfile = file(logorder, 'a')
except OSError:
logger.exception("Opening log file '%s'", logorder)
pass
logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
logorderfile.close()
# Setup the courtesy link to the logfn
loglink = os.path.join(tempdir, 'log.{0}'.format(task))
logbase = 'log.{0}.{1}'.format(task, os.getpid())
logfn = os.path.join(tempdir, logbase)
if loglink:
bb.utils.remove(loglink)
@@ -350,7 +314,6 @@ def _exec_task(fn, task, d, quieterr):
# Handle logfiles
si = file('/dev/null', 'r')
try:
bb.utils.mkdirhier(os.path.dirname(logfn))
logfile = file(logfn, 'w')
except OSError:
logger.exception("Opening log file '%s'", logfn)
@@ -377,7 +340,6 @@ def _exec_task(fn, task, d, quieterr):
bblogger.addHandler(errchk)
localdata.setVar('BB_LOGFILE', logfn)
localdata.setVar('BB_RUNTASK', task)
event.fire(TaskStarted(task, localdata), localdata)
try:
@@ -423,27 +385,13 @@ def _exec_task(fn, task, d, quieterr):
return 0
def exec_task(fn, task, d, profile = False):
def exec_task(fn, task, d):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
if profile:
profname = "profile-%s.log" % (os.path.basename(fn) + "-" + task)
try:
import cProfile as profile
except:
import profile
prof = profile.Profile()
ret = profile.Profile.runcall(prof, _exec_task, fn, task, d, quieterr)
prof.dump_stats(profname)
bb.utils.process_profilelog(profname)
return ret
else:
return _exec_task(fn, task, d, quieterr)
return _exec_task(fn, task, d, quieterr)
except Exception:
from traceback import format_exc
if not quieterr:
@@ -479,55 +427,15 @@ def stamp_internal(taskname, d, file_name):
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if bb.parse.cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
bb.utils.mkdirhier(os.path.dirname(stamp))
return stamp
def stamp_cleanmask_internal(taskname, d, file_name):
"""
Internal stamp helper function to generate stamp cleaning mask
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp_base_clean[file_name].get(taskflagname) or d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVarFlag(taskflagname, 'stamp-base-clean', True) or d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return []
cleanmask = bb.parse.siggen.stampcleanmask(stamp, file_name, taskname, extrainfo)
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
def make_stamp(task, d, file_name = None):
"""
Creates/updates a stamp for a given task
(d can be a data dict or dataCache)
"""
cleanmask = stamp_cleanmask_internal(task, d, file_name)
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
if "sigdata" in name:
continue
# Preserve taint files in the stamps directory
if name.endswith('.taint'):
continue
os.unlink(name)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
@@ -599,7 +507,6 @@ def add_tasks(tasklist, d):
deptask = data.expand(flags[name], d)
task_deps[name][task] = deptask
getTask('depends')
getTask('rdepends')
getTask('deptask')
getTask('rdeptask')
getTask('recrdeptask')
@@ -608,10 +515,9 @@ def add_tasks(tasklist, d):
getTask('noexec')
getTask('umask')
task_deps['parents'][task] = []
if 'deps' in flags:
for dep in flags['deps']:
dep = data.expand(dep, d)
task_deps['parents'][task].append(dep)
for dep in flags['deps']:
dep = data.expand(dep, d)
task_deps['parents'][task].append(dep)
# don't assume holding a reference
data.setVar('_task_deps', task_deps, d)
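The stamp handling earlier in this file removes and recreates the stamp file rather than merely updating its timestamp; the following standalone sketch shows that trick, with a temporary directory standing in for the real stamps location.
import os
import tempfile
def touch_stamp(stamp_path):
    # Remove and recreate the file so even NFS mounts with unreliable
    # attribute caching see a fresh timestamp.
    if os.path.exists(stamp_path):
        os.remove(stamp_path)
    open(stamp_path, "w").close()
stampdir = tempfile.mkdtemp()
touch_stamp(os.path.join(stampdir, "example.do_compile"))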

View File

@@ -1,12 +1,11 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Cache implementation
# BitBake 'Event' implementation
#
# Caching of bitbake variables before task execution
# Copyright (C) 2006 Richard Purdie
# Copyright (C) 2012 Intel Corporation
# but small sections based on code from bin/bitbake:
# Copyright (C) 2003, 2004 Chris Larson
@@ -43,7 +42,7 @@ except ImportError:
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "145"
__cache_version__ = "143"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -76,13 +75,9 @@ class RecipeInfoCommon(object):
for task in tasks)
@classmethod
def flaglist(cls, flag, varlist, metadata, squash=False):
out_dict = dict((var, metadata.getVarFlag(var, flag, True))
def flaglist(cls, flag, varlist, metadata):
return dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
if squash:
return dict((k,v) for (k,v) in out_dict.iteritems() if v)
else:
return out_dict
@classmethod
def getvar(cls, var, metadata):
@@ -119,6 +114,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.file_depends = metadata.getVar('__depends', False)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
self.skipped = False
@@ -126,13 +122,11 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.pv = self.getvar('PV', metadata)
self.pr = self.getvar('PR', metadata)
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.broken = self.getvar('BROKEN', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_base_clean = self.flaglist('stamp-base-clean', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.depends = self.depvar('DEPENDS', metadata)
self.provides = self.depvar('PROVIDES', metadata)
@@ -157,11 +151,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_dp = {}
cachedata.stamp = {}
cachedata.stampclean = {}
cachedata.stamp_base = {}
cachedata.stamp_base_clean = {}
cachedata.stamp_extrainfo = {}
cachedata.file_checksums = {}
cachedata.fn_provides = {}
cachedata.pn_provides = defaultdict(list)
cachedata.all_depends = []
@@ -191,11 +182,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stampclean[fn] = self.stampclean
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_base_clean[fn] = self.stamp_base_clean
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
cachedata.file_checksums[fn] = self.file_checksums
provides = [self.pn]
for provide in self.provides:
@@ -232,7 +220,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
# Collect files we may need for possible world-dep
# calculations
if not self.not_world:
if not self.broken and not self.not_world:
cachedata.possible_world.append(fn)
# create a collection of all targets for sanity checking
@@ -403,12 +391,12 @@ class Cache(object):
"""Parse the specified filename, returning the recipe information"""
infos = []
datastores = cls.load_bbfile(filename, appends, configdata)
depends = []
depends = set()
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = cls.realfn2virtual(filename, variant)
depends = depends + (data.getVar("__depends", False) or [])
depends |= (data.getVar("__depends", False) or set())
if depends and not variant:
data.setVar("__depends", depends)
@@ -526,7 +514,7 @@ class Cache(object):
if appends != info_array[0].appends:
logger.debug(2, "Cache: appends for %s changed", fn)
logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
@@ -715,115 +703,4 @@ class CacheData(object):
for info in info_array:
info.add_cacheData(self, fn)
class MultiProcessCache(object):
"""
BitBake multi-process cache implementation
Used by the codeparser & file checksum caches
"""
def __init__(self):
self.cachefile = None
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir, self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
data, version = p.load()
except:
return
if version != self.__class__.CACHE_VERSION:
return
self.cachedata = data
def internSet(self, items):
new = set()
for i in items:
new.add(intern(i))
return new
def compress_keys(self, data):
# Override in subclasses if desired
return
def create_cachedata(self):
data = [{}]
return data
def save_extras(self, d):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
lf = bb.utils.lockfile(self.cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(self.cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
p = pickle.Pickler(file(self.cachefile + "-" + str(i), "wb"), -1)
p.dump([self.cachedata_extras, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def merge_data(self, source, dest):
for j in range(0,len(dest)):
for h in source[j]:
if h not in dest[j]:
dest[j][h] = source[j][h]
def save_merge(self, d):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != self.__class__.CACHE_VERSION:
data = self.create_cachedata()
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = self.create_cachedata(), None
if version != self.__class__.CACHE_VERSION:
continue
self.merge_data(extradata, data)
os.unlink(f)
self.compress_keys(data)
p = pickle.Pickler(file(self.cachefile, "wb"), -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
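A minimal sketch of how a MultiProcessCache subclass is intended to be used, based on the class above: each worker records new entries in cachedata_extras and calls save_extras(), and a single process later calls save_merge() to fold the per-PID files back into the main cache file. ExampleCache, its file name and the lookup helper are invented for illustration.
from bb.cache import MultiProcessCache
class ExampleCache(MultiProcessCache):
    # Hypothetical subclass; cache_file_name is an invented example name.
    cache_file_name = "example_cache.dat"
    CACHE_VERSION = 1
    def lookup(self, key, compute):
        entry = self.cachedata[0].get(key)
        if entry is None:
            entry = compute(key)
            # New values only go into this process's "extras" store.
            self.cachedata_extras[0][key] = entry
        return entry
# Expected flow (d is a BitBake datastore providing PERSISTENT_DIR or CACHE):
#   cache = ExampleCache()
#   cache.init_cache(d)
#   value = cache.lookup("some/file", expensive_function)
#   cache.save_extras(d)   # in each worker process
#   cache.save_merge(d)    # once, in the parent, to merge the per-PID files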

View File

@@ -41,10 +41,6 @@ class HobRecipeInfo(RecipeInfoCommon):
self.license = self.getvar('LICENSE', metadata)
self.section = self.getvar('SECTION', metadata)
self.description = self.getvar('DESCRIPTION', metadata)
self.homepage = self.getvar('HOMEPAGE', metadata)
self.bugtracker = self.getvar('BUGTRACKER', metadata)
self.prevision = self.getvar('PR', metadata)
self.files_info = self.getvar('FILES_INFO', metadata)
@classmethod
def init_cacheData(cls, cachedata):
@@ -53,17 +49,9 @@ class HobRecipeInfo(RecipeInfoCommon):
cachedata.license = {}
cachedata.section = {}
cachedata.description = {}
cachedata.homepage = {}
cachedata.bugtracker = {}
cachedata.prevision = {}
cachedata.files_info = {}
def add_cacheData(self, cachedata, fn):
cachedata.summary[fn] = self.summary
cachedata.license[fn] = self.license
cachedata.section[fn] = self.section
cachedata.description[fn] = self.description
cachedata.homepage[fn] = self.homepage
cachedata.bugtracker[fn] = self.bugtracker
cachedata.prevision[fn] = self.prevision
cachedata.files_info[fn] = self.files_info

View File

@@ -1,90 +0,0 @@
# Local file checksum cache implementation
#
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import stat
import bb.utils
import logging
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
cache = {}
def cached_mtime(self, f):
if f not in self.cache:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def cached_mtime_noerror(self, f):
if f not in self.cache:
try:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return self.cache[f]
def update_mtime(self, f):
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def clear(self):
self.cache.clear()
# Checksum + mtime cache (persistent)
class FileChecksumCache(MultiProcessCache):
cache_file_name = "local_file_checksum_cache.dat"
CACHE_VERSION = 1
def __init__(self):
self.mtime_cache = FileMtimeCache()
MultiProcessCache.__init__(self)
def get_checksum(self, f):
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
(mtime, hashval) = entry
if cmtime == mtime:
return hashval
else:
bb.debug(2, "file %s changed mtime, recompute checksum" % f)
hashval = bb.utils.md5_file(f)
self.cachedata_extras[0][f] = (cmtime, hashval)
return hashval
def merge_data(self, source, dest):
for h in source[0]:
if h in dest:
(smtime, _) = source[0][h]
(dmtime, _) = dest[0][h]
if smtime > dmtime:
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]

View File

@@ -5,10 +5,10 @@ import os.path
import bb.utils, bb.data
from itertools import chain
from pysh import pyshyacc, pyshlex, sherrors
from bb.cache import MultiProcessCache
logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
@@ -32,56 +32,133 @@ def check_indent(codestr):
return codestr
pythonparsecache = {}
shellparsecache = {}
pythonparsecacheextras = {}
shellparsecacheextras = {}
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
CACHE_VERSION = 3
def __init__(self):
MultiProcessCache.__init__(self)
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
self.pythoncacheextras = self.cachedata_extras[0]
self.shellcacheextras = self.cachedata_extras[1]
def init_cache(self, d):
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
def compress_keys(self, data):
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
# memory. This is worth any performance hit from these loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = self.internSet(data[0][h]["refs"])
data[0][h]["execs"] = self.internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = self.internSet(data[1][h]["execs"])
return
def create_cachedata(self):
data = [{}, {}]
return data
codeparsercache = CodeParserCache()
def parser_cachefile(d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
return cachefile
def parser_cache_init(d):
codeparsercache.init_cache(d)
global pythonparsecache
global shellparsecache
cachefile = parser_cachefile(d)
if not cachefile:
return
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except:
return
if version != PARSERCACHE_VERSION:
return
pythonparsecache = data[0]
shellparsecache = data[1]
def parser_cache_save(d):
codeparsercache.save_extras(d)
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
shellcache = {}
pythoncache = {}
lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
shellcache = shellparsecacheextras
pythoncache = pythonparsecacheextras
p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def internSet(items):
new = set()
for i in items:
new.add(intern(i))
return new
def parser_cache_savemerge(d):
codeparsercache.save_merge(d)
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock")
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != PARSERCACHE_VERSION:
data = [{}, {}]
for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
f = os.path.join(os.path.dirname(cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = [{}, {}], None
if version != PARSERCACHE_VERSION:
continue
for h in extradata[0]:
if h not in data[0]:
data[0][h] = extradata[0][h]
for h in extradata[1]:
if h not in data[1]:
data[1][h] = extradata[1][h]
os.unlink(f)
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
# memory. This is worth any performance hit from these loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = internSet(data[0][h]["refs"])
data[0][h]["execs"] = internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = internSet(data[1][h]["execs"])
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([data, PARSERCACHE_VERSION])
bb.utils.unlockfile(glf)
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -100,8 +177,7 @@ class BufferedLogger(Logger):
self.buffer = []
class PythonParser():
getvars = ("d.getVar", "bb.data.getVar", "data.getVar", "d.appendVar", "d.prependVar")
containsfuncs = ("bb.utils.contains", "base_contains", "oe.utils.contains")
getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
@@ -120,7 +196,7 @@ class PythonParser():
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name in self.getvars or name in self.containsfuncs:
if name in self.getvars:
if isinstance(node.args[0], ast.Str):
self.var_references.add(node.args[0].s)
else:
@@ -159,14 +235,14 @@ class PythonParser():
def parse_python(self, node):
h = hash(str(node))
if h in codeparsercache.pythoncache:
self.references = codeparsercache.pythoncache[h]["refs"]
self.execs = codeparsercache.pythoncache[h]["execs"]
if h in pythonparsecache:
self.references = pythonparsecache[h]["refs"]
self.execs = pythonparsecache[h]["execs"]
return
if h in codeparsercache.pythoncacheextras:
self.references = codeparsercache.pythoncacheextras[h]["refs"]
self.execs = codeparsercache.pythoncacheextras[h]["execs"]
if h in pythonparsecacheextras:
self.references = pythonparsecacheextras[h]["refs"]
self.execs = pythonparsecacheextras[h]["execs"]
return
@@ -180,9 +256,9 @@ class PythonParser():
self.references.update(self.var_references)
self.references.update(self.var_execs)
codeparsercache.pythoncacheextras[h] = {}
codeparsercache.pythoncacheextras[h]["refs"] = self.references
codeparsercache.pythoncacheextras[h]["execs"] = self.execs
pythonparsecacheextras[h] = {}
pythonparsecacheextras[h]["refs"] = self.references
pythonparsecacheextras[h]["execs"] = self.execs
class ShellParser():
def __init__(self, name, log):
@@ -200,12 +276,12 @@ class ShellParser():
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = codeparsercache.shellcache[h]["execs"]
if h in shellparsecache:
self.execs = shellparsecache[h]["execs"]
return self.execs
if h in codeparsercache.shellcacheextras:
self.execs = codeparsercache.shellcacheextras[h]["execs"]
if h in shellparsecacheextras:
self.execs = shellparsecacheextras[h]["execs"]
return self.execs
try:
@@ -217,8 +293,8 @@ class ShellParser():
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
codeparsercache.shellcacheextras[h] = {}
codeparsercache.shellcacheextras[h]["execs"] = self.execs
shellparsecacheextras[h] = {}
shellparsecacheextras[h]["execs"] = self.execs
return self.execs
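A small, self-contained illustration of why compress_keys() in this file interns the cached strings: unpickling recreates equal but distinct string objects, and intern() collapses them back to one shared object. This is generic Python, not BitBake code.
import pickle
try:
    from sys import intern      # Python 3
except ImportError:
    pass                        # Python 2 provides intern() as a builtin
a = pickle.loads(pickle.dumps("do_compile_ptest_base"))
b = pickle.loads(pickle.dumps("do_compile_ptest_base"))
print(a == b)                    # True: equal values
print(a is b)                    # typically False: two separate objects
print(intern(a) is intern(b))    # True: both now reference one interned object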

View File

@@ -171,21 +171,9 @@ class CommandsSync:
Set the value of variable in configuration.data
"""
varname = params[0]
value = str(params[1])
value = params[1]
command.cooker.configuration.data.setVar(varname, value)
def enableDataTracking(self, command, params):
"""
Enable history tracking for variables
"""
command.cooker.enableDataTracking()
def disableDataTracking(self, command, params):
"""
Disable history tracking for variables
"""
command.cooker.disableDataTracking()
def initCooker(self, command, params):
"""
Init the cooker to initial state with nothing parsed
@@ -205,21 +193,12 @@ class CommandsSync:
"""
return bb.utils.cpu_count()
def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
def generateNewImage(self, command, params):
image = params[0]
base_image = params[1]
package_queue = params[2]
return command.cooker.generateNewImage(image, base_image, package_queue)
def setVarFile(self, command, params):
var = params[0]
val = params[1]
default_file = params[2]
command.cooker.saveConfigurationVar(var, val, default_file)
def setConfFilter(self, command, params):
"""
Set the configuration file parsing filter
"""
filterfunc = params[0]
bb.parse.parse_py.ConfHandler.confFilters.append(filterfunc)
class CommandsAsync:
"""

View File

@@ -1,11 +1,5 @@
"""Code pulled from future python versions, here for compatibility"""
from collections import MutableMapping, KeysView, ValuesView, ItemsView
try:
from thread import get_ident as _get_ident
except ImportError:
from dummy_thread import get_ident as _get_ident
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
@@ -32,897 +26,3 @@ def total_ordering(cls):
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
class OrderedDict(dict):
'Dictionary that remembers insertion order'
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# Each link is stored as a list of length three: [PREV, NEXT, KEY].
def __init__(self, *args, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries, but keyword arguments are not recommended because
their insertion order is arbitrary.
'''
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
self.__root = root = [] # sentinel node
root[:] = [root, root, None]
self.__map = {}
self.__update(*args, **kwds)
def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__):
'od.__setitem__(i, y) <==> od[i]=y'
# Setting a new item creates a new link at the end of the linked list,
# and the inherited dictionary is updated with the new key/value pair.
if key not in self:
root = self.__root
last = root[PREV]
last[NEXT] = root[PREV] = self.__map[key] = [last, root, key]
dict_setitem(self, key, value)
def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__):
'od.__delitem__(y) <==> del od[y]'
# Deleting an existing item uses self.__map to find the link which gets
# removed by updating the links in the predecessor and successor nodes.
dict_delitem(self, key)
link_prev, link_next, key = self.__map.pop(key)
link_prev[NEXT] = link_next
link_next[PREV] = link_prev
def __iter__(self):
'od.__iter__() <==> iter(od)'
# Traverse the linked list in order.
NEXT, KEY = 1, 2
root = self.__root
curr = root[NEXT]
while curr is not root:
yield curr[KEY]
curr = curr[NEXT]
def __reversed__(self):
'od.__reversed__() <==> reversed(od)'
# Traverse the linked list in reverse order.
PREV, KEY = 0, 2
root = self.__root
curr = root[PREV]
while curr is not root:
yield curr[KEY]
curr = curr[PREV]
def clear(self):
'od.clear() -> None. Remove all items from od.'
for node in self.__map.itervalues():
del node[:]
root = self.__root
root[:] = [root, root, None]
self.__map.clear()
dict.clear(self)
# -- the following methods do not depend on the internal structure --
def keys(self):
'od.keys() -> list of keys in od'
return list(self)
def values(self):
'od.values() -> list of values in od'
return [self[key] for key in self]
def items(self):
'od.items() -> list of (key, value) pairs in od'
return [(key, self[key]) for key in self]
def iterkeys(self):
'od.iterkeys() -> an iterator over the keys in od'
return iter(self)
def itervalues(self):
'od.itervalues -> an iterator over the values in od'
for k in self:
yield self[k]
def iteritems(self):
'od.iteritems -> an iterator over the (key, value) pairs in od'
for k in self:
yield (k, self[k])
update = MutableMapping.update
__update = update # let subclasses override update without breaking __init__
__marker = object()
def pop(self, key, default=__marker):
'''od.pop(k[,d]) -> v, remove specified key and return the corresponding
value. If key is not found, d is returned if given, otherwise KeyError
is raised.
'''
if key in self:
result = self[key]
del self[key]
return result
if default is self.__marker:
raise KeyError(key)
return default
def setdefault(self, key, default=None):
'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
if key in self:
return self[key]
self[key] = default
return default
def popitem(self, last=True):
'''od.popitem() -> (k, v), return and remove a (key, value) pair.
Pairs are returned in LIFO order if last is true or FIFO order if false.
'''
if not self:
raise KeyError('dictionary is empty')
key = next(reversed(self) if last else iter(self))
value = self.pop(key)
return key, value
def __repr__(self, _repr_running={}):
'od.__repr__() <==> repr(od)'
call_key = id(self), _get_ident()
if call_key in _repr_running:
return '...'
_repr_running[call_key] = 1
try:
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
finally:
del _repr_running[call_key]
def __reduce__(self):
'Return state information for pickling'
items = [[k, self[k]] for k in self]
inst_dict = vars(self).copy()
for k in vars(OrderedDict()):
inst_dict.pop(k, None)
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def copy(self):
'od.copy() -> a shallow copy of od'
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
'''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
If not specified, the value defaults to None.
'''
self = cls()
for key in iterable:
self[key] = value
return self
def __eq__(self, other):
'''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
while comparison to a regular mapping is order-insensitive.
'''
if isinstance(other, OrderedDict):
return len(self)==len(other) and self.items() == other.items()
return dict.__eq__(self, other)
def __ne__(self, other):
'od.__ne__(y) <==> od!=y'
return not self == other
# -- the following methods support python 3.x style dictionary views --
def viewkeys(self):
"od.viewkeys() -> a set-like object providing a view on od's keys"
return KeysView(self)
def viewvalues(self):
"od.viewvalues() -> an object providing a view on od's values"
return ValuesView(self)
def viewitems(self):
"od.viewitems() -> a set-like object providing a view on od's items"
return ItemsView(self)
# Multiprocessing pool code imported from python 2.7.3. Previous versions of
# python have issues in this code which hang pool usage
#
# Module providing the `Pool` class for managing a process pool
#
# multiprocessing/pool.py
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of author nor the names of any contributors may be
# used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
import threading
import Queue
import itertools
import collections
import time
import multiprocessing
from multiprocessing import Process, cpu_count, TimeoutError, pool
from multiprocessing.util import Finalize, debug
#
# Constants representing the state of a pool
#
RUN = 0
CLOSE = 1
TERMINATE = 2
#
# Miscellaneous
#
def mapstar(args):
return map(*args)
class MaybeEncodingError(Exception):
"""Wraps possible unpickleable errors, so they can be
safely sent through the socket."""
def __init__(self, exc, value):
self.exc = repr(exc)
self.value = repr(value)
super(MaybeEncodingError, self).__init__(self.exc, self.value)
def __str__(self):
return "Error sending result: '%s'. Reason: '%s'" % (self.value,
self.exc)
def __repr__(self):
return "<MaybeEncodingError: %s>" % str(self)
def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
put = outqueue.put
get = inqueue.get
if hasattr(inqueue, '_writer'):
inqueue._writer.close()
outqueue._reader.close()
if initializer is not None:
initializer(*initargs)
completed = 0
while maxtasks is None or (maxtasks and completed < maxtasks):
try:
task = get()
except (EOFError, IOError):
debug('worker got EOFError or IOError -- exiting')
break
if task is None:
debug('worker got sentinel -- exiting')
break
job, i, func, args, kwds = task
try:
result = (True, func(*args, **kwds))
except Exception, e:
result = (False, e)
try:
put((job, i, result))
except Exception as e:
wrapped = MaybeEncodingError(e, result[1])
debug("Possible encoding error while sending result: %s" % (
wrapped))
put((job, i, (False, wrapped)))
completed += 1
debug('worker exiting after %d tasks' % completed)
class Pool(object):
'''
Class which supports an async version of the `apply()` builtin
'''
Process = Process
def __init__(self, processes=None, initializer=None, initargs=(),
maxtasksperchild=None):
self._setup_queues()
self._taskqueue = Queue.Queue()
self._cache = {}
self._state = RUN
self._maxtasksperchild = maxtasksperchild
self._initializer = initializer
self._initargs = initargs
if processes is None:
try:
processes = cpu_count()
except NotImplementedError:
processes = 1
if processes < 1:
raise ValueError("Number of processes must be at least 1")
if initializer is not None and not hasattr(initializer, '__call__'):
raise TypeError('initializer must be a callable')
self._processes = processes
self._pool = []
self._repopulate_pool()
self._worker_handler = threading.Thread(
target=Pool._handle_workers,
args=(self, )
)
self._worker_handler.daemon = True
self._worker_handler._state = RUN
self._worker_handler.start()
self._task_handler = threading.Thread(
target=Pool._handle_tasks,
args=(self._taskqueue, self._quick_put, self._outqueue, self._pool)
)
self._task_handler.daemon = True
self._task_handler._state = RUN
self._task_handler.start()
self._result_handler = threading.Thread(
target=Pool._handle_results,
args=(self._outqueue, self._quick_get, self._cache)
)
self._result_handler.daemon = True
self._result_handler._state = RUN
self._result_handler.start()
self._terminate = Finalize(
self, self._terminate_pool,
args=(self._taskqueue, self._inqueue, self._outqueue, self._pool,
self._worker_handler, self._task_handler,
self._result_handler, self._cache),
exitpriority=15
)
def _join_exited_workers(self):
"""Cleanup after any worker processes which have exited due to reaching
their specified lifetime. Returns True if any workers were cleaned up.
"""
cleaned = False
for i in reversed(range(len(self._pool))):
worker = self._pool[i]
if worker.exitcode is not None:
# worker exited
debug('cleaning up worker %d' % i)
worker.join()
cleaned = True
del self._pool[i]
return cleaned
def _repopulate_pool(self):
"""Bring the number of pool processes up to the specified number,
for use after reaping workers which have exited.
"""
for i in range(self._processes - len(self._pool)):
w = self.Process(target=worker,
args=(self._inqueue, self._outqueue,
self._initializer,
self._initargs, self._maxtasksperchild)
)
self._pool.append(w)
w.name = w.name.replace('Process', 'PoolWorker')
w.daemon = True
w.start()
debug('added worker')
def _maintain_pool(self):
"""Clean up any exited workers and start replacements for them.
"""
if self._join_exited_workers():
self._repopulate_pool()
def _setup_queues(self):
from multiprocessing.queues import SimpleQueue
self._inqueue = SimpleQueue()
self._outqueue = SimpleQueue()
self._quick_put = self._inqueue._writer.send
self._quick_get = self._outqueue._reader.recv
def apply(self, func, args=(), kwds={}):
'''
Equivalent of `apply()` builtin
'''
assert self._state == RUN
return self.apply_async(func, args, kwds).get()
def map(self, func, iterable, chunksize=None):
'''
Equivalent of `map()` builtin
'''
assert self._state == RUN
return self.map_async(func, iterable, chunksize).get()
def imap(self, func, iterable, chunksize=1):
'''
Equivalent of `itertools.imap()` -- can be MUCH slower than `Pool.map()`
'''
assert self._state == RUN
if chunksize == 1:
result = IMapIterator(self._cache)
self._taskqueue.put((((result._job, i, func, (x,), {})
for i, x in enumerate(iterable)), result._set_length))
return result
else:
assert chunksize > 1
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = IMapIterator(self._cache)
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
for i, x in enumerate(task_batches)), result._set_length))
return (item for chunk in result for item in chunk)
def imap_unordered(self, func, iterable, chunksize=1):
'''
Like `imap()` method but ordering of results is arbitrary
'''
assert self._state == RUN
if chunksize == 1:
result = IMapUnorderedIterator(self._cache)
self._taskqueue.put((((result._job, i, func, (x,), {})
for i, x in enumerate(iterable)), result._set_length))
return result
else:
assert chunksize > 1
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = IMapUnorderedIterator(self._cache)
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
for i, x in enumerate(task_batches)), result._set_length))
return (item for chunk in result for item in chunk)
def apply_async(self, func, args=(), kwds={}, callback=None):
'''
Asynchronous equivalent of `apply()` builtin
'''
assert self._state == RUN
result = ApplyResult(self._cache, callback)
self._taskqueue.put(([(result._job, None, func, args, kwds)], None))
return result
def map_async(self, func, iterable, chunksize=None, callback=None):
'''
Asynchronous equivalent of `map()` builtin
'''
assert self._state == RUN
if not hasattr(iterable, '__len__'):
iterable = list(iterable)
if chunksize is None:
chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
if extra:
chunksize += 1
if len(iterable) == 0:
chunksize = 0
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = MapResult(self._cache, chunksize, len(iterable), callback)
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
for i, x in enumerate(task_batches)), None))
return result
@staticmethod
def _handle_workers(pool):
thread = threading.current_thread()
# Keep maintaining workers until the cache gets drained, unless the pool
# is terminated.
while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
pool._maintain_pool()
time.sleep(0.1)
# send sentinel to stop workers
pool._taskqueue.put(None)
debug('worker handler exiting')
@staticmethod
def _handle_tasks(taskqueue, put, outqueue, pool):
thread = threading.current_thread()
for taskseq, set_length in iter(taskqueue.get, None):
i = -1
for i, task in enumerate(taskseq):
if thread._state:
debug('task handler found thread._state != RUN')
break
try:
put(task)
except IOError:
debug('could not put task on queue')
break
else:
if set_length:
debug('doing set_length()')
set_length(i+1)
continue
break
else:
debug('task handler got sentinel')
try:
# tell result handler to finish when cache is empty
debug('task handler sending sentinel to result handler')
outqueue.put(None)
# tell workers there is no more work
debug('task handler sending sentinel to workers')
for p in pool:
put(None)
except IOError:
debug('task handler got IOError when sending sentinels')
debug('task handler exiting')
@staticmethod
def _handle_results(outqueue, get, cache):
thread = threading.current_thread()
while 1:
try:
task = get()
except (IOError, EOFError):
debug('result handler got EOFError/IOError -- exiting')
return
if thread._state:
assert thread._state == TERMINATE
debug('result handler found thread._state=TERMINATE')
break
if task is None:
debug('result handler got sentinel')
break
job, i, obj = task
try:
cache[job]._set(i, obj)
except KeyError:
pass
while cache and thread._state != TERMINATE:
try:
task = get()
except (IOError, EOFError):
debug('result handler got EOFError/IOError -- exiting')
return
if task is None:
debug('result handler ignoring extra sentinel')
continue
job, i, obj = task
try:
cache[job]._set(i, obj)
except KeyError:
pass
if hasattr(outqueue, '_reader'):
debug('ensuring that outqueue is not full')
# If we don't make room available in outqueue then
# attempts to add the sentinel (None) to outqueue may
# block. There is guaranteed to be no more than 2 sentinels.
try:
for i in range(10):
if not outqueue._reader.poll():
break
get()
except (IOError, EOFError):
pass
debug('result handler exiting: len(cache)=%s, thread._state=%s',
len(cache), thread._state)
@staticmethod
def _get_tasks(func, it, size):
it = iter(it)
while 1:
x = tuple(itertools.islice(it, size))
if not x:
return
yield (func, x)
def __reduce__(self):
raise NotImplementedError(
'pool objects cannot be passed between processes or pickled'
)
def close(self):
debug('closing pool')
if self._state == RUN:
self._state = CLOSE
self._worker_handler._state = CLOSE
def terminate(self):
debug('terminating pool')
self._state = TERMINATE
self._worker_handler._state = TERMINATE
self._terminate()
def join(self):
debug('joining pool')
assert self._state in (CLOSE, TERMINATE)
self._worker_handler.join()
self._task_handler.join()
self._result_handler.join()
for p in self._pool:
p.join()
@staticmethod
def _help_stuff_finish(inqueue, task_handler, size):
# task_handler may be blocked trying to put items on inqueue
debug('removing tasks from inqueue until task handler finished')
inqueue._rlock.acquire()
while task_handler.is_alive() and inqueue._reader.poll():
inqueue._reader.recv()
time.sleep(0)
@classmethod
def _terminate_pool(cls, taskqueue, inqueue, outqueue, pool,
worker_handler, task_handler, result_handler, cache):
# this is guaranteed to only be called once
debug('finalizing pool')
worker_handler._state = TERMINATE
task_handler._state = TERMINATE
debug('helping task handler/workers to finish')
cls._help_stuff_finish(inqueue, task_handler, len(pool))
assert result_handler.is_alive() or len(cache) == 0
result_handler._state = TERMINATE
outqueue.put(None) # sentinel
# We must wait for the worker handler to exit before terminating
# workers because we don't want workers to be restarted behind our back.
debug('joining worker handler')
if threading.current_thread() is not worker_handler:
worker_handler.join(1e100)
# Terminate workers which haven't already finished.
if pool and hasattr(pool[0], 'terminate'):
debug('terminating workers')
for p in pool:
if p.exitcode is None:
p.terminate()
debug('joining task handler')
if threading.current_thread() is not task_handler:
task_handler.join(1e100)
debug('joining result handler')
if threading.current_thread() is not result_handler:
result_handler.join(1e100)
if pool and hasattr(pool[0], 'terminate'):
debug('joining pool workers')
for p in pool:
if p.is_alive():
# worker has not yet exited
debug('cleaning up worker %d' % p.pid)
p.join()
class ApplyResult(object):
def __init__(self, cache, callback):
self._cond = threading.Condition(threading.Lock())
self._job = multiprocessing.pool.job_counter.next()
self._cache = cache
self._ready = False
self._callback = callback
cache[self._job] = self
def ready(self):
return self._ready
def successful(self):
assert self._ready
return self._success
def wait(self, timeout=None):
self._cond.acquire()
try:
if not self._ready:
self._cond.wait(timeout)
finally:
self._cond.release()
def get(self, timeout=None):
self.wait(timeout)
if not self._ready:
raise TimeoutError
if self._success:
return self._value
else:
raise self._value
def _set(self, i, obj):
self._success, self._value = obj
if self._callback and self._success:
self._callback(self._value)
self._cond.acquire()
try:
self._ready = True
self._cond.notify()
finally:
self._cond.release()
del self._cache[self._job]
#
# Class whose instances are returned by `Pool.map_async()`
#
class MapResult(ApplyResult):
def __init__(self, cache, chunksize, length, callback):
ApplyResult.__init__(self, cache, callback)
self._success = True
self._value = [None] * length
self._chunksize = chunksize
if chunksize <= 0:
self._number_left = 0
self._ready = True
del cache[self._job]
else:
self._number_left = length//chunksize + bool(length % chunksize)
def _set(self, i, success_result):
success, result = success_result
if success:
self._value[i*self._chunksize:(i+1)*self._chunksize] = result
self._number_left -= 1
if self._number_left == 0:
if self._callback:
self._callback(self._value)
del self._cache[self._job]
self._cond.acquire()
try:
self._ready = True
self._cond.notify()
finally:
self._cond.release()
else:
self._success = False
self._value = result
del self._cache[self._job]
self._cond.acquire()
try:
self._ready = True
self._cond.notify()
finally:
self._cond.release()
#
# Class whose instances are returned by `Pool.imap()`
#
class IMapIterator(object):
def __init__(self, cache):
self._cond = threading.Condition(threading.Lock())
self._job = multiprocessing.pool.job_counter.next()
self._cache = cache
self._items = collections.deque()
self._index = 0
self._length = None
self._unsorted = {}
cache[self._job] = self
def __iter__(self):
return self
def next(self, timeout=None):
self._cond.acquire()
try:
try:
item = self._items.popleft()
except IndexError:
if self._index == self._length:
raise StopIteration
self._cond.wait(timeout)
try:
item = self._items.popleft()
except IndexError:
if self._index == self._length:
raise StopIteration
raise TimeoutError
finally:
self._cond.release()
success, value = item
if success:
return value
raise value
__next__ = next # XXX
def _set(self, i, obj):
self._cond.acquire()
try:
if self._index == i:
self._items.append(obj)
self._index += 1
while self._index in self._unsorted:
obj = self._unsorted.pop(self._index)
self._items.append(obj)
self._index += 1
self._cond.notify()
else:
self._unsorted[i] = obj
if self._index == self._length:
del self._cache[self._job]
finally:
self._cond.release()
def _set_length(self, length):
self._cond.acquire()
try:
self._length = length
if self._index == self._length:
self._cond.notify()
del self._cache[self._job]
finally:
self._cond.release()
#
# Class whose instances are returned by `Pool.imap_unordered()`
#
class IMapUnorderedIterator(IMapIterator):
def _set(self, i, obj):
self._cond.acquire()
try:
self._items.append(obj)
self._index += 1
self._cond.notify()
if self._index == self._length:
del self._cache[self._job]
finally:
self._cond.release()

View File

@@ -158,7 +158,6 @@ class BBCooker:
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
bb.parse.init_parser(self.configuration.event_data)
# TOSTOP must not be set or our children will hang when they output
fd = sys.stdout.fileno()
@@ -177,24 +176,21 @@ class BBCooker:
def initConfigurationData(self):
self.configuration.data = bb.data.init()
if self.configuration.show_environment:
self.configuration.data.enableTracking()
if not self.server_registration_cb:
self.configuration.data.setVar("BB_WORKERCONTEXT", "1")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.configuration.data, self.savedenv, filtered_keys)
self.configuration.data.setVar("BB_ORIGENV", self.savedenv)
def enableDataTracking(self):
self.configuration.data.enableTracking()
def disableDataTracking(self):
self.configuration.data.disableTracking()
def loadConfigurationData(self):
self.initConfigurationData()
self.configuration.data = bb.data.init()
if not self.server_registration_cb:
self.configuration.data.setVar("BB_WORKERCONTEXT", "1")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.configuration.data, self.savedenv, filtered_keys)
try:
self.parseConfigurationFiles(self.configuration.prefile,
@@ -208,89 +204,6 @@ class BBCooker:
if not self.configuration.cmd:
self.configuration.cmd = self.configuration.data.getVar("BB_DEFAULT_TASK", True) or "build"
def saveConfigurationVar(self, var, val, default_file):
replaced = False
#do not save if nothing changed
if str(val) == self.configuration.data.getVar(var):
return
conf_files = self.configuration.data.varhistory.get_variable_files(var)
#format the value when it is a list
if isinstance(val, list):
listval = ""
for value in val:
listval += "%s " % value
val = listval
topdir = self.configuration.data.getVar("TOPDIR")
#comment or replace operations made on var
for conf_file in conf_files:
if topdir in conf_file:
with open(conf_file, 'r') as f:
contents = f.readlines()
f.close()
lines = self.configuration.data.varhistory.get_variable_lines(var, conf_file)
for line in lines:
total = ""
i = 0
for c in contents:
total += c
i = i + 1
if i==int(line):
end_index = len(total)
index = total.rfind(var, 0, end_index)
begin_line = total.count("\n",0,index)
end_line = int(line)
#check if the variable was saved before in the same way
#if true it replaces the place where the variable was declared
#else it comments it out
if contents[begin_line-1]== "#added by bitbake\n":
contents[begin_line] = "%s = \"%s\"\n" % (var, val)
replaced = True
else:
for ii in range(begin_line, end_line):
contents[ii] = "#" + contents[ii]
total = ""
for c in contents:
total += c
with open(conf_file, 'w') as f:
f.write(total)
f.close()
if replaced == False:
#remove var from history
self.configuration.data.varhistory.del_var_history(var)
#add var to the end of default_file
default_file = self._findConfigFile(default_file)
with open(default_file, 'r') as f:
contents = f.readlines()
f.close()
total = ""
for c in contents:
total += c
#add the variable on a single line, so it is easy to replace next time
total += "#added by bitbake"
total += "\n%s = \"%s\"\n" % (var, val)
with open(default_file, 'w') as f:
f.write(total)
f.close()
#add to history
loginfo = {"op":set, "file":default_file, "line":total.count("\n")}
self.configuration.data.setVar(var, val, **loginfo)
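# Illustrative sketch (hypothetical file names and values, not from the diff):
# the comment-or-replace behaviour above works roughly like this.
# Before, conf/local.conf contains a hand-written assignment:
#     IMAGE_INSTALL_append = " dropbear"
# After saveConfigurationVar("IMAGE_INSTALL_append", " openssh", "local.conf"),
# the original line is commented out and the new value is appended to the
# default file behind an "#added by bitbake" marker so it can be replaced in
# place on subsequent saves:
#     #IMAGE_INSTALL_append = " dropbear"
#     #added by bitbake
#     IMAGE_INSTALL_append = " openssh"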
def parseConfiguration(self):
# Set log file verbosity
@@ -305,12 +218,6 @@ class BBCooker:
nice = int(nice) - curnice
buildlog.verbose("Renice to %s " % os.nice(nice))
if self.status:
del self.status
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
def parseCommandLine(self):
# Parse any commandline into actions
self.commandlineAction = {'action':None, 'msg':None}
@@ -325,10 +232,8 @@ class BBCooker:
self.commandlineAction['msg'] = "No target should be used with the --environment and --buildfile options."
elif len(self.configuration.pkgs_to_build) > 0:
self.commandlineAction['action'] = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
self.configuration.data.setVar("BB_CONSOLELOG", None)
else:
self.commandlineAction['action'] = ["showEnvironment", self.configuration.buildfile]
self.configuration.data.setVar("BB_CONSOLELOG", None)
elif self.configuration.buildfile is not None:
self.commandlineAction['action'] = ["buildFile", self.configuration.buildfile, self.configuration.cmd]
elif self.configuration.revisions_changed:
@@ -366,8 +271,8 @@ class BBCooker:
pkg_pn = self.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.configuration.data, self.status, pkg_pn)
logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")
logger.plain("%-35s %25s %25s", "Package Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "============", "==============", "=================")
for p in sorted(pkg_pn):
pref = preferred_versions[p]
@@ -392,6 +297,8 @@ class BBCooker:
# Parse the configuration here. We need to do it explicitly here since
# this showEnvironment() code path doesn't use the cache
self.parseConfiguration()
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn)
@@ -399,10 +306,6 @@ class BBCooker:
elif len(pkgs_to_build) == 1:
self.updateCache()
ignore = self.configuration.data.getVar("ASSUME_PROVIDED", True) or ""
if pkgs_to_build[0] in set(ignore.split()):
bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
@@ -424,11 +327,6 @@ class BBCooker:
parselog.exception("Unable to read %s", fn)
raise
# Display history
with closing(StringIO()) as env:
self.configuration.data.inchistory.emit(env)
logger.plain(env.getvalue())
# emit variables and shell functions
data.update_data(envdata)
with closing(StringIO()) as env:
@@ -471,8 +369,6 @@ class BBCooker:
taskdata.add_unresolved(localdata, self.status)
bb.event.fire(bb.event.TreeDataPreparationCompleted(len(pkgs_to_build)), self.configuration.data)
return runlist, taskdata
######## WARNING : this function requires cache_extra to be enabled ########
def generateTaskDepTreeData(self, pkgs_to_build, task):
"""
@@ -546,7 +442,6 @@ class BBCooker:
return depend_tree
######## WARNING : this function requires cache_extra to be enabled ########
def generatePkgDepTreeData(self, pkgs_to_build, task):
"""
Create a dependency tree of pkgs_to_build, returning the data.
@@ -574,12 +469,8 @@ class BBCooker:
lic = self.status.license[fn]
section = self.status.section[fn]
description = self.status.description[fn]
homepage = self.status.homepage[fn]
bugtracker = self.status.bugtracker[fn]
files_info = self.status.files_info[fn]
rdepends = self.status.rundeps[fn]
rrecs = self.status.runrecs[fn]
prevision = self.status.prevision[fn]
inherits = self.status.inherits.get(fn, None)
if pn not in depend_tree["pn"]:
depend_tree["pn"][pn] = {}
@@ -590,10 +481,6 @@ class BBCooker:
depend_tree["pn"][pn]["section"] = section
depend_tree["pn"][pn]["description"] = description
depend_tree["pn"][pn]["inherits"] = inherits
depend_tree["pn"][pn]["homepage"] = homepage
depend_tree["pn"][pn]["bugtracker"] = bugtracker
depend_tree["pn"][pn]["files_info"] = files_info
depend_tree["pn"][pn]["revision"] = prevision
if fnid not in seen_fnids:
seen_fnids.append(fnid)
@@ -647,15 +534,11 @@ class BBCooker:
# Prints a flattened form of package-depends below where subpackages of a package are merged into the main pn
depends_file = file('pn-depends.dot', 'w' )
buildlist_file = file('pn-buildlist', 'w' )
print("digraph depends {", file=depends_file)
for pn in depgraph["pn"]:
fn = depgraph["pn"][pn]["filename"]
version = depgraph["pn"][pn]["version"]
print('"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn), file=depends_file)
print("%s" % pn, file=buildlist_file)
buildlist_file.close()
logger.info("PN build list saved to 'pn-buildlist'")
for pn in depgraph["depends"]:
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s"' % (pn, depend), file=depends_file)
@@ -750,8 +633,7 @@ class BBCooker:
# Calculate priorities for each file
matched = set()
for p in self.status.pkg_fn:
realfn, cls = bb.cache.Cache.virtualfn2realfn(p)
self.status.bbfile_priority[p] = self.calc_bbfile_priority(realfn, matched)
self.status.bbfile_priority[p] = self.calc_bbfile_priority(p, matched)
# Don't show the warning if the BBFILE_PATTERN did match .bbappend files
unmatched = set()
@@ -798,8 +680,8 @@ class BBCooker:
# Generate a list of parsed configuration files by searching the files
# listed in the __depends and __base_depends variables with a .conf suffix.
conffiles = []
dep_files = self.configuration.data.getVar('__base_depends') or []
dep_files = dep_files + (self.configuration.data.getVar('__depends') or [])
dep_files = self.configuration.data.getVar('__depends') or set()
dep_files.union(self.configuration.data.getVar('__base_depends') or set())
for f in dep_files:
if f[0].endswith(".conf"):
@@ -988,16 +870,10 @@ class BBCooker:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
bb.event.fire(bb.event.ConfigParsed(), data)
if data.getVar("BB_INVALIDCONF") is True:
data.setVar("BB_INVALIDCONF", False)
self.parseConfigurationFiles(self.configuration.prefile,
self.configuration.postfile)
else:
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.configuration.data = data
self.configuration.data_hash = data.get_hash()
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.configuration.data = data
self.configuration.data_hash = data.get_hash()
def handleCollections( self, collections ):
"""Handle collections"""
@@ -1053,13 +929,13 @@ class BBCooker:
errors = True
continue
if lver <> depver:
parselog.error("Layer '%s' depends on version %d of layer '%s', but version %d is enabled in your configuration", c, depver, dep, lver)
parselog.error("Layer dependency %s of layer %s is at version %d, expected %d", dep, c, lver, depver)
errors = True
else:
parselog.error("Layer '%s' depends on version %d of layer '%s', which exists in your configuration but does not specify a version", c, depver, dep)
parselog.error("Layer dependency %s of layer %s has no version, expected %d", dep, c, depver)
errors = True
else:
parselog.error("Layer '%s' depends on layer '%s', but this layer is not enabled in your configuration", c, dep)
parselog.error("Layer dependency %s of layer %s not found", dep, c)
errors = True
collection_depends[c] = depnamelist
else:
@@ -1109,12 +985,12 @@ class BBCooker:
"""
Find the .bb files which match the expression in 'buildfile'.
"""
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)
filelist, masked = self.collect_bbfiles()
try:
os.stat(bf)
bf = os.path.abspath(bf)
return [bf]
except OSError:
regexp = re.compile(bf)
@@ -1154,6 +1030,8 @@ class BBCooker:
# Parse the configuration here. We need to do it explicitly here since
# buildFile() doesn't use the cache
self.parseConfiguration()
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
# If we are told to do the None task then query the default task
if (task == None):
@@ -1175,10 +1053,6 @@ class BBCooker:
info_array = infos[fn]
except KeyError:
bb.fatal("%s does not exist" % fn)
if info_array[0].skipped:
bb.fatal("%s was skipped: %s" % (fn, info_array[0].skipreason))
self.status.add_from_recipeinfo(fn, info_array)
# Tweak some variables
@@ -1296,25 +1170,6 @@ class BBCooker:
self.server_registration_cb(buildTargetsIdle, rq)
def generateNewImage(self, image, base_image, package_queue):
'''
Create a new image with a "require" base_image statement
'''
image_name = os.path.splitext(image)[0]
timestr = time.strftime("-%Y%m%d-%H%M%S")
dest = image_name + str(timestr) + ".bb"
with open(dest, "w") as imagefile:
imagefile.write("require " + base_image + "\n")
package_install = "PACKAGE_INSTALL_forcevariable = \""
for package in package_queue:
package_install += str(package) + " "
package_install += "\"\n"
imagefile.write(package_install)
self.state = state.initial
return timestr
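# Illustrative sketch (hypothetical names, not from the diff): calling
# generateNewImage("core-image-minimal.bb", "core-image-base.bb", ["nano", "htop"])
# writes core-image-minimal-<timestamp>.bb containing roughly:
#     require core-image-base.bb
#     PACKAGE_INSTALL_forcevariable = "nano htop "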
def updateCache(self):
if self.state == state.running:
return
@@ -1326,12 +1181,18 @@ class BBCooker:
if self.state != state.parsing:
self.parseConfiguration ()
if self.status:
del self.status
self.status = bb.cache.CacheData(self.caches_array)
ignore = self.configuration.data.getVar("ASSUME_PROVIDED", True) or ""
self.status.ignored_dependencies = set(ignore.split())
for dep in self.configuration.extra_assume_provided:
self.status.ignored_dependencies.add(dep)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
(filelist, masked) = self.collect_bbfiles()
self.configuration.data.renameVar("__depends", "__base_depends")
@@ -1340,8 +1201,6 @@ class BBCooker:
if not self.parser.parse_next():
collectlog.debug(1, "parsing complete")
if self.parser.error:
sys.exit(1)
self.show_appends_with_no_recipes()
self.buildDepgraph()
self.state = state.running
@@ -1486,10 +1345,7 @@ class BBCooker:
# Empty the environment. The environment will be populated as
# necessary from the data store.
#bb.utils.empty_environment()
try:
prserv.serv.auto_start(self.configuration.data)
except prserv.serv.PRServiceConfigError:
bb.event.fire(CookerExit(), self.configuration.event_data)
prserv.serv.auto_start(self.configuration.data)
return
def post_serve(self):
@@ -1526,7 +1382,25 @@ def server_main(cooker, func, *args):
ret = profile.Profile.runcall(prof, func, *args)
prof.dump_stats("profile.log")
bb.utils.process_profilelog("profile.log")
# Redirect stdout to capture profile information
pout = open('profile.log.processed', 'w')
so = sys.stdout.fileno()
orig_so = os.dup(sys.stdout.fileno())
os.dup2(pout.fileno(), so)
import pstats
p = pstats.Stats('profile.log')
p.sort_stats('time')
p.print_stats()
p.print_callers()
p.sort_stats('cumulative')
p.print_stats()
os.dup2(orig_so, so)
pout.flush()
pout.close()
print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
else:
@@ -1606,7 +1480,6 @@ class Parser(multiprocessing.Process):
self.quit = quit
self.init = init
multiprocessing.Process.__init__(self)
self.context = bb.utils._context.copy()
def run(self):
if self.init:
@@ -1641,7 +1514,6 @@ class Parser(multiprocessing.Process):
def parse(self, filename, appends, caches_array):
try:
bb.utils._context = self.context.copy()
return True, bb.cache.Cache.parse(filename, appends, self.cfg, caches_array)
except Exception as exc:
tb = sys.exc_info()[2]
@@ -1698,7 +1570,6 @@ class CookerParser(object):
def init():
Parser.cfg = self.cfgdata
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, args=(self.cfgdata,), exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, args=(self.cfgdata,), exitpriority=1)
self.feeder_quit = multiprocessing.Queue(maxsize=1)
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
@@ -1725,7 +1596,6 @@ class CookerParser(object):
self.skipped, self.masked,
self.virtuals, self.error,
self.total)
bb.event.fire(event, self.cfgdata)
self.feeder_quit.put(None)
for process in self.processes:
@@ -1751,7 +1621,6 @@ class CookerParser(object):
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge(self.cooker.configuration.data)
bb.fetch.fetcher_parse_done(self.cooker.configuration.data)
def load_cached(self):
for filename, appends in self.fromcache:
@@ -1782,45 +1651,21 @@ class CookerParser(object):
except StopIteration:
self.shutdown()
return False
except bb.BBHandledException as exc:
self.error += 1
logger.error('Failed to parse recipe: %s' % exc.recipe)
self.shutdown(clean=False)
return False
except ParsingFailure as exc:
self.error += 1
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
self.error += 1
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
logger.error(str(exc))
self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
_, value, _ = sys.exc_info()
logger.error('ExpansionError during parsing %s: %s', value.recipe, str(exc))
self.shutdown(clean=False)
return False
except SyntaxError as exc:
self.error += 1
logger.error('Unable to parse %s', exc.recipe)
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
etype, value, tb = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
else:
# Most likely, an exception occurred during raising an exception
import traceback
logger.error('Exception during parse: %s' % traceback.format_exc())
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
self.shutdown(clean=False)
return False
self.current += 1
self.virtuals += len(result)

View File

@@ -158,12 +158,7 @@ def expandKeys(alterdata, readdata = None):
for key in todolist:
ekey = todolist[key]
newval = alterdata.getVar(ekey, 0)
if newval:
val = alterdata.getVar(key, 0)
if val is not None and newval is not None:
bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
alterdata.renameVar(key, ekey)
renameVar(key, ekey, alterdata)
def inheritFromOS(d, savedenv, permitted):
"""Inherit variables from the initial environment."""
@@ -171,9 +166,9 @@ def inheritFromOS(d, savedenv, permitted):
for s in savedenv.keys():
if s in permitted:
try:
d.setVar(s, getVar(s, savedenv, True), op = 'from env')
setVar(s, getVar(s, savedenv, True), d)
if s in exportlist:
d.setVarFlag(s, "export", True, op = 'auto env export')
setVarFlag(s, "export", True, d)
except TypeError:
pass
@@ -199,7 +194,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
return 0
if all:
d.varhistory.emit(var, oval, val, o)
commentVal = re.sub('\n', '\n#', str(oval))
o.write('# %s=%s\n' % (var, commentVal))
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return 0
@@ -264,7 +260,6 @@ def emit_func(func, o=sys.__stdout__, d = init()):
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
@@ -278,26 +273,19 @@ def emit_func(func, o=sys.__stdout__, d = init()):
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
d.finalize()
def build_dependencies(key, keys, shelldeps, vardepvals, d):
deps = set()
vardeps = d.getVarFlag(key, "vardeps", True)
try:
if key[-1] == ']':
vf = key[:-1].split('[')
value = d.getVarFlag(vf[0], vf[1], False)
else:
value = d.getVar(key, False)
value = d.getVar(key, False)
if key in vardepvals:
value = d.getVarFlag(key, "vardepvalue", True)
elif d.getVarFlag(key, "func"):
if d.getVarFlag(key, "python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
if parsedvar.value and "\t" in parsedvar.value:
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
else:
@@ -313,23 +301,11 @@ def build_dependencies(key, keys, shelldeps, vardepvals, d):
parser = d.expandWithRefs(value, key)
deps |= parser.references
deps = deps | (keys & parser.execs)
# Add varflags, assuming an exclusion list is set
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
if varflagsexcl:
varfdeps = []
varflags = d.getVarFlags(key)
if varflags:
for f in varflags:
if f not in varflagsexcl:
varfdeps.append('%s[%s]' % (key, f))
if varfdeps:
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
except Exception as e:
raise bb.data_smart.ExpansionError(key, None, e)
except:
bb.note("Error expanding variable %s" % key)
raise
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
@@ -362,8 +338,6 @@ def generate_dependencies(d):
def inherits_class(klass, d):
val = getVar('__inherit_cache', d) or []
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):
return True
if os.path.join('classes', '%s.bbclass' % klass) in val:
return True
return False

View File

@@ -28,7 +28,7 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import copy, re, sys, traceback
import copy, re
from collections import MutableMapping
import logging
import hashlib
@@ -39,46 +39,10 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?$')
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
# Save effort.
if 'ignore' in loginfo and loginfo['ignore']:
return
# If nothing was provided, mark this as possibly unneeded.
if not loginfo:
loginfo['ignore'] = True
return
# Infer caller's likely values for variable (var) and value (value),
# to reduce clutter in the rest of the code.
if varval and ('variable' not in loginfo or 'detail' not in loginfo):
try:
raise Exception
except Exception:
tb = sys.exc_info()[2]
if parent:
above = tb.tb_frame.f_back.f_back
else:
above = tb.tb_frame.f_back
lcls = above.f_locals.items()
for k, v in lcls:
if k == 'value' and 'detail' not in loginfo:
loginfo['detail'] = v
if k == 'var' and 'variable' not in loginfo:
loginfo['variable'] = v
# Infer file/line/function from traceback
if 'file' not in loginfo:
depth = 3
if parent:
depth = 4
file, line, func, text = traceback.extract_stack(limit = depth)[0]
loginfo['file'] = file
loginfo['line'] = line
if func not in loginfo:
loginfo['func'] = func
class VariableParse:
def __init__(self, varname, d, val = None):
@@ -138,169 +102,22 @@ class ExpansionError(Exception):
self.expression = expression
self.variablename = varname
self.exception = exception
if varname:
if expression:
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
else:
self.msg = "Failure expanding variable %s: %s: %s" % (varname, type(exception).__name__, exception)
else:
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def __str__(self):
return self.msg
class IncludeHistory(object):
def __init__(self, parent = None, filename = '[TOP LEVEL]'):
self.parent = parent
self.filename = filename
self.children = []
self.current = self
def copy(self):
new = IncludeHistory(self.parent, self.filename)
for c in self.children:
new.children.append(c)
return new
def include(self, filename):
newfile = IncludeHistory(self.current, filename)
self.current.children.append(newfile)
self.current = newfile
return self
def __enter__(self):
pass
def __exit__(self, a, b, c):
if self.current.parent:
self.current = self.current.parent
else:
bb.warn("Include log: Tried to finish '%s' at top level." % filename)
return False
def emit(self, o, level = 0):
"""Emit an include history file, and its children."""
if level:
spaces = " " * (level - 1)
o.write("# %s%s" % (spaces, self.filename))
if len(self.children) > 0:
o.write(" includes:")
else:
o.write("#\n# INCLUDE HISTORY:\n#")
level = level + 1
for child in self.children:
o.write("\n")
child.emit(o, level)
class VariableHistory(object):
def __init__(self, dataroot):
self.dataroot = dataroot
self.variables = COWDictBase.copy()
def copy(self):
new = VariableHistory(self.dataroot)
new.variables = self.variables.copy()
return new
def record(self, *kwonly, **loginfo):
if not self.dataroot._tracking:
return
if len(kwonly) > 0:
raise TypeError
infer_caller_details(loginfo, parent = True)
if 'ignore' in loginfo and loginfo['ignore']:
return
if 'op' not in loginfo or not loginfo['op']:
loginfo['op'] = 'set'
if 'detail' in loginfo:
loginfo['detail'] = str(loginfo['detail'])
if 'variable' not in loginfo or 'file' not in loginfo:
raise ValueError("record() missing variable or file.")
var = loginfo['variable']
if var not in self.variables:
self.variables[var] = []
self.variables[var].append(loginfo.copy())
def variable(self, var):
if var in self.variables:
return self.variables[var]
else:
return []
def emit(self, var, oval, val, o):
history = self.variable(var)
commentVal = re.sub('\n', '\n#', str(oval))
if history:
if len(history) == 1:
o.write("#\n# $%s\n" % var)
else:
o.write("#\n# $%s [%d operations]\n" % (var, len(history)))
for event in history:
# o.write("# %s\n" % str(event))
if 'func' in event:
# If we have a function listed, this is internal
# code, not an operation in a config file, and the
# full path is distracting.
event['file'] = re.sub('.*/', '', event['file'])
display_func = ' [%s]' % event['func']
else:
display_func = ''
if 'flag' in event:
flag = '[%s] ' % (event['flag'])
else:
flag = ''
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
if len(history) > 1:
o.write("# computed:\n")
o.write('# "%s"\n' % (commentVal))
else:
o.write("#\n# $%s\n# [no history recorded]\n#\n" % var)
o.write('# "%s"\n' % (commentVal))
def get_variable_files(self, var):
"""Get the files where operations are made on a variable"""
var_history = self.variable(var)
files = []
for event in var_history:
files.append(event['file'])
return files
def get_variable_lines(self, var, f):
"""Get the line where a operation is made on a variable in file f"""
var_history = self.variable(var)
lines = []
for event in var_history:
if f== event['file']:
line = event['line']
lines.append(line)
return lines
def del_var_history(self, var):
if var in self.variables:
self.variables[var] = []
class DataSmart(MutableMapping):
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
self.dict = {}
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
# cookie monster tribute
self._special_values = special
self._seen_overrides = seen
self.expand_cache = {}
def enableTracking(self):
self._tracking = True
def disableTracking(self):
self._tracking = False
def expandWithRefs(self, s, varname):
if not isinstance(s, basestring): # sanity check
@@ -320,8 +137,6 @@ class DataSmart(MutableMapping):
break
except ExpansionError:
raise
except bb.parse.SkipPackage:
raise
except Exception as exc:
raise ExpansionError(varname, s, exc)
@@ -336,14 +151,10 @@ class DataSmart(MutableMapping):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
def finalize(self):
"""Performs final steps upon the datastore, including application of overrides"""
overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
finalize_caller = {
'op': 'finalize',
}
infer_caller_details(finalize_caller, parent = parent, varval = False)
#
# Well let us see what breaks here. We used to iterate
@@ -360,9 +171,6 @@ class DataSmart(MutableMapping):
# Then we will handle _append and _prepend
#
# We only want to report finalization once per variable overridden.
finalizes_reported = {}
for o in overrides:
# calculate '_'+override
l = len(o) + 1
@@ -375,19 +183,7 @@ class DataSmart(MutableMapping):
for var in vars:
name = var[:-l]
try:
# Report only once, even if multiple changes.
if name not in finalizes_reported:
finalizes_reported[name] = True
finalize_caller['variable'] = name
finalize_caller['detail'] = 'was: ' + str(self.getVar(name, False))
self.varhistory.record(**finalize_caller)
# Copy history of the override over.
for event in self.varhistory.variable(var):
loginfo = event.copy()
loginfo['variable'] = name
loginfo['op'] = 'override[%s]:%s' % (o, loginfo['op'])
self.varhistory.record(**loginfo)
self.setVar(name, self.getVar(var, False), op = 'finalize', file = 'override[%s]' % o, line = '')
self.setVar(name, self.getVar(var, False))
self.delVar(var)
except Exception:
logger.info("Untracked delVar")
@@ -399,12 +195,7 @@ class DataSmart(MutableMapping):
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
match = True
if o:
for o2 in o.split("_"):
if not o2 in overrides:
match = False
if not match:
if o and not o in overrides:
keep.append((a ,o))
continue
@@ -418,9 +209,9 @@ class DataSmart(MutableMapping):
# We save overrides that may be applied at some later stage
if keep:
self.setVarFlag(append, op, keep, ignore=True)
self.setVarFlag(append, op, keep)
else:
self.delVarFlag(append, op, ignore=True)
self.delVarFlag(append, op)
def initVar(self, var):
self.expand_cache = {}
@@ -448,11 +239,7 @@ class DataSmart(MutableMapping):
else:
self.initVar(var)
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
if 'op' not in loginfo:
loginfo['op'] = "set"
def setVar(self, var, value):
self.expand_cache = {}
match = __setvar_regexp__.match(var)
if match and match.group("keyword") in __setvar_keyword__:
@@ -461,22 +248,15 @@ class DataSmart(MutableMapping):
override = match.group('add')
l = self.getVarFlag(base, keyword) or []
l.append([value, override])
self.setVarFlag(base, keyword, l, ignore=True)
# And cause that to be recorded:
loginfo['detail'] = value
loginfo['variable'] = base
if override:
loginfo['op'] = '%s[%s]' % (keyword, override)
else:
loginfo['op'] = keyword
self.varhistory.record(**loginfo)
self.setVarFlag(base, keyword, l)
# todo make sure keyword is not __doc__ or __module__
# pay the cookie monster
try:
self._special_values[keyword].add(base)
self._special_values[keyword].add( base )
except KeyError:
self._special_values[keyword] = set()
self._special_values[keyword].add(base)
self._special_values[keyword].add( base )
return
@@ -492,28 +272,23 @@ class DataSmart(MutableMapping):
self._seen_overrides[override].add( var )
# setting var
self.dict[var]["_content"] = value
self.varhistory.record(**loginfo)
self.dict[var]["content"] = value
def getVar(self, var, expand=False, noweakdefault=False):
value = self.getVarFlag(var, "_content", False, noweakdefault)
value = self.getVarFlag(var, "content", False, noweakdefault)
# Call expand() separately to make use of the expand cache
if expand and value:
return self.expand(value, var)
return value
def renameVar(self, key, newkey, **loginfo):
def renameVar(self, key, newkey):
"""
Rename the variable key to newkey
"""
val = self.getVar(key, 0)
if val is not None:
loginfo['variable'] = newkey
loginfo['op'] = 'rename from %s' % key
loginfo['detail'] = val
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True)
self.setVar(newkey, val)
for i in ('_append', '_prepend'):
src = self.getVarFlag(key, i)
@@ -522,34 +297,23 @@ class DataSmart(MutableMapping):
dest = self.getVarFlag(newkey, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest, ignore=True)
self.setVarFlag(newkey, i, dest)
if i in self._special_values and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
loginfo['variable'] = key
loginfo['op'] = 'rename (to)'
loginfo['detail'] = newkey
self.varhistory.record(**loginfo)
self.delVar(key, ignore=True)
self.delVar(key)
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
newvalue = (self.getVar(var, False) or "") + value
self.setVar(var, newvalue, ignore=True)
def appendVar(self, key, value):
value = (self.getVar(key, False) or "") + value
self.setVar(key, value)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
newvalue = value + (self.getVar(var, False) or "")
self.setVar(var, newvalue, ignore=True)
def prependVar(self, key, value):
value = value + (self.getVar(key, False) or "")
self.setVar(key, value)
def delVar(self, var, **loginfo):
loginfo['detail'] = ""
loginfo['op'] = 'del'
self.varhistory.record(**loginfo)
def delVar(self, var):
self.expand_cache = {}
self.dict[var] = {}
if '_' in var:
@@ -557,14 +321,10 @@ class DataSmart(MutableMapping):
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)
def setVarFlag(self, var, flag, value, **loginfo):
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
def setVarFlag(self, var, flag, flagvalue):
if not var in self.dict:
self._makeShadowCopy(var)
self.dict[var][flag] = value
self.dict[var][flag] = flagvalue
def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
local_var = self._findVar(var)
@@ -572,13 +332,13 @@ class DataSmart(MutableMapping):
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "_content" and "defaultval" in local_var and not noweakdefault:
elif flag == "content" and "defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
return value
def delVarFlag(self, var, flag, **loginfo):
def delVarFlag(self, var, flag):
local_var = self._findVar(var)
if not local_var:
return
@@ -586,38 +346,23 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
if var in self.dict and flag in self.dict[var]:
loginfo['detail'] = ""
loginfo['op'] = 'delFlag'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
del self.dict[var][flag]
def appendVarFlag(self, var, flag, value, **loginfo):
loginfo['op'] = 'append'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
newvalue = (self.getVarFlag(var, flag, False) or "") + value
self.setVarFlag(var, flag, newvalue, ignore=True)
def appendVarFlag(self, key, flag, value):
value = (self.getVarFlag(key, flag, False) or "") + value
self.setVarFlag(key, flag, value)
def prependVarFlag(self, var, flag, value, **loginfo):
loginfo['op'] = 'prepend'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
newvalue = value + (self.getVarFlag(var, flag, False) or "")
self.setVarFlag(var, flag, newvalue, ignore=True)
def prependVarFlag(self, key, flag, value):
value = value + (self.getVarFlag(key, flag, False) or "")
self.setVarFlag(key, flag, value)
def setVarFlags(self, var, flags, **loginfo):
infer_caller_details(loginfo)
def setVarFlags(self, var, flags):
if not var in self.dict:
self._makeShadowCopy(var)
for i in flags:
if i == "_content":
if i == "content":
continue
loginfo['flag'] = i
loginfo['detail'] = flags[i]
self.varhistory.record(**loginfo)
self.dict[var][i] = flags[i]
def getVarFlags(self, var):
@@ -626,7 +371,7 @@ class DataSmart(MutableMapping):
if local_var:
for i in local_var:
if i.startswith("_"):
if i == "content":
continue
flags[i] = local_var[i]
@@ -635,21 +380,18 @@ class DataSmart(MutableMapping):
return flags
def delVarFlags(self, var, **loginfo):
def delVarFlags(self, var):
if not var in self.dict:
self._makeShadowCopy(var)
if var in self.dict:
content = None
loginfo['op'] = 'delete flags'
self.varhistory.record(**loginfo)
# try to save the content
if "_content" in self.dict[var]:
content = self.dict[var]["_content"]
if "content" in self.dict[var]:
content = self.dict[var]["content"]
self.dict[var] = {}
self.dict[var]["_content"] = content
self.dict[var]["content"] = content
else:
del self.dict[var]
@@ -661,11 +403,6 @@ class DataSmart(MutableMapping):
# we really want this to be a DataSmart...
data = DataSmart(seen=self._seen_overrides.copy(), special=self._special_values.copy())
data.dict["_data"] = self.dict
data.varhistory = self.varhistory.copy()
data.varhistory.datasmart = data
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
return data
@@ -726,33 +463,13 @@ class DataSmart(MutableMapping):
def get_hash(self):
data = {}
d = self.createCopy()
bb.data.expandKeys(d)
bb.data.update_data(d)
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
config_whitelist = set((self.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(self) if not key.startswith("__"))
for key in keys:
if key in config_whitelist:
continue
value = d.getVar(key, False) or ""
value = self.getVar(key, False) or ""
data.update({key:value})
varflags = d.getVarFlags(key)
if not varflags:
continue
for f in varflags:
data.update({'%s[%s]' % (key, f):varflags[f]})
for key in ["__BBTASKS", "__BBANONFUNCS", "__BBHANDLERS"]:
bb_list = d.getVar(key, False) or []
bb_list.sort()
data.update({key:str(bb_list)})
if key == "__BBANONFUNCS":
for i in bb_list:
value = d.getVar(i, True) or ""
data.update({i:value})
data_str = str([(k, data[k]) for k in sorted(data.keys())])
return hashlib.md5(data_str).hexdigest()
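# Illustrative sketch (invented variable names, not from the diff): the hashing
# approach above serialises sorted (key, value) pairs and md5s the string.
import hashlib

config = {"MACHINE": "qemux86", "DISTRO": "poky"}
data_str = str([(k, config[k]) for k in sorted(config.keys())])
print(hashlib.md5(data_str.encode("utf-8")).hexdigest())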

View File

@@ -32,7 +32,6 @@ import logging
import atexit
import traceback
import bb.utils
import bb.compat
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
@@ -54,7 +53,7 @@ Registered = 10
AlreadyRegistered = 14
# Internal
_handlers = bb.compat.OrderedDict()
_handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0
@@ -105,18 +104,6 @@ def print_ui_queue():
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
# First check to see if we have any proper messages
msgprint = False
for event in ui_queue:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
logger.handle(event)
msgprint = True
if msgprint:
return
# Nope, so just print all of the messages we have (including debug messages)
for event in ui_queue:
if isinstance(event, logging.LogRecord):
logger.handle(event)
@@ -188,7 +175,7 @@ def register(name, handler):
_handlers[name] = noop
return
env = {}
bb.utils.better_exec(code, env)
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
else:
@@ -325,14 +312,6 @@ class BuildCompleted(BuildBase, OperationCompleted):
OperationCompleted.__init__(self, total, "Building Failed")
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build aborted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
self._type = type
self._free = freespace
self._mountpoint = mountpoint
class NoProvider(Event):
"""No Provider for an Event"""
@@ -510,15 +489,6 @@ class MsgFatal(MsgBase):
class MsgPlain(MsgBase):
"""General output"""
class LogExecTTY(Event):
"""Send event containing program to spawn on tty of the logger"""
def __init__(self, msg, prog, sleep_delay, retries):
Event.__init__(self)
self.msg = msg
self.prog = prog
self.sleep_delay = sleep_delay
self.retries = retries
class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
@@ -562,23 +532,6 @@ class SanityCheckFailed(Event):
"""
Event to indicate sanity check has failed
"""
def __init__(self, msg, network_error=False):
def __init__(self, msg):
Event.__init__(self)
self._msg = msg
self._network_error = network_error
class NetworkTest(Event):
"""
Event to start network test
"""
class NetworkTestPassed(Event):
"""
Event to indicate network test has passed
"""
class NetworkTestFailed(Event):
"""
Event to indicate network test has failed
"""

View File

@@ -32,14 +32,7 @@ class TracebackEntry(namedtuple.abc):
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
try:
if not arginfo.args:
return '', None
# There have been reports from the field of Python 2.6 builds which don't
# return a namedtuple here but simply a tuple, so fall back gracefully if
# args isn't present.
except AttributeError:
if not arginfo.args:
return '', None
firstarg = arginfo.args[0]

View File

@@ -8,7 +8,6 @@ BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -29,18 +28,10 @@ from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import urllib
import urlparse
if 'git' not in urlparse.uses_netloc:
urlparse.uses_netloc.append('git')
from urlparse import urlparse
import operator
import bb.persist_data, bb.utils
import bb.checksum
from bb import data
__version__ = "2"
_checksum_cache = bb.checksum.FileChecksumCache()
logger = logging.getLogger("BitBake.Fetcher")
@@ -59,7 +50,7 @@ class MalformedUrl(BBFetchException):
msg = "The URL: '%s' is invalid and cannot be interpreted" % url
self.url = url
BBFetchException.__init__(self, msg)
self.args = (url,)
self.args = url
class FetchError(BBFetchException):
"""General fetcher exception when something happens incorrectly"""
@@ -72,15 +63,6 @@ class FetchError(BBFetchException):
BBFetchException.__init__(self, msg)
self.args = (message, url)
class ChecksumError(FetchError):
"""Exception when mismatched checksum encountered"""
def __init__(self, message, url = None, checksum = None):
self.checksum = checksum
FetchError.__init__(self, message, url)
class NoChecksumError(FetchError):
"""Exception when no checksum is specified, but BB_STRICT_CHECKSUM is set"""
class UnpackError(BBFetchException):
"""General fetcher exception when something happens incorrectly when unpacking"""
def __init__(self, message, url):
@@ -95,7 +77,7 @@ class NoMethodError(BBFetchException):
msg = "Could not find a fetcher which supports the URL: '%s'" % url
self.url = url
BBFetchException.__init__(self, msg)
self.args = (url,)
self.args = url
class MissingParameterError(BBFetchException):
"""Exception raised when a fetch method is missing a critical parameter in the url"""
@@ -117,215 +99,19 @@ class ParameterError(BBFetchException):
class NetworkAccess(BBFetchException):
"""Exception raised when network access is disabled but it is required."""
def __init__(self, url, cmd):
msg = "Network access disabled through BB_NO_NETWORK but access requested with command %s (for url %s)" % (cmd, url)
msg = "Network access disabled through BB_NO_NETWORK but access rquested with command %s (for url %s)" % (cmd, url)
self.url = url
self.cmd = cmd
BBFetchException.__init__(self, msg)
self.args = (url, cmd)
class NonLocalMethod(Exception):
def __init__(self):
Exception.__init__(self)
class URI(object):
"""
A class representing a generic URI, with methods for
accessing the URI components, and stringifies to the
URI.
It is constructed by calling it with a URI, or setting
the attributes manually:
uri = URI("http://example.com/")
uri = URI()
uri.scheme = 'http'
uri.hostname = 'example.com'
uri.path = '/'
It has the following attributes:
* scheme (read/write)
* userinfo (authentication information) (read/write)
* username (read/write)
* password (read/write)
Note, password is deprecated as of RFC 3986.
* hostname (read/write)
* port (read/write)
* hostport (read only)
"hostname:port", if both are set, otherwise just "hostname"
* path (read/write)
* path_quoted (read/write)
A URI quoted version of path
* params (dict) (read/write)
* relative (bool) (read only)
True if this is a "relative URI", (e.g. file:foo.diff)
It stringifies to the URI itself.
Some notes about relative URIs: while it's specified that
a URI beginning with <scheme>:// should either be directly
followed by a hostname or a /, the old URI handling of the
fetch2 library did not comform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way comforming to bitbake's current usage. This URI class
supports the following:
file:relative/path.diff (IETF compliant)
git:relative/path.git (IETF compliant)
git:///absolute/path.git (IETF compliant)
file:///absolute/path.diff (IETF compliant)
file://relative/path.diff (not IETF compliant)
But it does not support the following:
file://hostname/absolute/path.diff (would be IETF compliant)
Note that the last case only applies to a list of
"whitelisted" schemes (currently only file://), that requires
its URIs to not have a network location.
"""
_relative_schemes = ['file', 'git']
_netloc_forbidden = ['file']
def __init__(self, uri=None):
self.scheme = ''
self.userinfo = ''
self.hostname = ''
self.port = None
self._path = ''
self.params = {}
self.relative = False
if not uri:
return
urlp = urlparse(uri)
self.scheme = urlp.scheme
# Convert URI to be relative
if urlp.scheme in self._netloc_forbidden:
uri = re.sub("(?<=:)//(?!/)", "", uri, 1)
urlp = urlparse(uri)
# Identify if the URI is relative or not
if urlp.scheme in self._relative_schemes and \
re.compile("^\w+:(?!//)").match(uri):
self.relative = True
if not self.relative:
self.hostname = urlp.hostname or ''
self.port = urlp.port
self.userinfo += urlp.username or ''
if urlp.password:
self.userinfo += ':%s' % urlp.password
# Do support params even for URI schemes that Python's
# urlparse doesn't support params for.
path = ''
param_str = ''
if not urlp.params:
path, param_str = (list(urlp.path.split(";", 1)) + [None])[:2]
else:
path = urlp.path
param_str = urlp.params
self.path = urllib.unquote(path)
if param_str:
self.params = self._param_dict(param_str)
def __str__(self):
userinfo = self.userinfo
if userinfo:
userinfo += '@'
return "%s:%s%s%s%s%s" % (
self.scheme,
'' if self.relative else '//',
userinfo,
self.hostport,
self.path_quoted,
self._param_str)
@property
def _param_str(self):
ret = ''
for key, val in self.params.items():
ret += ";%s=%s" % (key, val)
return ret
def _param_dict(self, param_str):
parm = {}
for keyval in param_str.split(";"):
key, val = keyval.split("=", 1)
parm[key] = val
return parm
@property
def hostport(self):
if not self.port:
return self.hostname
return "%s:%d" % (self.hostname, self.port)
@property
def path_quoted(self):
return urllib.quote(self.path)
@path_quoted.setter
def path_quoted(self, path):
self.path = urllib.unquote(path)
@property
def path(self):
return self._path
@path.setter
def path(self, path):
self._path = path
if re.compile("^/").match(path):
self.relative = False
else:
self.relative = True
@property
def username(self):
if self.userinfo:
return (self.userinfo.split(":", 1))[0]
return ''
@username.setter
def username(self, username):
self.userinfo = username
if self.password:
self.userinfo += ":%s" % self.password
@property
def password(self):
if self.userinfo and ":" in self.userinfo:
return (self.userinfo.split(":", 1))[1]
return ''
@password.setter
def password(self, password):
self.userinfo = "%s:%s" % (self.username, password)
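# Illustrative sketch (invented URL, not from the diff): how the URI class
# documented above splits a bitbake-style URL, assuming the class exactly as
# shown here.
uri = URI("git://git.example.com/repo.git;protocol=https;branch=master")
print(uri.scheme)    # "git"
print(uri.hostport)  # "git.example.com"
print(uri.path)      # "/repo.git"
print(uri.params)    # {"protocol": "https", "branch": "master"}
uri.port = 9418
print(str(uri))      # re-stringifies with the new port included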
def decodeurl(url):
"""Decodes an URL into the tokens (scheme, network location, path,
user, password, parameters).
"""
m = re.compile('(?P<type>[^:]*)://((?P<user>[^/]+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
if not m:
raise MalformedUrl(url)
@@ -358,14 +144,14 @@ def decodeurl(url):
s1, s2 = s.split('=')
p[s1] = s2
return type, host, urllib.unquote(path), user, pswd, p
return (type, host, path, user, pswd, p)
def encodeurl(decoded):
"""Encodes a URL from tokens (scheme, network location, path,
user, password, parameters).
"""
type, host, path, user, pswd, p = decoded
(type, host, path, user, pswd, p) = decoded
if not path:
raise MissingParameterError('path', "encoded from the data %s" % str(decoded))
@@ -379,17 +165,14 @@ def encodeurl(decoded):
url += "@"
if host and type != "file":
url += "%s" % host
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
url += "%s" % urllib.quote(path)
url += "%s" % path
if p:
for parm in p:
url += ";%s=%s" % (parm, p[parm])
return url
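# Illustrative sketch (invented URL, not from the diff): a round trip through
# decodeurl()/encodeurl() as defined above; user and password default to ''.
decoded = decodeurl("http://example.com/releases/foo-1.0.tar.gz;name=foo")
# ('http', 'example.com', '/releases/foo-1.0.tar.gz', '', '', {'name': 'foo'})
print(encodeurl(decoded))
# "http://example.com/releases/foo-1.0.tar.gz;name=foo"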
def uri_replace(ud, uri_find, uri_replace, replacements, d):
def uri_replace(ud, uri_find, uri_replace, d):
if not ud.url or not uri_find or not uri_replace:
logger.error("uri_replace: passed an undefined value, not replacing")
return None
@@ -398,47 +181,27 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d):
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
for loc, i in enumerate(uri_find_decoded):
for i in uri_find_decoded:
loc = uri_find_decoded.index(i)
result_decoded[loc] = uri_decoded[loc]
regexp = i
if loc == 0 and regexp and not regexp.endswith("$"):
# Leaving the type unanchored can mean "https" matching "file" can become "files"
# which is clearly undesirable.
regexp += "$"
if loc == 5:
# Handle URL parameters
if i:
# Any specified URL parameters must match
for k in uri_replace_decoded[loc]:
if uri_decoded[loc][k] != uri_replace_decoded[loc][k]:
return None
# Overwrite any specified replacement parameters
for k in uri_replace_decoded[loc]:
for l in replacements:
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
if isinstance(i, basestring):
if (re.match(i, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
else:
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
basename = None
if ud.mirrortarball:
basename = os.path.basename(ud.mirrortarball)
elif ud.localpath:
basename = os.path.basename(ud.localpath)
if basename and result_decoded[loc].endswith("/"):
result_decoded[loc] = os.path.dirname(result_decoded[loc])
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
for k in replacements:
uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
#bb.note("%s %s %s" % (regexp, uri_replace_decoded[loc], uri_decoded[loc]))
result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc])
if loc == 2:
# Handle path manipulations
basename = None
if uri_decoded[0] != uri_replace_decoded[0] and ud.mirrortarball:
# If the source and destination url types differ, must be a mirrortarball mapping
basename = os.path.basename(ud.mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
return None
return None
result = encodeurl(result_decoded)
if result == ud.url:
return None
@@ -469,18 +232,10 @@ def fetcher_init(d):
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
m.init(d)
def fetcher_parse_save(d):
_checksum_cache.save_extras(d)
def fetcher_parse_done(d):
_checksum_cache.save_merge(d)
def fetcher_compare_revisions(d):
"""
Compare the revisions in the persistent cache with current values and
@@ -507,37 +262,39 @@ def verify_checksum(u, ud, d):
"""
verify the MD5 and SHA256 checksum for downloaded src
Raises a FetchError if one or both of the SRC_URI checksums do not match
the downloaded file, or if BB_STRICT_CHECKSUM is set and there are no
checksums specified.
return value:
- True: a checksum matched
- False: neither checksum matched
if a checksum is missing in the recipe file, "BB_STRICT_CHECKSUM" decides the return value:
if BB_STRICT_CHECKSUM = "1" then return False as unmatched, otherwise return True as
matched
"""
if not ud.method.supports_checksum(ud):
if not ud.type in ["http", "https", "ftp", "ftps"]:
return
md5data = bb.utils.md5_file(ud.localpath)
sha256data = bb.utils.sha256_file(ud.localpath)
if ud.method.recommends_checksum(ud):
# If strict checking enabled and neither sum defined, raise error
strict = d.getVar("BB_STRICT_CHECKSUM", True) or None
if (strict and ud.md5_expected == None and ud.sha256_expected == None):
raise NoChecksumError('No checksum specified for %s, please add at least one to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"' %
(ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data), u)
# If strict checking enabled and neither sum defined, raise error
strict = d.getVar("BB_STRICT_CHECKSUM", True) or None
if (strict and ud.md5_expected == None and ud.sha256_expected == None):
raise FetchError('No checksum specified for %s, please add at least one to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"' %
(ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data), u)
# Log missing sums so user can more easily add them
if ud.md5_expected == None:
logger.warn('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
# Log missing sums so user can more easily add them
if ud.md5_expected == None:
logger.warn('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
if ud.sha256_expected == None:
logger.warn('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
if ud.sha256_expected == None:
logger.warn('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
md5mismatch = False
sha256mismatch = False
@@ -551,20 +308,14 @@ def verify_checksum(u, ud, d):
# We want to alert the user if a checksum is defined in the recipe but
# it does not match.
msg = ""
mismatch = False
if md5mismatch and ud.md5_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'md5', md5data, ud.md5_expected)
mismatch = True;
if sha256mismatch and ud.sha256_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'sha256', sha256data, ud.sha256_expected)
mismatch = True;
if mismatch:
msg = msg + '\nIf this change is expected (e.g. you have upgraded to a new version without updating the checksums) then you can use these lines within the recipe:\nSRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"\nOtherwise you should retry the download and/or check with upstream to determine if the file has become corrupted or otherwise unexpectedly modified.\n' % (ud.md5_name, md5data, ud.sha256_name, sha256data)
if len(msg):
raise ChecksumError('Checksum mismatch!%s' % msg, u, md5data)
raise FetchError('Checksum mismatch!%s' % msg, u)
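# Illustrative sketch (hypothetical path and checksum, not from the diff): the
# verification above amounts to hashing the downloaded file and comparing it
# with the recipe's expected sums; bb.utils.md5_file/sha256_file wrap this idea.
import hashlib

def md5_file(path, blocksize=8192):
    # Hash the file in chunks to avoid loading it all into memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(blocksize), b""):
            h.update(chunk)
    return h.hexdigest()

expected_md5 = "0123456789abcdef0123456789abcdef"   # e.g. from SRC_URI[md5sum]
actual_md5 = md5_file("downloads/foo-1.0.tar.gz")
if actual_md5 != expected_md5:
    raise Exception("Checksum mismatch: got %s, expected %s" % (actual_md5, expected_md5))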
def update_stamp(u, ud, d):
@@ -625,18 +376,10 @@ def get_srcrev(d):
if not format:
raise FetchError("The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
autoinc = False
autoinc_templ = 'AUTOINC+'
for scm in scms:
ud = urldata[scm]
for name in ud.names:
rev = ud.method.sortable_revision(scm, ud, d, name)
if rev.startswith(autoinc_templ):
if not autoinc:
autoinc = True
format = "%s%s" % (autoinc_templ, format)
rev = rev[len(autoinc_templ):]
format = format.replace(name, rev)
return format
@@ -660,16 +403,11 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['HOME', 'PATH',
'HTTP_PROXY', 'http_proxy',
'HTTPS_PROXY', 'https_proxy',
'FTP_PROXY', 'ftp_proxy',
'FTPS_PROXY', 'ftps_proxy',
'NO_PROXY', 'no_proxy',
'ALL_PROXY', 'all_proxy',
'GIT_PROXY_COMMAND',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD']
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
'https_proxy', 'no_proxy', 'ALL_PROXY', 'all_proxy',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME',
'GIT_PROXY_IGNORE', 'SOCKS5_USER', 'SOCKS5_PASSWD']
for var in exportvars:
val = d.getVar(var, True)
@@ -686,16 +424,11 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
elif e.stderr:
output = "output:\n%s" % e.stderr
else:
output = "no output"
error_message = "Fetch command failed with exit code %s, %s" % (e.exitcode, output)
except bb.process.CmdError as e:
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
except bb.process.ExecutionError as e:
error_message = "Fetch command %s failed with exit code %s, output:\n%s" % (e.command, e.exitcode, e.stderr)
if not success:
for f in cleanup:
try:
@@ -720,20 +453,13 @@ def build_mirroruris(origud, mirrors, ld):
uris = []
uds = []
replacements = {}
replacements["TYPE"] = origud.type
replacements["HOST"] = origud.host
replacements["PATH"] = origud.path
replacements["BASENAME"] = origud.path.split("/")[-1]
replacements["MIRRORNAME"] = origud.host.replace(':','.') + origud.path.replace('/', '.').replace('*', '.')
def adduri(uri, ud, uris, uds):
for line in mirrors:
try:
(find, replace) = line
except ValueError:
continue
newuri = uri_replace(ud, find, replace, replacements, ld)
newuri = uri_replace(ud, find, replace, ld)
if not newuri or newuri in uris or newuri == origud.url:
continue
try:
@@ -756,19 +482,6 @@ def build_mirroruris(origud, mirrors, ld):
return uris, uds
def rename_bad_checksum(ud, suffix):
"""
Renames files to have suffix from parameter
"""
if ud.localpath is None:
return
new_localpath = "%s_bad-checksum_%s" % (ud.localpath, suffix)
bb.warn("Renaming %s to %s" % (ud.localpath, new_localpath))
bb.utils.movefile(ud.localpath, new_localpath)
def try_mirror_url(newuri, origud, ud, ld, check = False):
# Return of None or a value means we're finished
# False means try another url
@@ -797,7 +510,6 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
dldir = ld.getVar("DL_DIR", True)
if origud.mirrortarball and os.path.basename(ud.localpath) == os.path.basename(origud.mirrortarball) \
and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
open(ud.donestamp, 'w').close()
dest = os.path.join(dldir, os.path.basename(ud.localpath))
if not os.path.exists(dest):
@@ -805,11 +517,7 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
return None
# Otherwise the result is a local file:// and we symlink to it
if not os.path.exists(origud.localpath):
if os.path.islink(origud.localpath):
# Broken symbolic link
os.unlink(origud.localpath)
os.symlink(ud.localpath, origud.localpath)
os.symlink(ud.localpath, origud.localpath)
update_stamp(newuri, origud, ld)
return ud.localpath
@@ -817,15 +525,8 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
raise
except bb.fetch2.BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warn("Mirror checksum failure for url %s (original url: %s)\nCleaning and trying again." % (newuri, origud.url))
logger.warn(str(e))
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
try:
ud.method.clean(ud, ld)
except UnboundLocalError:
@@ -872,98 +573,21 @@ def srcrev_internal_helper(ud, d, name):
if not rev:
rev = d.getVar("SRCREV_%s" % name, True)
if not rev:
rev = d.getVar("SRCREV_pn-%s" % pn, True)
rev = d.getVar("SRCREV_pn-%s" % pn, True)
if not rev:
rev = d.getVar("SRCREV", True)
if rev == "INVALID":
var = "SRCREV_pn-%s" % pn
if name != '':
var = "SRCREV_%s_pn-%s" % (name, pn)
raise FetchError("Please set %s to a valid value" % var, ud.url)
raise FetchError("Please set SRCREV to a valid value", ud.url)
if rev == "AUTOINC":
rev = ud.method.latest_revision(ud.url, ud, d, name)
return rev
def get_checksum_file_list(d):
""" Get a list of files checksum in SRC_URI
Returns the resolved local paths of all local file entries in
SRC_URI as a space-separated string
"""
fetch = Fetch([], d, cache = False, localonly = True)
dl_dir = d.getVar('DL_DIR', True)
filelist = []
for u in fetch.urls:
ud = fetch.ud[u]
if ud and isinstance(ud.method, local.Local):
ud.setup_localpath(d)
f = ud.localpath
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN', True), os.path.basename(f)))
else:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN', True), os.path.basename(f)))
continue
filelist.append(f)
return " ".join(filelist)
def get_file_checksums(filelist, pn):
"""Get a list of the checksums for a list of local files
Returns the checksums for a list of local files, caching the results as
it proceeds
"""
def checksum_file(f):
try:
checksum = _checksum_cache.get_checksum(f)
except OSError as e:
import traceback
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
checksums = []
for pth in filelist.split():
checksum = None
if '*' in pth:
# Handle globs
import glob
for f in glob.glob(pth):
checksum = checksum_file(f)
if checksum:
checksums.append((f, checksum))
elif os.path.isdir(pth):
# Handle directories
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
checksums.append((fullpth, checksum))
else:
checksum = checksum_file(pth)
if checksum:
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums
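A self-contained sketch of the same glob/directory/file walk (hashlib stands in for the checksum cache used above; the function and paths are illustrative, not BitBake API):

import glob
import hashlib
import operator
import os

def checksum_file(path):
    # Stand-in for the cached checksum lookup used in the code above.
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def file_checksums(filelist):
    checksums = []
    for pth in filelist.split():
        if '*' in pth:
            for f in glob.glob(pth):
                checksums.append((f, checksum_file(f)))
        elif os.path.isdir(pth):
            for root, _, files in os.walk(pth):
                for name in files:
                    fullpth = os.path.join(root, name)
                    checksums.append((fullpth, checksum_file(fullpth)))
        elif os.path.isfile(pth):
            checksums.append((pth, checksum_file(pth)))
    checksums.sort(key=operator.itemgetter(1))
    return checksums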
class FetchData(object):
"""
A class which represents the fetcher state for a given URI.
"""
def __init__(self, url, d, localonly = False):
def __init__(self, url, d):
# localpath is the location of a downloaded result. If not set, the file is local.
self.donestamp = None
self.localfile = ""
@@ -971,7 +595,6 @@ class FetchData(object):
self.lockfile = None
self.mirrortarball = None
self.basename = None
self.basepath = None
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(data.expand(url, d))
self.date = self.getSRCDate(d)
self.url = url
@@ -989,14 +612,10 @@ class FetchData(object):
self.sha256_name = "sha256sum"
if self.md5_name in self.parm:
self.md5_expected = self.parm[self.md5_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
self.md5_expected = None
else:
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name)
if self.sha256_name in self.parm:
self.sha256_expected = self.parm[self.sha256_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
self.sha256_expected = None
else:
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name)
@@ -1011,13 +630,6 @@ class FetchData(object):
if not self.method:
raise NoMethodError(url)
if localonly and not isinstance(self.method, local.Local):
raise NonLocalMethod()
if self.parm.get("proto", None) and "protocol" not in self.parm:
logger.warn('Consider updating %s recipe to use "protocol" not "proto" in SRC_URI.', d.getVar('PN', True))
self.parm["protocol"] = self.parm.get("proto", None)
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
@@ -1028,14 +640,8 @@ class FetchData(object):
elif self.localfile:
self.localpath = self.method.localpath(self.url, self, d)
dldir = d.getVar("DL_DIR", True)
# Note: .done and .lock files should always be in DL_DIR whereas localpath may not be.
if self.localpath and self.localpath.startswith(dldir):
basepath = self.localpath
elif self.localpath:
basepath = dldir + os.sep + os.path.basename(self.localpath)
else:
basepath = dldir + os.sep + (self.basepath or self.basename)
# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = d.expand("${DL_DIR}/%s" % os.path.basename(self.localpath or self.basename))
self.donestamp = basepath + '.done'
self.lockfile = basepath + '.lock'
@@ -1088,26 +694,6 @@ class FetchMethod(object):
"""
return os.path.join(data.getVar("DL_DIR", d, True), urldata.localfile)
def supports_checksum(self, urldata):
"""
Is localpath something that can be represented by a checksum?
"""
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath) == True:
return False
if urldata.localpath.find("*") != -1:
return False
return True
def recommends_checksum(self, urldata):
"""
Is this a backend where checksumming is recommended (should warnings
be displayed if there is no checksum)?
"""
return False
def _strip_leading_slashes(self, relpath):
"""
Remove leading slash as os.path.join can't cope
@@ -1158,7 +744,7 @@ class FetchMethod(object):
dots = file.split(".")
if dots[-1] in ['gz', 'bz2', 'Z']:
efile = os.path.join(rootdir, os.path.basename('.'.join(dots[0:-1])))
efile = os.path.join(data.getVar('WORKDIR', True),os.path.basename('.'.join(dots[0:-1])))
else:
efile = file
cmd = None
@@ -1188,34 +774,29 @@ class FetchMethod(object):
if dos:
cmd = '%s -a' % cmd
cmd = "%s '%s'" % (cmd, file)
elif file.endswith('.rpm') or file.endswith('.srpm'):
elif file.endswith('.src.rpm') or file.endswith('.srpm'):
if 'extract' in urldata.parm:
unpack_file = urldata.parm.get('extract')
cmd = 'rpm2cpio.sh %s | cpio -id %s' % (file, unpack_file)
cmd = 'rpm2cpio.sh %s | cpio -i %s' % (file, unpack_file)
iterate = True
iterate_file = unpack_file
else:
cmd = 'rpm2cpio.sh %s | cpio -id' % (file)
elif file.endswith('.deb') or file.endswith('.ipk'):
cmd = 'ar -p %s data.tar.gz | zcat | tar --no-same-owner -xpf -' % file
cmd = 'rpm2cpio.sh %s | cpio -i' % (file)
if not unpack or not cmd:
# If file == dest, then avoid any copies, as we already put the file into dest!
dest = os.path.join(rootdir, os.path.basename(file))
if (file != dest) and not (os.path.exists(dest) and os.path.samefile(file, dest)):
if os.path.isdir(file):
# If for example we're asked to copy file://foo/bar, we need to unpack the result into foo/bar
basepath = getattr(urldata, "basepath", None)
filesdir = os.path.realpath(data.getVar("FILESDIR", True))
destdir = "."
if basepath and basepath.endswith("/"):
basepath = basepath.rstrip("/")
elif basepath:
basepath = os.path.dirname(basepath)
if basepath and basepath.find("/") != -1:
destdir = basepath[:basepath.rfind('/')]
if file[0:len(filesdir)] == filesdir:
destdir = file[len(filesdir):file.rfind('/')]
destdir = destdir.strip('/')
if destdir != "." and not os.access("%s/%s" % (rootdir, destdir), os.F_OK):
os.makedirs("%s/%s" % (rootdir, destdir))
if len(destdir) < 1:
destdir = "."
elif not os.access("%s/%s" % (rootdir, destdir), os.F_OK):
os.makedirs("%s/%s" % (rootdir, destdir))
cmd = 'cp -pPR %s %s/%s/' % (file, rootdir, destdir)
#cmd = 'tar -cf - -C "%d" -ps . | tar -xf - -C "%s/%s/"' % (file, rootdir, destdir)
else:
@@ -1239,9 +820,7 @@ class FetchMethod(object):
bb.utils.mkdirhier(newdir)
os.chdir(newdir)
path = data.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
cmd = "PATH=\"%s\" %s" % (data.getVar('PATH', True), cmd)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
@@ -1277,6 +856,23 @@ class FetchMethod(object):
logger.info("URL %s could not be checked for status since no method exists.", url)
return True
def localcount_internal_helper(ud, d, name):
"""
Return:
a) a locked localcount if specified
b) None otherwise
"""
localcount = None
if name != '':
pn = d.getVar("PN", True)
localcount = d.getVar("LOCALCOUNT_" + name, True)
if not localcount:
localcount = d.getVar("LOCALCOUNT", True)
return localcount
localcount_internal_helper = staticmethod(localcount_internal_helper)
def latest_revision(self, url, ud, d, name):
"""
Look in the cache for the latest revision, if not present ask the SCM.
@@ -1299,18 +895,43 @@ class FetchMethod(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
key = self.generate_revision_key(url, ud, d, name)
latest_rev = self._build_revision(url, ud, d, name)
return 'AUTOINC+%s' % str(latest_rev)
last_rev = localcounts.get(key + '_rev')
uselocalcount = d.getVar("BB_LOCALCOUNT_OVERRIDE", True) or False
count = None
if uselocalcount:
count = FetchMethod.localcount_internal_helper(ud, d, name)
if count is None:
count = localcounts.get(key + '_count') or "0"
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
buildindex_provided = hasattr(self, "_sortable_buildindex")
if buildindex_provided:
count = self._sortable_buildindex(url, ud, d, latest_rev)
if count is None:
count = "0"
elif uselocalcount or buildindex_provided:
count = str(count)
else:
count = str(int(count) + 1)
localcounts[key + '_rev'] = latest_rev
localcounts[key + '_count'] = count
return str(count + "+" + latest_rev)
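For illustration only (the revision and count below are assumed values), the two sortable-revision shapes produced by the code paths above are:

latest_rev = "f1f9b5c59b8f401227d97ef4a2eea222b83a09fc"  # assumed example revision
print('AUTOINC+%s' % latest_rev)    # shape used by the simplified path
print('%s+%s' % ("3", latest_rev))  # shape used by the localcount-based path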
def generate_revision_key(self, url, ud, d, name):
key = self._revision_key(url, ud, d, name)
return "%s-%s" % (key, d.getVar("PN", True) or "")
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
def __init__(self, urls, d, cache = True):
if len(urls) == 0:
urls = d.getVar("SRC_URI", True).split()
self.urls = urls
@@ -1323,12 +944,7 @@ class Fetch(object):
for url in urls:
if url not in self.ud:
try:
self.ud[url] = FetchData(url, d, localonly)
except NonLocalMethod:
if localonly:
self.ud[url] = None
pass
self.ud[url] = FetchData(url, d)
if fn and cache:
urldata_cache[fn] = self.ud
@@ -1402,18 +1018,12 @@ class Fetch(object):
raise
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warn("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(1, str(e))
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warn('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(1, str(e))
logger.warn('Failed to fetch URL %s' % u)
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
m.clean(ud, self.d)
if os.path.isfile(ud.localpath):
bb.utils.remove(ud.localpath)
logger.debug(1, "Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS', True))
localpath = try_mirrors (self.d, ud, mirrors)
@@ -1425,13 +1035,6 @@ class Fetch(object):
update_stamp(u, ud, self.d)
except BBFetchException as e:
if isinstance(e, NoChecksumError):
logger.error("%s" % str(e))
elif isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
bb.utils.unlockfile(lf)
@@ -1515,13 +1118,11 @@ class Fetch(object):
from . import cvs
from . import git
from . import gitsm
from . import local
from . import svn
from . import wget
from . import svk
from . import ssh
from . import sftp
from . import perforce
from . import bzr
from . import hg
@@ -1532,11 +1133,9 @@ methods.append(local.Local())
methods.append(wget.Wget())
methods.append(svn.Svn())
methods.append(git.Git())
methods.append(gitsm.GitSM())
methods.append(cvs.Cvs())
methods.append(svk.Svk())
methods.append(ssh.SSH())
methods.append(sftp.SFTP())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())


@@ -60,7 +60,7 @@ class Bzr(FetchMethod):
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('protocol', 'http')
proto = ud.parm.get('proto', 'http')
bzrroot = ud.host + ud.path
@@ -73,7 +73,7 @@ class Bzr(FetchMethod):
options.append("-r %s" % ud.revision)
if command == "fetch":
bzrcmd = "%s branch %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:


@@ -29,6 +29,7 @@ BitBake build tools.
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod, FetchError, MissingParameterError, logger
from bb.fetch2 import runfetchcmd
@@ -63,7 +64,7 @@ class Cvs(FetchMethod):
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = bb.data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, url, ud, d):
if (ud.date == "now"):
@@ -87,10 +88,10 @@ class Cvs(FetchMethod):
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = d.getVar('CVS_PROXY_HOST', True)
cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = d.getVar('CVS_PROXY_PORT', True)
cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
@@ -110,9 +111,15 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
cvsbasecmd = d.getVar("FETCHCMD_cvs", True)
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
data.setVar('CVSROOT', cvsroot, localdata)
data.setVar('CVSCOOPTS', " ".join(options), localdata)
data.setVar('CVSMODULE', ud.module, localdata)
cvscmd = data.getVar('FETCHCOMMAND', localdata, True)
cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, True)
if cvs_rsh:
cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
@@ -120,8 +127,8 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
@@ -162,9 +169,12 @@ class Cvs(FetchMethod):
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar("CVSDIR", True), pkg)
pkg = data.expand('${PN}', d)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)


@@ -11,8 +11,8 @@ Supported SRC_URI options are:
- branch
The git branch to retrieve from. The default is "master"
This option also supports multiple branch fetching, with branches
separated by commas. In multiple branches case, the name option
this option also support multiple branches fetching, branches
are seperated by comma. in multiple branches case, the name option
must have the same number of names to match the branches, which is
used to specify the SRC_REV for the branch
e.g:
@@ -25,13 +25,13 @@ Supported SRC_URI options are:
- protocol
The method to use to access the repository. Common options are "git",
"http", "https", "file", "ssh" and "rsync". The default is "git".
"http", "file" and "rsync". The default is "git"
- rebaseable
rebaseable indicates that the upstream git repo may rebase in the future,
and current revision may disappear from upstream repo. This option will
remind fetcher to preserve local cache carefully for future use.
The default value is "0", set rebaseable=1 for rebaseable git repo.
reminder fetcher to preserve local cache carefully for future use.
The default value is "0", set rebaseable=1 for rebaseable git repo
- nocheckout
Don't checkout source code when unpacking. set this option for the recipe
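Taken together, an illustrative SRC_URI using these options (values assumed, not part of this changeset) might read:

SRC_URI = "git://host.example.com/path/repo.git;protocol=https;branch=master,stable;name=master,stable"
SRCREV_master = "${AUTOREV}"
SRCREV_stable = "${AUTOREV}"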
@@ -71,17 +71,17 @@ from bb.fetch2 import logger
class Git(FetchMethod):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
pass
#
# Only enable _sortable revision if the key is set
#
if d.getVar("BB_GIT_CLONE_FOR_SRCREV", True):
self._sortable_buildindex = self._sortable_buildindex_disabled
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['git']
def supports_checksum(self, urldata):
return False
def urldata_init(self, ud, d):
"""
init git specific variable within url data
@@ -123,11 +123,10 @@ class Git(FetchMethod):
for name in ud.names:
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.branches[name] = ud.revisions[name]
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.').replace('*', '.'))
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
@@ -136,9 +135,8 @@ class Git(FetchMethod):
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
gitdir = d.getVar("GITDIR", True) or (d.getVar("DL_DIR", True) + "/git2/")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.localfile = ud.clonedir
@@ -185,12 +183,8 @@ class Git(FetchMethod):
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
@@ -201,14 +195,14 @@ class Git(FetchMethod):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
@@ -217,10 +211,6 @@ class Git(FetchMethod):
def build_mirror_data(self, url, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
@@ -238,7 +228,7 @@ class Git(FetchMethod):
def_destsuffix = "git/"
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
destdir = os.path.join(destdir, destsuffix)
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
@@ -246,23 +236,7 @@ class Git(FetchMethod):
if ud.bareclone:
cloneflags += " --mirror"
# Versions of git prior to 1.7.9.2 have issues where foo.git and foo get confused
# and you end up with some horrible union of the two when you attempt to clone it
# The least invasive workaround seems to be a symlink to the real directory to
# fool git into ignoring any .git version that may also be present.
#
# The issue is fixed in more recent versions of git so we can drop this hack in future
# when that version becomes common enough.
clonedir = ud.clonedir
if not ud.path.endswith(".git"):
indirectiondir = destdir[:-1] + ".indirectionsymlink"
if os.path.exists(indirectiondir):
os.remove(indirectiondir)
bb.utils.mkdirhier(os.path.dirname(indirectiondir))
os.symlink(ud.clonedir, indirectiondir)
clonedir = indirectiondir
runfetchcmd("git clone %s %s/ %s" % (cloneflags, clonedir, destdir), d)
runfetchcmd("git clone %s %s/ %s" % (cloneflags, ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
@@ -307,8 +281,7 @@ class Git(FetchMethod):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd)
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
@@ -317,6 +290,38 @@ class Git(FetchMethod):
def _build_revision(self, url, ud, d, name):
return ud.revisions[name]
def _sortable_buildindex_disabled(self, url, ud, d, rev):
"""
Return a suitable buildindex for the revision specified. This is done by counting revisions
using "git rev-list" which may or may not work in different circumstances.
"""
cwd = os.getcwd()
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
logger.debug(1, "GIT repository for %s does not exist in %s. \
Downloading.", url, ud.clonedir)
self.download(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s does not exist in %s after \
download. Cannot get sortable buildnumber, using \
old value", url, ud.clonedir)
return None
os.chdir(ud.clonedir)
if not self._contains_ref(rev, d):
self.download(None, ud, d)
output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
os.chdir(cwd)
buildindex = "%s" % output.split()[0]
logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
return buildindex
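A stand-alone sketch of the counting step only (the clone directory and revision are assumed placeholders; the method above additionally handles cloning and fetching first):

import subprocess

clonedir = "/path/to/git2/clonedir"  # assumed placeholder; normally ud.clonedir
rev = "HEAD"                         # assumed; normally the revision being indexed
out = subprocess.check_output(
    "git rev-list %s -- | wc -l" % rev, shell=True, cwd=clonedir)
buildindex = int(out.strip())
print(buildindex)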
def checkstatus(self, uri, ud, d):
fetchcmd = "%s ls-remote %s" % (ud.basecmd, uri)
try:


@@ -1,78 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git submodules implementation
"""
# Copyright (C) 2013 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitSM(Git):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['gitsm']
def uses_submodules(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
return False
def update_submodules(self, u, ud, d):
# We have to convert bare -> full repo, do the submodule bit, then convert back
tmpclonedir = ud.clonedir + ".tmp"
gitdir = tmpclonedir + os.sep + ".git"
bb.utils.remove(tmpclonedir, True)
os.mkdir(tmpclonedir)
os.rename(ud.clonedir, gitdir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
os.chdir(tmpclonedir)
runfetchcmd("git reset --hard", d)
runfetchcmd("git submodule init", d)
runfetchcmd("git submodule update", d)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
def download(self, loc, ud, d):
Git.download(self, loc, ud, d)
os.chdir(ud.clonedir)
submodules = self.uses_submodules(ud, d)
if submodules:
self.update_submodules(loc, ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
submodules = self.uses_submodules(ud, d)
if submodules:
runfetchcmd("cp -r " + ud.clonedir + "/modules " + ud.destdir + "/.git/", d)
runfetchcmd("git submodule init", d)
runfetchcmd("git submodule update", d)


@@ -82,7 +82,7 @@ class Hg(FetchMethod):
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('protocol', 'http')
proto = ud.parm.get('proto', 'http')
host = ud.host
if proto == "file":
@@ -92,21 +92,13 @@ class Hg(FetchMethod):
if not ud.user:
hgroot = host + ud.path
else:
if ud.pswd:
hgroot = ud.user + ":" + ud.pswd + "@" + host + ud.path
else:
hgroot = ud.user + "@" + host + ud.path
hgroot = ud.user + "@" + host + ud.path
if command == "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = [];
# Don't specify revision for the fetch; clone the entire repo.
# This avoids an issue if the specified revision is a tag, because
# the tag actually exists in the specified revision + 1, so it won't
# be available when used in any successive commands.
if ud.revision and command != "fetch":
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
@@ -115,10 +107,7 @@ class Hg(FetchMethod):
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (basecmd, ud.user, ud.pswd, proto)
else:
cmd = "%s pull" % (basecmd)
cmd = "%s pull" % (basecmd)
elif command == "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:


@@ -26,12 +26,10 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import urllib
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
from bb.fetch2 import FetchMethod
class Local(FetchMethod):
def supports(self, url, urldata, d):
@@ -42,37 +40,27 @@ class Local(FetchMethod):
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
return
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
path = urldata.decodedurl
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
logger.debug(2, "Searching for %s in paths: \n%s" % (path, "\n ".join(filespath.split(":"))))
newpath = bb.utils.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
logger.debug(2, "Searching for %s in path: %s" % (path, filesdir))
newpath = os.path.join(filesdir, path)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath = bb.utils.which(filespath, ".")
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return newpath
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR", True), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
return dldirfile
if not os.path.exists(newpath) and path.find("*") == -1:
dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
return dldirfile
return newpath
def need_update(self, url, ud, d):
@@ -85,20 +73,7 @@ class Local(FetchMethod):
def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
if self.supports_checksum(urldata) and not os.path.exists(urldata.localpath):
locations = []
filespath = data.getVar('FILESPATH', d, True)
if filespath:
locations = filespath.split(":")
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
locations.append(filesdir)
locations.append(d.getVar("DL_DIR", True))
msg = "Unable to find file " + url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True
return 1
def checkstatus(self, url, urldata, d):
"""


@@ -57,7 +57,7 @@ class Osc(FetchMethod):
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('protocol', 'ocs')
proto = ud.parm.get('proto', 'ocs')
options = []


@@ -27,7 +27,6 @@ BitBake build tools.
from future_builtins import zip
import os
import subprocess
import logging
import bb
from bb import data
@@ -91,8 +90,8 @@ class Perforce(FetchMethod):
p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.strip()
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
@@ -112,7 +111,7 @@ class Perforce(FetchMethod):
base = path
which = path.find('/...')
if which != -1:
base = path[:which-1]
base = path[:which]
base = self._strip_leading_slashes(base)
@@ -155,8 +154,8 @@ class Perforce(FetchMethod):
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
@@ -169,8 +168,7 @@ class Perforce(FetchMethod):
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
p4file = [f.rstrip() for f in p4file.splitlines()]
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
@@ -186,7 +184,7 @@ class Perforce(FetchMethod):
dest = list[0][len(path)+1:]
where = dest.find("#")
subprocess.call("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]), shell=True)
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
count = count + 1
if count == 0:


@@ -1,129 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake SFTP Fetch implementation
Class for fetching files via SFTP. It tries to adhere to the (now
expired) IETF Internet Draft for "Uniform Resource Identifier (URI)
Scheme for Secure File Transfer Protocol (SFTP) and Secure Shell
(SSH)" (SECSH URI).
It uses SFTP (as to adhere to the SECSH URI specification). It only
supports key based authentication, not password. This class, unlike
the SSH fetcher, does not support fetching a directory tree from the
remote.
http://tools.ietf.org/html/draft-ietf-secsh-scp-sftp-ssh-uri-04
https://www.iana.org/assignments/uri-schemes/prov/sftp
https://tools.ietf.org/html/draft-ietf-secsh-filexfer-13
Please note that '/' is used as the host path separator, and not ":"
as you may be used to from the scp/sftp commands. You can use a
~ (tilde) to specify a path relative to your home directory.
(The /~user/ syntax, for specifying a path relative to another
user's home directory, is not supported.) Note that the tilde must
still follow the host path separator ("/"). See examples below.
Example SRC_URIs:
SRC_URI = "sftp://host.example.com/dir/path.file.txt"
A path relative to your home directory.
SRC_URI = "sftp://host.example.com/~/dir/path.file.txt"
You can also specify a username (specifying a password in the
URI is not supported, use SSH keys to authenticate):
SRC_URI = "sftp://user@host.example.com/dir/path.file.txt"
"""
# Copyright (C) 2013, Olof Johansson <olof.johansson@axis.com>
#
# Based in part on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib
import commands
from bb import data
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
class SFTP(FetchMethod):
"""Class to fetch urls via 'sftp'"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with sftp.
"""
return ud.type in ['sftp']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'protocol' in ud.parm and ud.parm['protocol'] == 'git':
raise bb.fetch2.ParameterError(
"Invalid protocol - if you wish to fetch from a " +
"git repository using ssh, you need to use the " +
"git:// prefix with protocol=ssh", ud.url)
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, uri, ud, d):
"""Fetch urls"""
urlo = URI(uri)
basecmd = 'sftp -oPasswordAuthentication=no'
port = ''
if urlo.port:
port = '-P %d' % urlo.port
urlo.port = None
dldir = data.getVar('DL_DIR', d, True)
lpath = os.path.join(dldir, ud.localfile)
user = ''
if urlo.userinfo:
user = urlo.userinfo + '@'
path = urlo.path
# Support URIs relative to the user's home directory, with
# the tilde syntax. (E.g. <sftp://example.com/~/foo.diff>).
if path[:3] == '/~/':
path = path[3:]
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, commands.mkarg(remote),
commands.mkarg(lpath))
bb.fetch2.check_network_access(d, cmd, uri)
runfetchcmd(cmd, d)
return True


@@ -10,12 +10,6 @@ IETF secsh internet draft:
Currently does not support the sftp parameters, as this uses scp
Also does not support the 'fingerprint' connection parameter.
Please note that '/' is used as the host/path separator, not ':' as you may
be used to; also '~' can be used to specify the user's HOME, but again only after '/'
Example SRC_URI:
SRC_URI = "ssh://user@host.example.com/dir/path/file.txt"
SRC_URI = "ssh://user@host.example.com/~/file.txt"
'''
# Copyright (C) 2006 OpenedHand Ltd.
@@ -75,22 +69,15 @@ class SSH(FetchMethod):
def supports(self, url, urldata, d):
return __pattern__.match(url) != None
def supports_checksum(self, urldata):
return False
def urldata_init(self, urldata, d):
if 'protocol' in urldata.parm and urldata.parm['protocol'] == 'git':
raise bb.fetch2.ParameterError(
"Invalid protocol - if you wish to fetch from a git " +
"repository using ssh, you need to use " +
"git:// prefix with protocol=ssh", urldata.url)
def localpath(self, url, urldata, d):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR', True), os.path.basename(path))
lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
return lpath
def download(self, url, urldata, d):
dldir = d.getVar('DL_DIR', True)
dldir = data.getVar('DL_DIR', d, True)
m = __pattern__.match(url)
path = m.group('path')
@@ -99,10 +86,16 @@ class SSH(FetchMethod):
user = m.group('user')
password = m.group('pass')
ldir = os.path.join(dldir, host)
lpath = os.path.join(ldir, os.path.basename(path))
if not os.path.exists(ldir):
os.makedirs(ldir)
if port:
portarg = '-P %s' % port
port = '-P %s' % port
else:
portarg = ''
port = ''
if user:
fr = user
@@ -116,9 +109,9 @@ class SSH(FetchMethod):
import commands
cmd = 'scp -B -r %s %s %s/' % (
portarg,
port,
commands.mkarg(fr),
commands.mkarg(dldir)
commands.mkarg(ldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)


@@ -77,8 +77,8 @@ class Svk(FetchMethod):
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error()
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)


@@ -27,7 +27,6 @@ import os
import sys
import logging
import bb
import re
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -50,8 +49,6 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.basecmd = d.getVar('FETCHCMD_svn', True)
ud.module = ud.parm["module"]
# Create paths to svn checkouts
@@ -72,7 +69,9 @@ class Svn(FetchMethod):
command is "fetch", "update", "info"
"""
proto = ud.parm.get('protocol', 'svn')
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
@@ -89,9 +88,7 @@ class Svn(FetchMethod):
options.append("--password %s" % ud.pswd)
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
if ud.revision:
@@ -99,9 +96,9 @@ class Svn(FetchMethod):
suffix = "@%s" % (ud.revision)
if command == "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (ud.basecmd, " ".join(options))
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
@@ -120,11 +117,6 @@ class Svn(FetchMethod):
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case its an older working format
try:
runfetchcmd(ud.basecmd + " upgrade", d)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d)
@@ -168,13 +160,14 @@ class Svn(FetchMethod):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"))
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "info"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "log1"), d, True)
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
# skip the first line, as per output of svn log
# then we expect the revision on the 2nd line
revision = re.search('^r([0-9]*)', output.splitlines()[1]).group(1)
revision = None
for line in output.splitlines():
if "Last Changed Rev" in line:
revision = line.split(":")[1].strip()
return revision


@@ -32,6 +32,8 @@ import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import encodeurl
from bb.fetch2 import decodeurl
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
@@ -43,54 +45,47 @@ class Wget(FetchMethod):
"""
return ud.type in ['http', 'https', 'ftp']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'protocol' in ud.parm:
if ud.parm['protocol'] == 'git':
raise bb.fetch2.ParameterError("Invalid protocol - if you wish to fetch from a git repository using http, you need to instead use the git:// prefix with protocol=http", ud.url)
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, uri, ud, d, checkonly = False):
"""Fetch urls"""
basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
def fetch_uri(uri, ud, d):
if checkonly:
fetchcmd = data.getVar("CHECKCOMMAND", d, True)
elif os.path.exists(ud.localpath):
# file exists, but we didnt complete it.. trying again..
fetchcmd = data.getVar("RESUMECOMMAND", d, True)
else:
fetchcmd = data.getVar("FETCHCOMMAND", d, True)
if not checkonly and 'downloadfilename' in ud.parm:
dldir = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
basecmd += " -O " + dldir + os.sep + ud.localfile
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
if checkonly:
fetchcmd = d.getVar("CHECKCOMMAND_wget", True) or d.expand(basecmd + " --spider '${URI}'")
elif os.path.exists(ud.localpath):
# file exists, but we didnt complete it.. trying again..
fetchcmd = d.getVar("RESUMECOMMAND_wget", True) or d.expand(basecmd + " -c -P ${DL_DIR} '${URI}'")
else:
fetchcmd = d.getVar("FETCHCOMMAND_wget", True) or d.expand(basecmd + " -P ${DL_DIR} '${URI}'")
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
uri = uri.split(";")[0]
# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
fetch_uri(uri, ud, localdata)
return True
def checkstatus(self, uri, ud, d):


@@ -17,7 +17,26 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
What is a method pool?
BitBake has a global method scope where .bb, .inc and .bbclass
files can install methods. These methods are parsed from strings.
To avoid recompiling and executing these string we introduce
a method pool to do this task.
This pool will be used to compile and execute the functions. It
will be smart enough to
"""
from bb.utils import better_compile, better_exec
from bb import error
# A dict of modules we have handled
# it is the number of .bbclasses + x in size
_parsed_methods = { }
_parsed_fns = { }
def insert_method(modulename, code, fn):
"""
@@ -26,3 +45,40 @@ def insert_method(modulename, code, fn):
"""
comp = better_compile(code, modulename, fn )
better_exec(comp, None, code, fn)
# now some instrumentation
code = comp.co_names
for name in code:
if name in ['None', 'False']:
continue
elif name in _parsed_fns and not _parsed_fns[name] == modulename:
error( "Error Method already seen: %s in' %s' now in '%s'" % (name, _parsed_fns[name], modulename))
else:
_parsed_fns[name] = modulename
def check_insert_method(modulename, code, fn):
"""
Add the code if it wasn't added before. The module
name will be used for that
Variables:
@modulename a short name e.g. base.bbclass
@code The actual python code
@fn The filename from the outer file
"""
if not modulename in _parsed_methods:
return insert_method(modulename, code, fn)
_parsed_methods[modulename] = 1
def parsed_module(modulename):
"""
Inform me file xyz was parsed
"""
return modulename in _parsed_methods
def get_parsed_dict():
"""
shortcut
"""
return _parsed_methods
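A minimal, self-contained sketch of the memoization idea described above (names are stand-ins, not the BitBake API):

_compiled = {}

def compile_once(modulename, code, filename):
    # Compile each named snippet only once; later calls return the cached code object.
    if modulename not in _compiled:
        _compiled[modulename] = compile(code, filename, 'exec')
    return _compiled[modulename]

namespace = {}
exec(compile_once("base.bbclass", "def greet():\n    return 'hi'\n", "base.bbclass"), namespace)
print(namespace['greet']())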


@@ -107,7 +107,7 @@ def getDiskData(BBDirs, configuration):
printErr("Invalid disk space value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(3))
return None
else:
# None means that it is not specified
# 0 means that it is not specified
minSpace = None
minInode = pathSpaceInodeRe.group(4)
@@ -117,7 +117,7 @@ def getDiskData(BBDirs, configuration):
printErr("Invalid inode value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(4))
return None
else:
# None means that it is not specified
# 0 means that it is not specified
minInode = None
if minSpace is None and minInode is None:
@@ -127,9 +127,8 @@ def getDiskData(BBDirs, configuration):
# DL_DIR may not exist at the very beginning
if not os.path.exists(path):
bb.utils.mkdirhier(path)
dev = getMountedDev(path)
# Use path/action as the key
devDict[os.path.join(path, action)] = [dev, minSpace, minInode]
mountedDev = getMountedDev(path)
devDict[mountedDev] = action, path, minSpace, minInode
return devDict
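For reference, a simplified sketch of the BB_DISKMON_DIRS format that getDiskData() parses (the value and the regular expression here are illustrative, not the exact ones used above):

import re

# Entries are space separated "action,path,minimum_free_space,minimum_free_inodes";
# either threshold may be left empty.
bb_diskmon_dirs = "STOPTASKS,/build/tmp,1G,100K ABORT,/build/downloads,100M,1K"
entry_re = re.compile(r'^(WARN|STOPTASKS|ABORT),(.+),([^,]*),([^,]*)$')
for entry in bb_diskmon_dirs.split():
    m = entry_re.match(entry)
    if m:
        action, path, min_space, min_inodes = m.groups()
        print(action, path, min_space, min_inodes)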
@@ -177,7 +176,6 @@ class diskMonitor:
def __init__(self, configuration):
self.enableMonitor = False
self.configuration = configuration
BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
if BBDirs:
@@ -193,10 +191,10 @@ class diskMonitor:
# This is for STOPTASKS and ABORT, to avoid printing the message repeatedly
# while waiting for the tasks to finish
self.checked = {}
for k in self.devDict:
self.preFreeS[k] = 0
self.preFreeI[k] = 0
self.checked[k] = False
for dev in self.devDict:
self.preFreeS[dev] = 0
self.preFreeI[dev] = 0
self.checked[dev] = False
if self.spaceInterval is None and self.inodeInterval is None:
self.enableMonitor = False
@@ -205,61 +203,42 @@ class diskMonitor:
""" Take action for the monitor """
if self.enableMonitor:
for k in self.devDict:
path = os.path.dirname(k)
action = os.path.basename(k)
dev = self.devDict[k][0]
minSpace = self.devDict[k][1]
minInode = self.devDict[k][2]
st = os.statvfs(path)
for dev in self.devDict:
st = os.statvfs(self.devDict[dev][1])
# The free space, float point number
freeSpace = st.f_bavail * st.f_frsize
if minSpace and freeSpace < minSpace:
if self.devDict[dev][2] and freeSpace < self.devDict[dev][2]:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[k] = freeSpace
if self.preFreeS[dev] == 0 or self.preFreeS[dev] - freeSpace > self.spaceInterval and not self.checked[dev]:
logger.warn("The free space of %s is running low (%.3fGB left)" % (dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[dev] = freeSpace
if action == "STOPTASKS" and not self.checked[k]:
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
logger.error("No new tasks can be executed since the disk space monitor action is \"STOPTASKS\"!")
self.checked[k] = True
self.checked[dev] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
elif action == "ABORT" and not self.checked[k]:
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
self.checked[dev] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
# The free inodes, float point number
freeInode = st.f_favail
if minInode and freeInode < minInode:
# Some fs formats' (e.g., btrfs) statvfs.f_files (inodes) is
# zero, this is a feature of the fs, we disable the inode
# checking for such a fs.
if st.f_files == 0:
logger.warn("Inode check for %s is unavailable, will remove it from disk monitor" % path)
self.devDict[k][2] = None
continue
if self.devDict[dev][3] and freeInode < self.devDict[dev][3]:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
(path, dev, freeInode / 1024.0))
self.preFreeI[k] = freeInode
if self.preFreeI[dev] == 0 or self.preFreeI[dev] - freeInode > self.inodeInterval and not self.checked[dev]:
logger.warn("The free inode of %s is running low (%.3fK left)" % (dev, freeInode / 1024.0))
self.preFreeI[dev] = freeInode
if action == "STOPTASKS" and not self.checked[k]:
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
logger.error("No new tasks can be executed since the disk space monitor action is \"STOPTASKS\"!")
self.checked[k] = True
self.checked[dev] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
elif action == "ABORT" and not self.checked[k]:
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[k] = True
self.checked[dev] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
return


@@ -23,7 +23,6 @@ Message handling infrastructure for bitbake
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import copy
import logging
import collections
from itertools import groupby
@@ -56,25 +55,6 @@ class BBLogFormatter(logging.Formatter):
CRITICAL: 'ERROR',
}
color_enabled = False
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(29,38)
COLORS = {
DEBUG3 : CYAN,
DEBUG2 : CYAN,
DEBUG : CYAN,
VERBOSE : BASECOLOR,
NOTE : BASECOLOR,
PLAIN : BASECOLOR,
WARNING : YELLOW,
ERROR : RED,
CRITICAL: RED,
}
BLD = '\033[1;%dm'
STD = '\033[%dm'
RST = '\033[0m'
def getLevelName(self, levelno):
try:
return self.levelnames[levelno]
@@ -87,8 +67,6 @@ class BBLogFormatter(logging.Formatter):
if record.levelno == self.PLAIN:
msg = record.getMessage()
else:
if self.color_enabled:
record = self.colorize(record)
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_info'):
@@ -97,17 +75,6 @@ class BBLogFormatter(logging.Formatter):
msg += '\n' + ''.join(formatted)
return msg
def colorize(self, record):
color = self.COLORS[record.levelno]
if self.color_enabled and color is not None:
record = copy.copy(record)
record.levelname = "".join([self.BLD % color, record.levelname, self.RST])
record.msg = "".join([self.STD % color, record.msg, self.RST])
return record
def enable_color(self):
self.color_enabled = True
class BBLogFilter(object):
def __init__(self, handler, level, debug_domains):
self.stdlevel = level


@@ -73,7 +73,8 @@ def update_mtime(f):
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = (d.getVar('__depends') or []) + [(f, cached_mtime(f))]
deps = d.getVar('__depends') or set()
deps.update([(f, cached_mtime(f))])
d.setVar('__depends', deps)
def supports(fn, data):
@@ -87,8 +88,7 @@ def handle(fn, data, include = 0):
"""Call the handler that is appropriate for this file"""
for h in handlers:
if h['supports'](fn, data):
with data.inchistory.include(fn):
return h['handle'](fn, data, include)
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
def init(fn, data):
@@ -134,8 +134,8 @@ def vars_from_file(mypkg, d):
def get_file_depends(d):
'''Return the dependent files'''
dep_files = []
depends = d.getVar('__base_depends', True) or []
depends = depends + (d.getVar('__depends', True) or [])
depends = d.getVar('__depends', True) or set()
depends = depends.union(d.getVar('__base_depends', True) or set())
for (fn, _) in depends:
dep_files.append(os.path.abspath(fn))
return " ".join(dep_files)


@@ -31,6 +31,7 @@ import itertools
from bb import methodpool
from bb.parse import logger
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
@@ -68,7 +69,7 @@ class ExportNode(AstNode):
self.var = var
def eval(self, data):
data.setVarFlag(self.var, "export", 1, op = 'exported')
data.setVarFlag(self.var, "export", 1)
class DataNode(AstNode):
"""
@@ -90,53 +91,33 @@ class DataNode(AstNode):
def eval(self, data):
groupd = self.groupd
key = groupd["var"]
loginfo = {
'variable': key,
'file': self.filename,
'line': self.lineno,
}
if "exp" in groupd and groupd["exp"] != None:
data.setVarFlag(key, "export", 1, op = 'exported', **loginfo)
op = "set"
data.setVarFlag(key, "export", 1)
if "ques" in groupd and groupd["ques"] != None:
val = self.getFunc(key, data)
op = "set?"
if val == None:
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
op = "immediate"
val = e.expand(groupd["value"], key + "[:=]")
elif "append" in groupd and groupd["append"] != None:
op = "append"
val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
op = "prepend"
val = "%s %s" % (groupd["value"], (self.getFunc(key, data) or ""))
elif "postdot" in groupd and groupd["postdot"] != None:
op = "postdot"
val = "%s%s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "predot" in groupd and groupd["predot"] != None:
op = "predot"
val = "%s%s" % (groupd["value"], (self.getFunc(key, data) or ""))
else:
val = groupd["value"]
flag = None
if 'flag' in groupd and groupd['flag'] != None:
flag = groupd['flag']
data.setVarFlag(key, groupd['flag'], val)
elif groupd["lazyques"]:
flag = "defaultval"
loginfo['op'] = op
loginfo['detail'] = groupd["value"]
if flag:
data.setVarFlag(key, flag, val, **loginfo)
data.setVarFlag(key, "defaultval", val)
else:
data.setVar(key, val, **loginfo)
data.setVar(key, val)
class MethodNode(AstNode):
def __init__(self, filename, lineno, func_name, body):
@@ -145,24 +126,23 @@ class MethodNode(AstNode):
self.body = body
def eval(self, data):
text = '\n'.join(self.body)
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
text = "def %s(d):\n" % (funcname) + text
bb.methodpool.insert_method(funcname, text, self.filename)
if not funcname in bb.methodpool._parsed_fns:
text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
data.setVar(funcname, text)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text)
data.setVar(self.func_name, '\n'.join(self.body))
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
def __init__(self, filename, lineno, function, define, body):
AstNode.__init__(self, filename, lineno)
self.function = function
self.modulename = modulename
self.define = define
self.body = body
def eval(self, data):
@@ -170,7 +150,8 @@ class PythonMethodNode(AstNode):
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
bb.methodpool.insert_method(self.modulename, text, self.filename)
if not bb.methodpool.parsed_module(self.define):
bb.methodpool.insert_method(self.define, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text)
@@ -197,35 +178,44 @@ class MethodFlagsNode(AstNode):
data.delVarFlag(self.key, "fakeroot")
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classname):
def __init__(self, filename, lineno, fns, classes):
AstNode.__init__(self, filename, lineno)
self.n = fns.split()
self.classname = classname
self.classes = classes
def eval(self, data):
for f in self.n:
allvars = []
allvars.append(f)
allvars.append(self.classes[-1] + "_" + f)
for func in self.n:
calledfunc = self.classname + "_" + func
vars = [[ allvars[0], allvars[1] ]]
if len(self.classes) > 1 and self.classes[-2] is not None:
allvars.append(self.classes[-2] + "_" + f)
vars = []
vars.append([allvars[2], allvars[1]])
vars.append([allvars[0], allvars[2]])
if data.getVar(func) and not data.getVarFlag(func, 'export_func'):
continue
for (var, calledvar) in vars:
if data.getVar(var) and not data.getVarFlag(var, 'export_func'):
continue
if data.getVar(func):
data.setVarFlag(func, 'python', None)
data.setVarFlag(func, 'func', None)
if data.getVar(var):
data.setVarFlag(var, 'python', None)
data.setVarFlag(var, 'func', None)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag))
for flag in [ "func", "python" ]:
if data.getVarFlag(calledvar, flag):
data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(var, flag):
data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag))
if data.getVarFlag(calledfunc, "python"):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n")
else:
data.setVar(func, " " + calledfunc + "\n")
data.setVarFlag(func, 'export_func', '1')
if data.getVarFlag(calledvar, "python"):
data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n")
else:
data.setVar(var, "\t" + calledvar + "\n")
data.setVarFlag(var, 'export_func', '1')
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
@@ -291,14 +281,14 @@ def handleData(statements, filename, lineno, groupd):
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, modulename, body):
statements.append(PythonMethodNode(filename, lineno, funcname, modulename, body))
def handlePythonMethod(statements, filename, lineno, funcname, root, body):
statements.append(PythonMethodNode(filename, lineno, funcname, root, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, filename, lineno, m, classname):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classname))
def handleExportFuncs(statements, filename, lineno, m, classes):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classes))
def handleAddTask(statements, filename, lineno, m):
func = m.group("func")
@@ -330,7 +320,7 @@ def finalize(fn, d, variant = None):
code = []
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.utils.simple_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
tasklist = d.getVar('__BBTASKS') or []


@@ -51,6 +51,7 @@ __infunc__ = ""
__inpython__ = False
__body__ = []
__classname__ = ""
classes = [ None, ]
cached_statements = {}
@@ -68,25 +69,18 @@ def supports(fn, d):
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache') or []
__inherit_cache = data.getVar('__inherit_cache', d) or []
files = d.expand(files).split()
for file in files:
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
dname = os.path.dirname(fn)
bbpath = "%s:%s" % (dname, d.getVar("BBPATH", True))
abs_fn = bb.utils.which(bbpath, file)
if abs_fn:
file = abs_fn
if not file in __inherit_cache:
logger.log(logging.DEBUG -1, "BB %s:%d: inheriting %s", fn, lineno, file)
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache') or []
__inherit_cache = data.getVar('__inherit_cache', d) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements
@@ -113,7 +107,7 @@ def get_statements(filename, absolute_filename, base_name):
return statements
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
__body__ = []
__infunc__ = ""
__classname__ = ""
@@ -131,13 +125,14 @@ def handle(fn, d, include):
if ext == ".bbclass":
__classname__ = root
__inherit_cache = d.getVar('__inherit_cache') or []
classes.append(__classname__)
__inherit_cache = data.getVar('__inherit_cache', d) or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
d.setVar('__inherit_cache', __inherit_cache)
data.setVar('__inherit_cache', __inherit_cache, d)
if include != 0:
oldfile = d.getVar('FILE')
oldfile = data.getVar('FILE', d)
else:
oldfile = None
@@ -151,25 +146,27 @@ def handle(fn, d, include):
# DONE WITH PARSING... time to evaluate
if ext != ".bbclass":
d.setVar('FILE', abs_fn)
data.setVar('FILE', abs_fn, d)
try:
statements.eval(d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
statements.eval(d)
if ext == ".bbclass":
classes.remove(__classname__)
else:
if include == 0:
return { "" : d }
if ext != ".bbclass" and include == 0:
return ast.multi_finalize(fn, d)
return ast.multi_finalize(fn, d)
if oldfile:
d.setVar("FILE", oldfile)
# we have parsed the bb class now
if ext == ".bbclass" or ext == ".inc":
bb.methodpool.get_parsed_dict()[base_name] = 1
return d
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, classes, bb, __residue__
if __infunc__:
if s == '}':
__body__.append('')
@@ -196,10 +193,7 @@ def feeder(lineno, s, fn, root, statements):
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
bb.error("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if s and s[-1] == '\\':
__residue__.append(s[:-1])
@@ -231,7 +225,7 @@ def feeder(lineno, s, fn, root, statements):
m = __export_func_regexp__.match(s)
if m:
ast.handleExportFuncs(statements, fn, lineno, m, __classname__)
ast.handleExportFuncs(statements, fn, lineno, m, classes)
return
m = __addtask_regexp__.match(s)


@@ -29,30 +29,7 @@ import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s*)?
(?P<var>[a-zA-Z0-9\-~_+.${}/]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
(?P<colon>:=) |
(?P<lazyques>\?\?=) |
(?P<ques>\?=) |
(?P<append>\+=) |
(?P<prepend>=\+) |
(?P<predot>=\.) |
(?P<postdot>\.=) |
=
) \s*
(?!'[^']*'[^']*'$)
(?!\"[^\"]*\"[^\"]*\"$)
(?P<apo>['\"])
(?P<value>.*)
(?P=apo)
$
""", re.X)
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
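As a rough illustration of what an assignment regexp of this shape matches, here is a simplified stand-in (not the exact __config_regexp__ above) applied to a few configuration lines:

    import re

    # Simplified: optional export prefix, variable name, optional [flag], one BitBake
    # assignment operator (:=, ??=, ?=, +=, =+, =., .=, =), then a quoted value.
    assign_re = re.compile(
        r"(?P<exp>export\s*)?"
        r"(?P<var>[a-zA-Z0-9\-_+.${}/]+)"
        r"(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?"
        r"\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|"
        r"(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*"
        r"(?P<apo>['\"])(?P<value>.*)(?P=apo)$")

    for line in ['PN = "foo"', 'SRC_URI += "file://fix.patch"', 'FOO[doc] ?= "help text"']:
        m = assign_re.match(line)
        if m:
            print("%s flag=%s value=%s" % (m.group('var'), m.group('flag'), m.group('value')))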
@@ -121,22 +98,15 @@ def handle(fn, data, include):
while True:
lineno = lineno + 1
s = f.readline()
if not s:
break
if not s: break
w = s.strip()
# skip empty lines
if not w:
continue
if not w: continue # skip empty lines
s = s.rstrip()
if s[0] == '#': continue # skip comments
while s[-1] == '\\':
s2 = f.readline().strip()
lineno = lineno + 1
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
s = s[:-1] + s2
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, fn, statements)
# DONE WITH PARSING... time to evaluate


@@ -125,11 +125,6 @@ class SQLTable(collections.MutableMapping):
return len(self) < len(other)
def get_by_pattern(self, pattern):
data = self._execute("SELECT * FROM %s WHERE key LIKE ?;" %
self.table, [pattern])
return [row[1] for row in data]
def values(self):
return list(self.itervalues())


@@ -1,8 +1,6 @@
import logging
import signal
import subprocess
import errno
import select
logger = logging.getLogger('BitBake.Process')
@@ -70,38 +68,20 @@ def _logged_communicate(pipe, log, input):
pipe.stdin.write(input)
pipe.stdin.close()
bufsize = 512
outdata, errdata = [], []
rin = []
while pipe.poll() is None:
if pipe.stdout is not None:
data = pipe.stdout.read(bufsize)
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stdout is not None:
bb.utils.nonblockingfd(pipe.stdout.fileno())
rin.append(pipe.stdout)
if pipe.stderr is not None:
bb.utils.nonblockingfd(pipe.stderr.fileno())
rin.append(pipe.stderr)
try:
while pipe.poll() is None:
rlist = rin
try:
r,w,e = select.select (rlist, [], [])
except OSError, e:
if e.errno != errno.EINTR:
raise
if pipe.stdout in r:
data = pipe.stdout.read()
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr in r:
data = pipe.stderr.read()
if data is not None:
errdata.append(data)
log.write(data)
finally:
log.flush()
if pipe.stderr is not None:
data = pipe.stderr.read(bufsize)
if data is not None:
errdata.append(data)
log.write(data)
return ''.join(outdata), ''.join(errdata)
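In outline, the select()-based side of this hunk drains both pipes without blocking on either one; a self-contained sketch of that pattern (a generic helper, not the BitBake code, and assuming text-mode pipes):

    import errno
    import fcntl
    import os
    import select
    import subprocess

    def set_nonblocking(fd):
        # Same effect as bb.utils.nonblockingfd(): switch the descriptor to O_NONBLOCK.
        fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

    def logged_communicate(proc, log):
        # Mirror a child's stdout/stderr into log while also collecting them.
        rin = []
        for pipe in (proc.stdout, proc.stderr):
            if pipe is not None:
                set_nonblocking(pipe.fileno())
                rin.append(pipe)
        outdata, errdata = [], []
        try:
            while proc.poll() is None:
                try:
                    r, _, _ = select.select(rin, [], [])
                except (OSError, select.error) as e:
                    if e.args[0] != errno.EINTR:
                        raise
                    continue
                for pipe, sink in ((proc.stdout, outdata), (proc.stderr, errdata)):
                    if pipe in r:
                        data = pipe.read()
                        if data:
                            sink.append(data)
                            log.write(data)
        finally:
            log.flush()
        return ''.join(outdata), ''.join(errdata)

    # proc = subprocess.Popen(["make"], stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    #                         universal_newlines=True)
    # out, err = logged_communicate(proc, open("build.log", "w"))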
def run(cmd, input=None, log=None, **options):


@@ -35,8 +35,6 @@ class NoProvider(bb.BBHandledException):
class NoRProvider(bb.BBHandledException):
"""Exception raised when no provider of a runtime dependency can be found"""
class MultipleRProvider(bb.BBHandledException):
"""Exception raised when multiple providers of a runtime dependency can be found"""
def findProviders(cfgData, dataCache, pkg_pn = None):
"""
@@ -130,7 +128,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
if m.group(1):
preferred_e = m.group(1)[:-1]
preferred_e = int(m.group(1)[:-1])
else:
preferred_e = None
preferred_v = m.group(2)


@@ -375,8 +375,9 @@ class RunQueueData:
"""
runq_build = []
recursivetasks = {}
recursivetasksselfref = set()
recursive_tdepends = {}
runq_recrdepends = []
tdepends_fnid = {}
taskData = self.taskData
@@ -405,10 +406,11 @@ class RunQueueData:
depdata = taskData.build_targets[depid][0]
if depdata is None:
continue
dep = taskData.fn_index[depdata]
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depdata, taskname)
taskid = taskData.gettask_id(dep, taskname, False)
if taskid is not None:
depends.add(taskid)
depends.append(taskid)
def add_runtime_dependencies(depids, tasknames, depends):
for depid in depids:
@@ -417,20 +419,15 @@ class RunQueueData:
depdata = taskData.run_targets[depid][0]
if depdata is None:
continue
dep = taskData.fn_index[depdata]
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depdata, taskname)
taskid = taskData.gettask_id(dep, taskname, False)
if taskid is not None:
depends.add(taskid)
def add_resolved_dependencies(depids, tasknames, depends):
for depid in depids:
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depid, taskname)
if taskid is not None:
depends.add(taskid)
depends.append(taskid)
for task in xrange(len(taskData.tasks_name)):
depends = set()
depends = []
recrdepends = []
fnid = taskData.tasks_fnid[task]
fn = taskData.fn_index[fnid]
task_deps = self.dataCache.task_deps[fn]
@@ -442,7 +439,7 @@ class RunQueueData:
# Resolve task internal dependencies
#
# e.g. addtask before X after Y
depends = set(taskData.tasks_tdepends[task])
depends = taskData.tasks_tdepends[task]
# Resolve 'deptask' dependencies
#
@@ -457,91 +454,99 @@ class RunQueueData:
# e.g. do_sometask[rdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all RDEPENDS)
if 'rdeptask' in task_deps and taskData.tasks_name[task] in task_deps['rdeptask']:
tasknames = task_deps['rdeptask'][taskData.tasks_name[task]].split()
add_runtime_dependencies(taskData.rdepids[fnid], tasknames, depends)
taskname = task_deps['rdeptask'][taskData.tasks_name[task]]
add_runtime_dependencies(taskData.rdepids[fnid], [taskname], depends)
# Resolve inter-task dependencies
#
# e.g. do_sometask[depends] = "targetname:do_someothertask"
# (makes sure sometask runs after targetname's someothertask)
if fnid not in tdepends_fnid:
tdepends_fnid[fnid] = set()
idepends = taskData.tasks_idepends[task]
for (depid, idependtask) in idepends:
if depid in taskData.build_targets and not depid in taskData.failed_deps:
if depid in taskData.build_targets:
# Won't be in build_targets if ASSUME_PROVIDED
depdata = taskData.build_targets[depid][0]
if depdata is not None:
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
dep = taskData.fn_index[depdata]
taskid = taskData.gettask_id(dep, idependtask, False)
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s in %s depends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, taskData.fn_index[depdata]))
depends.add(taskid)
irdepends = taskData.tasks_irdepends[task]
for (depid, idependtask) in irdepends:
if depid in taskData.run_targets:
# Won't be in run_targets if ASSUME_PROVIDED
depdata = taskData.run_targets[depid][0]
if depdata is not None:
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s in %s rdepends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, taskData.fn_index[depdata]))
depends.add(taskid)
bb.msg.fatal("RunQueue", "Task %s in %s depends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, dep))
depends.append(taskid)
if depdata != fnid:
tdepends_fnid[fnid].add(taskid)
# Resolve recursive 'recrdeptask' dependencies (Part A)
# Resolve recursive 'recrdeptask' dependencies (A)
#
# e.g. do_sometask[recrdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
# We cover the recursive part of the dependencies below
if 'recrdeptask' in task_deps and taskData.tasks_name[task] in task_deps['recrdeptask']:
tasknames = task_deps['recrdeptask'][taskData.tasks_name[task]].split()
recursivetasks[task] = tasknames
add_build_dependencies(taskData.depids[fnid], tasknames, depends)
add_runtime_dependencies(taskData.rdepids[fnid], tasknames, depends)
if taskData.tasks_name[task] in tasknames:
recursivetasksselfref.add(task)
for taskname in task_deps['recrdeptask'][taskData.tasks_name[task]].split():
recrdepends.append(taskname)
add_build_dependencies(taskData.depids[fnid], [taskname], depends)
add_runtime_dependencies(taskData.rdepids[fnid], [taskname], depends)
# Remove all self references
if task in depends:
newdep = []
logger.debug(2, "Task %s (%s %s) contains self reference! %s", task, taskData.fn_index[taskData.tasks_fnid[task]], taskData.tasks_name[task], depends)
for dep in depends:
if task != dep:
newdep.append(dep)
depends = newdep
self.runq_fnid.append(taskData.tasks_fnid[task])
self.runq_task.append(taskData.tasks_name[task])
self.runq_depends.append(depends)
self.runq_depends.append(set(depends))
self.runq_revdeps.append(set())
self.runq_hash.append("")
runq_build.append(0)
runq_recrdepends.append(recrdepends)
# Resolve recursive 'recrdeptask' dependencies (Part B)
#
# Build a list of recursive cumulative dependencies for each fnid
# We do this by fnid, since if A depends on some task in B
# we're interested in later tasks B's fnid might have but B itself
# doesn't depend on
#
# Algorithm is O(tasks) + O(tasks)*O(fnids)
#
reccumdepends = {}
for task in xrange(len(self.runq_fnid)):
fnid = self.runq_fnid[task]
if fnid not in reccumdepends:
if fnid in tdepends_fnid:
reccumdepends[fnid] = tdepends_fnid[fnid]
else:
reccumdepends[fnid] = set()
reccumdepends[fnid].update(self.runq_depends[task])
for task in xrange(len(self.runq_fnid)):
taskfnid = self.runq_fnid[task]
for fnid in reccumdepends:
if task in reccumdepends[fnid]:
reccumdepends[fnid].add(task)
if taskfnid in reccumdepends:
reccumdepends[fnid].update(reccumdepends[taskfnid])
# Resolve recursive 'recrdeptask' dependencies (B)
#
# e.g. do_sometask[recrdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
# We need to do this separately since we need all of self.runq_depends to be complete before this is processed
extradeps = {}
for task in recursivetasks:
extradeps[task] = set(self.runq_depends[task])
tasknames = recursivetasks[task]
seendeps = set()
seenfnid = []
def generate_recdeps(t):
newdeps = set()
add_resolved_dependencies([taskData.tasks_fnid[t]], tasknames, newdeps)
extradeps[task].update(newdeps)
seendeps.add(t)
newdeps.add(t)
for i in newdeps:
for n in self.runq_depends[i]:
if n not in seendeps:
generate_recdeps(n)
generate_recdeps(task)
# Remove circular references so that do_a[recrdeptask] = "do_a do_b" can work
for task in recursivetasks:
extradeps[task].difference_update(recursivetasksselfref)
for task in xrange(len(taskData.tasks_name)):
# Add in extra dependencies
if task in extradeps:
self.runq_depends[task] = extradeps[task]
# Remove all self references
if task in self.runq_depends[task]:
logger.debug(2, "Task %s (%s %s) contains self reference! %s", task, taskData.fn_index[taskData.tasks_fnid[task]], taskData.tasks_name[task], self.runq_depends[task])
self.runq_depends[task].remove(task)
for task in xrange(len(self.runq_fnid)):
if len(runq_recrdepends[task]) > 0:
taskfnid = self.runq_fnid[task]
for dep in reccumdepends[taskfnid]:
# Ignore self references
if dep == task:
continue
for taskname in runq_recrdepends[task]:
if taskData.tasks_name[dep] == taskname:
self.runq_depends[task].add(dep)
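Both variants above amount to a transitive closure over the task dependency graph; stripped of the runqueue bookkeeping, the idea looks roughly like this (plain dict-of-sets graph, illustrative names):

    def transitive_deps(graph, start):
        # Every task reachable from start by following dependency edges, excluding start itself.
        seen = set()
        stack = [start]
        while stack:
            task = stack.pop()
            for dep in graph.get(task, ()):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        seen.discard(start)  # drop self references, as the runqueue code does
        return seen

    # transitive_deps({"a": {"b"}, "b": {"c"}, "c": set()}, "a") == {"b", "c"}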
# Step B - Mark all active tasks
#
@@ -692,36 +697,19 @@ class RunQueueData:
stampfnwhitelist.append(fn)
self.stampfnwhitelist = stampfnwhitelist
# Iterate over the task list looking for tasks with a 'setscene' function
# Interate over the task list looking for tasks with a 'setscene' function
self.runq_setscene = []
if not self.cooker.configuration.nosetscene:
for task in range(len(self.runq_fnid)):
setscene = taskData.gettask_id(self.taskData.fn_index[self.runq_fnid[task]], self.runq_task[task] + "_setscene", False)
if not setscene:
continue
self.runq_setscene.append(task)
def invalidate_task(fn, taskname, error_nostamp):
taskdep = self.dataCache.task_deps[fn]
if 'nostamp' in taskdep and taskname in taskdep['nostamp']:
if error_nostamp:
bb.fatal("Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
bb.debug(1, "Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
logger.verbose("Invalidate task %s, %s", taskname, fn)
bb.parse.siggen.invalidate_task(taskname, self.dataCache, fn)
for task in range(len(self.runq_fnid)):
setscene = taskData.gettask_id(self.taskData.fn_index[self.runq_fnid[task]], self.runq_task[task] + "_setscene", False)
if not setscene:
continue
self.runq_setscene.append(task)
# Invalidate task if force mode active
if self.cooker.configuration.force:
for (fn, target) in self.target_pairs:
invalidate_task(fn, target, False)
# Invalidate task if invalidate mode active
if self.cooker.configuration.invalidate_stamp:
for (fn, target) in self.target_pairs:
for st in self.cooker.configuration.invalidate_stamp.split(','):
invalidate_task(fn, "do_%s" % st, True)
logger.verbose("Invalidate task %s, %s", target, fn)
bb.parse.siggen.invalidate_task(target, self.dataCache, fn)
# Iterate over the task list and call into the siggen code
dealtwith = set()
@@ -785,7 +773,6 @@ class RunQueue:
self.stamppolicy = cfgData.getVar("BB_STAMP_POLICY", True) or "perfile"
self.hashvalidate = cfgData.getVar("BB_HASHCHECK_FUNCTION", True) or None
self.setsceneverify = cfgData.getVar("BB_SETSCENE_VERIFY_FUNCTION", True) or None
self.depvalidate = cfgData.getVar("BB_SETSCENE_DEPVALID", True) or None
self.state = runQueuePrepare
@@ -794,7 +781,101 @@ class RunQueue:
self.rqexe = None
def check_stamp_task(self, task, taskname = None, recurse = False, cache = None):
def check_stamps(self):
unchecked = {}
current = []
notcurrent = []
buildable = []
if self.stamppolicy == "perfile":
fulldeptree = False
else:
fulldeptree = True
stampwhitelist = []
if self.stamppolicy == "whitelist":
stampwhitelist = self.rqdata.stampfnwhitelist
for task in xrange(len(self.rqdata.runq_fnid)):
unchecked[task] = ""
if len(self.rqdata.runq_depends[task]) == 0:
buildable.append(task)
def check_buildable(self, task, buildable):
for revdep in self.rqdata.runq_revdeps[task]:
alldeps = 1
for dep in self.rqdata.runq_depends[revdep]:
if dep in unchecked:
alldeps = 0
if alldeps == 1:
if revdep in unchecked:
buildable.append(revdep)
for task in xrange(len(self.rqdata.runq_fnid)):
if task not in unchecked:
continue
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
# If the stamp is missing, it's not current
if not os.access(stampfile, os.F_OK):
del unchecked[task]
notcurrent.append(task)
check_buildable(self, task, buildable)
continue
# If it's a 'nostamp' task, it's not current
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'nostamp' in taskdep and task in taskdep['nostamp']:
del unchecked[task]
notcurrent.append(task)
check_buildable(self, task, buildable)
continue
while (len(buildable) > 0):
nextbuildable = []
for task in buildable:
if task in unchecked:
fn = self.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
iscurrent = True
t1 = os.stat(stampfile)[stat.ST_MTIME]
for dep in self.rqdata.runq_depends[task]:
if iscurrent:
fn2 = self.taskData.fn_index[self.rqdata.runq_fnid[dep]]
taskname2 = self.rqdata.runq_task[dep]
stampfile2 = bb.build.stampfile(taskname2, self.rqdata.dataCache, fn2)
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
if dep in notcurrent:
iscurrent = False
else:
t2 = os.stat(stampfile2)[stat.ST_MTIME]
if t1 < t2:
iscurrent = False
del unchecked[task]
if iscurrent:
current.append(task)
else:
notcurrent.append(task)
check_buildable(self, task, nextbuildable)
buildable = nextbuildable
#for task in range(len(self.runq_fnid)):
# fn = self.taskData.fn_index[self.runq_fnid[task]]
# taskname = self.runq_task[task]
# print "%s %s.%s" % (task, taskname, fn)
#print "Unchecked: %s" % unchecked
#print "Current: %s" % current
#print "Not current: %s" % notcurrent
if len(unchecked) > 0:
bb.msg.fatal("RunQueue", "check_stamps fatal internal error")
return current
def check_stamp_task(self, task, taskname = None, recurse = False):
def get_timestamp(f):
try:
if not os.access(f, os.F_OK):
@@ -830,9 +911,6 @@ class RunQueue:
if taskname != "do_setscene" and taskname.endswith("_setscene"):
return True
if cache is None:
cache = {}
iscurrent = True
t1 = get_timestamp(stampfile)
for dep in self.rqdata.runq_depends[task]:
@@ -853,18 +931,10 @@ class RunQueue:
logger.debug(2, 'Stampfile %s < %s', stampfile, stampfile2)
iscurrent = False
if recurse and iscurrent:
if dep in cache:
iscurrent = cache[dep]
if not iscurrent:
logger.debug(2, 'Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
else:
iscurrent = self.check_stamp_task(dep, recurse=True, cache=cache)
cache[dep] = iscurrent
if recurse:
cache[task] = iscurrent
iscurrent = self.check_stamp_task(dep, recurse=True)
return iscurrent
def _execute_runqueue(self):
def execute_runqueue(self):
"""
Run the tasks in a queue prepared by rqdata.prepare()
Upon failure, optionally try to recover the build using any alternate providers
@@ -928,19 +998,6 @@ class RunQueue:
# Loop
return retval
def execute_runqueue(self):
# Catch unexpected exceptions and ensure we exit when an error occurs, not loop.
try:
return self._execute_runqueue()
except bb.runqueue.TaskFailure:
raise
except SystemExit:
raise
except:
logger.error("An uncaught exception occured in runqueue, please see the failure below:")
self.state = runQueueComplete
raise
def finish_runqueue(self, now = False):
if not self.rqexe:
return
@@ -984,36 +1041,23 @@ class RunQueueExecute:
self.build_stamps = {}
self.failed_fnids = []
self.stampcache = {}
def runqueue_process_waitpid(self):
"""
Return None if there are no processes awaiting result collection, otherwise
collect the process exit codes and close the information pipe.
"""
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
result = os.waitpid(-1, os.WNOHANG)
if result[0] == 0 and result[1] == 0:
return None
if os.WIFEXITED(status):
status = os.WEXITSTATUS(status)
elif os.WIFSIGNALED(status):
# Per shell conventions for $?, when a process exits due to
# a signal, we return an exit code of 128 + SIGNUM
status = 128 + os.WTERMSIG(status)
task = self.build_pids[pid]
del self.build_pids[pid]
self.build_pipes[pid].close()
del self.build_pipes[pid]
# self.build_stamps[pid] may not exist when using a shared work directory.
if pid in self.build_stamps:
del self.build_stamps[pid]
if status != 0:
self.task_fail(task, status)
task = self.build_pids[result[0]]
del self.build_pids[result[0]]
self.build_pipes[result[0]].close()
del self.build_pipes[result[0]]
# self.build_stamps[result[0]] may not exist when using a shared work directory.
if result[0] in self.build_stamps.keys():
del self.build_stamps[result[0]]
if result[1] != 0:
self.task_fail(task, result[1]>>8)
else:
self.task_complete(task)
return True
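The status handling above follows the usual shell convention for $?; decoded on its own, it reads roughly as follows (hypothetical helper name):

    import os

    def decode_wait_status(status):
        # Turn an os.waitpid() status word into a shell-style exit code.
        if os.WIFEXITED(status):
            return os.WEXITSTATUS(status)
        if os.WIFSIGNALED(status):
            # 128 + signal number when the child was killed by a signal.
            return 128 + os.WTERMSIG(status)
        return None  # stopped or still running: no final exit code yet

    # pid, status = os.waitpid(-1, os.WNOHANG)
    # if pid != 0 and not os.WIFSTOPPED(status):
    #     exitcode = decode_wait_status(status)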
@@ -1120,6 +1164,8 @@ class RunQueueExecute:
os.umask(umask)
self.cooker.configuration.data.setVar("BB_WORKERCONTEXT", "1")
self.cooker.configuration.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY", self)
self.cooker.configuration.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2", fn)
bb.parse.siggen.set_taskdata(self.rqdata.hashes, self.rqdata.hash_deps)
ret = 0
try:
@@ -1149,8 +1195,7 @@ class RunQueueExecute:
os._exit(1)
try:
if not self.cooker.configuration.dry_run:
profile = self.cooker.configuration.profile
ret = bb.build.exec_task(fn, taskname, the_data, profile)
ret = bb.build.exec_task(fn, taskname, the_data)
os._exit(ret)
except:
os._exit(1)
@@ -1163,26 +1208,6 @@ class RunQueueExecute:
return pid, pipein, pipeout
def check_dependencies(self, task, taskdeps, setscene = False):
if not self.rq.depvalidate:
return False
taskdata = {}
taskdeps.add(task)
for dep in taskdeps:
if setscene:
depid = self.rqdata.runq_setscene[dep]
else:
depid = dep
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[depid]]
pn = self.rqdata.dataCache.pkg_fn[fn]
taskname = self.rqdata.runq_task[depid]
taskdata[dep] = [pn, taskname, fn]
call = self.rq.depvalidate + "(task, taskdata, notneeded, d)"
locs = { "task" : task, "taskdata" : taskdata, "notneeded" : self.scenequeue_notneeded, "d" : self.cooker.configuration.data }
valid = bb.utils.better_eval(call, locs)
return valid
class RunQueueExecuteDummy(RunQueueExecute):
def __init__(self, rq):
self.rq = rq
@@ -1198,8 +1223,6 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.stats = RunQueueStats(len(self.rqdata.runq_fnid))
self.stampcache = {}
# Mark initial buildable tasks
for task in xrange(self.stats.total):
self.runq_running.append(0)
@@ -1208,7 +1231,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.runq_buildable.append(1)
else:
self.runq_buildable.append(0)
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
self.rq.scenequeue_covered.add(task)
found = True
@@ -1219,39 +1242,26 @@ class RunQueueExecuteTasks(RunQueueExecute):
continue
logger.debug(1, 'Considering %s (%s): %s' % (task, self.rqdata.get_user_idstring(task), str(self.rqdata.runq_revdeps[task])))
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
found = True
self.rq.scenequeue_covered.add(task)
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
ok = True
for revdep in self.rqdata.runq_revdeps[task]:
if self.rqdata.runq_fnid[task] != self.rqdata.runq_fnid[revdep]:
logger.debug(1, 'Found "bad" dep %s (%s) for %s (%s)' % (revdep, self.rqdata.get_user_idstring(revdep), task, self.rqdata.get_user_idstring(task)))
ok = False
break
if ok:
found = True
self.rq.scenequeue_covered.add(task)
logger.debug(1, 'Skip list (pre setsceneverify) %s', sorted(self.rq.scenequeue_covered))
# Allow the metadata to elect for setscene tasks to run anyway
covered_remove = set()
if self.rq.setsceneverify:
invalidtasks = []
for task in xrange(len(self.rqdata.runq_task)):
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
continue
if self.rq.check_stamp_task(task, taskname + "_setscene", cache=self.stampcache):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(task))
continue
if self.rq.check_stamp_task(task, taskname, recurse = True, cache=self.stampcache):
logger.debug(2, 'Normal stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(task))
continue
invalidtasks.append(task)
call = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d, invalidtasks=invalidtasks)"
call2 = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d)"
locs = { "covered" : self.rq.scenequeue_covered, "tasknames" : self.rqdata.runq_task, "fnids" : self.rqdata.runq_fnid, "fns" : self.rqdata.taskData.fn_index, "d" : self.cooker.configuration.data, "invalidtasks" : invalidtasks }
# Backwards compatibility with older versions without invalidtasks
try:
covered_remove = bb.utils.better_eval(call, locs)
except TypeError:
covered_remove = bb.utils.better_eval(call2, locs)
call = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d)"
locs = { "covered" : self.rq.scenequeue_covered, "tasknames" : self.rqdata.runq_task, "fnids" : self.rqdata.runq_fnid, "fns" : self.rqdata.taskData.fn_index, "d" : self.cooker.configuration.data }
covered_remove = bb.utils.better_eval(call, locs)
for task in covered_remove:
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
@@ -1363,7 +1373,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.task_skip(task)
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
if self.rq.check_stamp_task(task, taskname):
logger.debug(2, "Stamp current task %s (%s)", task,
self.rqdata.get_user_idstring(task))
self.task_skip(task)
@@ -1422,7 +1432,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.scenequeue_covered = set()
self.scenequeue_notcovered = set()
self.scenequeue_notneeded = set()
# If we don't have any setscene functions, skip this step
if len(self.rqdata.runq_setscene) == 0:
@@ -1432,6 +1441,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.stats = RunQueueStats(len(self.rqdata.runq_setscene))
endpoints = {}
sq_revdeps = []
sq_revdeps_new = []
sq_revdeps_squash = []
@@ -1446,15 +1456,12 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.runq_complete.append(0)
self.runq_buildable.append(0)
# First process the chains up to the first setscene task.
endpoints = {}
for task in xrange(len(self.rqdata.runq_fnid)):
sq_revdeps.append(copy.copy(self.rqdata.runq_revdeps[task]))
sq_revdeps_new.append(set())
if (len(self.rqdata.runq_revdeps[task]) == 0) and task not in self.rqdata.runq_setscene:
endpoints[task] = set()
# Secondly process the chains between setscene tasks.
for task in self.rqdata.runq_setscene:
for dep in self.rqdata.runq_depends[task]:
if dep not in endpoints:
@@ -1470,8 +1477,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
if sq_revdeps_new[point]:
tasks |= sq_revdeps_new[point]
sq_revdeps_new[point] = set()
if point in self.rqdata.runq_setscene:
sq_revdeps_new[point] = tasks
for dep in self.rqdata.runq_depends[point]:
if point in sq_revdeps[dep]:
sq_revdeps[dep].remove(point)
@@ -1484,42 +1489,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
process_endpoints(endpoints)
# Build a list of setscene tasks which as "unskippable"
# These are direct endpoints referenced by the build
endpoints2 = {}
sq_revdeps2 = []
sq_revdeps_new2 = []
def process_endpoints2(endpoints):
newendpoints = {}
for point, task in endpoints.items():
tasks = set([point])
if task:
tasks |= task
if sq_revdeps_new2[point]:
tasks |= sq_revdeps_new2[point]
sq_revdeps_new2[point] = set()
if point in self.rqdata.runq_setscene:
sq_revdeps_new2[point] = tasks
for dep in self.rqdata.runq_depends[point]:
if point in sq_revdeps2[dep]:
sq_revdeps2[dep].remove(point)
if tasks:
sq_revdeps_new2[dep] |= tasks
if (len(sq_revdeps2[dep]) == 0 or len(sq_revdeps_new2[dep]) != 0) and dep not in self.rqdata.runq_setscene:
newendpoints[dep] = tasks
if len(newendpoints) != 0:
process_endpoints2(newendpoints)
for task in xrange(len(self.rqdata.runq_fnid)):
sq_revdeps2.append(copy.copy(self.rqdata.runq_revdeps[task]))
sq_revdeps_new2.append(set())
if (len(self.rqdata.runq_revdeps[task]) == 0) and task not in self.rqdata.runq_setscene:
endpoints2[task] = set()
process_endpoints2(endpoints2)
self.unskippable = []
for task in self.rqdata.runq_setscene:
if sq_revdeps_new2[task]:
self.unskippable.append(self.rqdata.runq_setscene.index(task))
for task in xrange(len(self.rqdata.runq_fnid)):
if task in self.rqdata.runq_setscene:
deps = set()
@@ -1545,7 +1514,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
dep = self.rqdata.taskData.fn_index[depdata]
taskid = self.rqdata.get_task_id(self.rqdata.taskData.getfn_id(dep), idependtask.replace("_setscene", ""))
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s:%s depends upon non-existent task %s:%s" % (self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realid]], self.rqdata.taskData.tasks_name[realid], dep, idependtask))
bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (self.rqdata.taskData.tasks_name[realid], dep, idependtask))
sq_revdeps_squash[self.rqdata.runq_setscene.index(task)].add(self.rqdata.runq_setscene.index(taskid))
# Have to zero this to avoid circular dependencies
@@ -1588,18 +1557,12 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
bb.build.make_stamp(taskname + "_setscene", self.rqdata.dataCache, fn)
continue
if self.rq.check_stamp_task(realtask, taskname + "_setscene", cache=self.stampcache):
if self.rq.check_stamp_task(realtask, taskname + "_setscene"):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue
if self.rq.check_stamp_task(realtask, taskname, recurse = True, cache=self.stampcache):
logger.debug(2, 'Normal stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue
sq_fn.append(fn)
sq_hashfn.append(self.rqdata.dataCache.hashfn[fn])
sq_hash.append(self.rqdata.runq_hash[realtask])
@@ -1680,13 +1643,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
# Find the next setscene to run
for nexttask in xrange(self.stats.total):
if self.runq_buildable[nexttask] == 1 and self.runq_running[nexttask] != 1:
if nexttask in self.unskippable:
logger.debug(2, "Setscene task %s is unskippable" % self.rqdata.get_user_idstring(self.rqdata.runq_setscene[nexttask]))
if nexttask not in self.unskippable and len(self.sq_revdeps[nexttask]) > 0 and self.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sq_revdeps[nexttask], True):
logger.debug(2, "Skipping setscene for task %s" % self.rqdata.get_user_idstring(self.rqdata.runq_setscene[nexttask]))
self.task_skip(nexttask)
self.scenequeue_notneeded.add(nexttask)
return True
task = nexttask
break
if task is not None:
@@ -1694,7 +1650,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realtask]]
taskname = self.rqdata.runq_task[realtask] + "_setscene"
if self.rq.check_stamp_task(realtask, self.rqdata.runq_task[realtask], recurse = True, cache=self.stampcache):
if self.rq.check_stamp_task(realtask, self.rqdata.runq_task[realtask], recurse = True):
logger.debug(2, 'Stamp for underlying task %s(%s) is current, so skipping setscene variant',
task, self.rqdata.get_user_idstring(realtask))
self.task_failoutright(task)
@@ -1706,7 +1662,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.task_failoutright(task)
return True
if self.rq.check_stamp_task(realtask, taskname, cache=self.stampcache):
if self.rq.check_stamp_task(realtask, taskname):
logger.debug(2, 'Setscene stamp current task %s(%s), so skip it and its dependencies',
task, self.rqdata.get_user_idstring(realtask))
self.task_skip(task)
@@ -1737,9 +1693,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.rq.scenequeue_covered = set()
for task in oldcovered:
self.rq.scenequeue_covered.add(self.rqdata.runq_setscene[task])
self.rq.scenequeue_notcovered = set()
for task in self.scenequeue_notcovered:
self.rq.scenequeue_notcovered.add(self.rqdata.runq_setscene[task])
logger.debug(1, 'We can skip tasks %s', sorted(self.rq.scenequeue_covered))
@@ -1823,6 +1776,15 @@ class runQueueTaskCompleted(runQueueEvent):
Event notifying a task completed
"""
def check_stamp_fn(fn, taskname, d):
rqexe = d.getVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY")
fn = d.getVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2")
fnid = rqexe.rqdata.taskData.getfn_id(fn)
taskid = rqexe.rqdata.get_task_id(fnid, taskname)
if taskid is not None:
return rqexe.rq.check_stamp_task(taskid)
return None
class runQueuePipe():
"""
Abstraction for a pipe between a worker thread and the server
@@ -1830,7 +1792,7 @@ class runQueuePipe():
def __init__(self, pipein, pipeout, d):
self.input = pipein
pipeout.close()
bb.utils.nonblockingfd(self.input)
fcntl.fcntl(self.input, fcntl.F_SETFL, fcntl.fcntl(self.input, fcntl.F_GETFL) | os.O_NONBLOCK)
self.queue = ""
self.d = d


@@ -191,8 +191,8 @@ class BitBakeServer(object):
def saveConnectionDetails(self):
return
def detach(self):
return
def detach(self, cooker_logfile):
self.logfile = cooker_logfile
def establishConnection(self):
self.connection = BitBakeServerConnection(self)


@@ -45,10 +45,10 @@ class ServerCommunicator():
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
if self.connection.poll(20):
if self.connection.poll(.5):
return self.connection.recv()
else:
bb.fatal("Timeout while attempting to communicate with bitbake server")
return None, "Timeout while attempting to communicate with bitbake server"
except KeyboardInterrupt:
pass
@@ -256,7 +256,7 @@ class BitBakeServer(object):
def saveConnectionDetails(self):
return
def detach(self):
def detach(self, cooker_logfile):
self.server.start()
return
@@ -266,5 +266,5 @@ class BitBakeServer(object):
return self.connection
def launchUI(self, uifunc, *args):
return uifunc(*args)
return bb.cooker.server_main(self.cooker, uifunc, *args)


@@ -280,8 +280,8 @@ class BitBakeServer(object):
def saveConnectionDetails(self):
self.serverinfo = BitbakeServerInfo(self.server.host, self.server.port)
def detach(self):
daemonize.createDaemon(self.server.serve_forever, "bitbake-cookerdaemon.log")
def detach(self, cooker_logfile):
daemonize.createDaemon(self.server.serve_forever, cooker_logfile)
del self.cooker
del self.server


@@ -2,7 +2,6 @@ import hashlib
import logging
import os
import re
import tempfile
import bb.data
logger = logging.getLogger('BitBake.SigGen')
@@ -48,9 +47,6 @@ class SignatureGenerator(object):
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, fn, task, stampbase, runtime):
return
@@ -68,7 +64,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taskhash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
@@ -98,7 +93,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = ''
gendeps[task] -= self.basewhitelist
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -108,26 +102,22 @@ class SignatureGeneratorBasic(SignatureGenerator):
for dep in nextdeps:
if dep in self.basewhitelist:
continue
gendeps[dep] -= self.basewhitelist
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
alldeps = seen - self.basewhitelist
for dep in sorted(alldeps):
data = data + dep
if dep in lookupcache:
var = lookupcache[dep]
elif dep[-1] == ']':
vf = dep[:-1].split('[')
var = d.getVarFlag(vf[0], vf[1], False)
lookupcache[dep] = var
else:
var = d.getVar(dep, False)
lookupcache[dep] = var
if var:
data = data + str(var)
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
taskdeps[task] = alldeps
taskdeps[task] = sorted(alldeps)
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
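Condensed, the base hash above is an md5 over the task's own code plus the sorted values of the variables it depends on; a toy version with a plain dict as the datastore (whitelist and flag lookups omitted):

    import hashlib

    def basehash(taskcode, deps, values):
        # md5 over task code plus the sorted values of its dependent variables.
        data = taskcode
        for dep in sorted(deps):
            data = data + dep
            val = values.get(dep)
            if val:
                data = data + str(val)
        return hashlib.md5(data.encode('utf-8')).hexdigest()

    # basehash("do_compile() { oe_runmake; }", {"CC", "CFLAGS"},
    #          {"CC": "gcc", "CFLAGS": "-O2"})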
@@ -175,7 +165,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = []
self.file_checksum_values[k] = {}
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
@@ -186,12 +175,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = data + self.taskhash[dep]
self.runtaskdeps[k].append(dep)
if task in dataCache.file_checksums[fn]:
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
for (f,cs) in checksums:
self.file_checksum_values[k][f] = cs
data = data + cs
taint = self.read_taint(fn, task, dataCache.stamp[fn])
if taint:
data = data + taint
@@ -232,7 +215,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
if runtime and k in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[k]
data['file_checksum_values'] = self.file_checksum_values[k]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
@@ -241,20 +223,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
if taint:
data['taint'] = taint
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
with os.fdopen(fd, "wb") as stream:
p = pickle.dump(data, stream, -1)
stream.flush()
os.fsync(fd)
os.chmod(tmpfile, 0664)
os.rename(tmpfile, sigfile)
except (OSError, IOError), err:
try:
os.unlink(tmpfile)
except OSError:
pass
raise err
p = pickle.Pickler(file(sigfile, "wb"), -1)
p.dump(data)
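One side of this hunk writes the signature data atomically: pickle into a temporary file alongside the target, flush and fsync it, then rename it into place so readers never see a half-written file. The same pattern in isolation (generic helper, pickle protocol -1 as in the diff):

    import os
    import pickle
    import tempfile

    def dump_atomically(obj, path):
        # Readers of path either see the old contents or the complete new pickle.
        fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(path) or ".", prefix="sigtask.")
        try:
            with os.fdopen(fd, "wb") as stream:
                pickle.dump(obj, stream, -1)
                stream.flush()
                os.fsync(stream.fileno())
            os.chmod(tmpfile, 0o664)
            os.rename(tmpfile, path)  # atomic on POSIX within one filesystem
        except (OSError, IOError):
            try:
                os.unlink(tmpfile)
            except OSError:
                pass
            raise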
def dump_sigs(self, dataCache):
for fn in self.taskdeps:
@@ -270,23 +241,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
def stampfile(self, stampbase, fn, taskname, extrainfo, clean=False):
def stampfile(self, stampbase, fn, taskname, extrainfo):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
k = fn + "." + taskname[:-9]
else:
k = fn + "." + taskname
if clean:
h = "*"
elif k in self.taskhash:
if k in self.taskhash:
h = self.taskhash[k]
else:
# If k is not in basehash, then error
h = self.basehash[k]
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
@@ -299,7 +265,7 @@ def dump_this_task(outfile, d):
def clean_basepath(a):
if a.startswith("virtual:"):
b = a.rsplit(":", 1)[0] + ":" + a.rsplit("/", 1)[1]
b = a.rsplit(":", 1)[0] + a.rsplit("/", 1)[1]
else:
b = a.rsplit("/", 1)[1]
return b
@@ -310,12 +276,10 @@ def clean_basepaths(a):
b[clean_basepath(x)] = a[x]
return b
def compare_sigfiles(a, b, recursecb = None):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
def compare_sigfiles(a, b):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(open(b, "rb"))
p2 = pickle.Unpickler(file(b, "rb"))
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
@@ -331,123 +295,97 @@ def compare_sigfiles(a, b, recursecb = None):
return changed, added, removed
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
output.append("basewhitelist changed from '%s' to '%s'" % (a_data['basewhitelist'], b_data['basewhitelist']))
print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
if a_data['basewhitelist'] and b_data['basewhitelist']:
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
print "changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist'])
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
output.append("taskwhitelist changed from '%s' to '%s'" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
print "taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist'])
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
print "changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist'])
if a_data['taskdeps'] != b_data['taskdeps']:
output.append("Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
print "Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps']))
if a_data['basehash'] != b_data['basehash']:
output.append("basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash']))
print "basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash'])
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in changed:
output.append("List of dependencies for variable %s changed from '%s' to '%s'" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
print "List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep])
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
print "changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep])
if added:
for dep in added:
output.append("Dependency on variable %s was added" % (dep))
print "Dependency on variable %s was added" % (dep)
if removed:
for dep in removed:
output.append("Dependency on Variable %s was removed" % (dep))
print "Dependency on Variable %s was removed" % (dep)
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
output.append("Variable %s value changed from '%s' to '%s'" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
changed, added, removed = dict_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
for f in changed:
output.append("Checksum for file %s changed from %s to %s" % (f, a_data['file_checksum_values'][f], b_data['file_checksum_values'][f]))
if added:
for f in added:
output.append("Dependency on checksum of file %s was added" % (f))
if removed:
for f in removed:
output.append("Dependency on checksum of file %s was removed" % (f))
print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
a = a_data['runtaskhashes']
b = b_data['runtaskhashes']
a = clean_basepaths(a_data['runtaskhashes'])
b = clean_basepaths(b_data['runtaskhashes'])
changed, added, removed = dict_diff(a, b)
if added:
for dep in added:
bdep_found = False
if removed:
for bdep in removed:
if a[dep] == b[bdep]:
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
output.append("Dependency on task %s was added with hash %s" % (clean_basepath(dep), a[dep]))
bdep_found = False
if removed:
for bdep in removed:
if a[dep] == b[bdep]:
#print "Dependency on task %s was replaced by %s with same hash" % (dep, bdep)
bdep_found = True
if not bdep_found:
print "Dependency on task %s was added with hash %s" % (dep, a[dep])
if removed:
for dep in removed:
adep_found = False
if added:
for adep in added:
if a[adep] == b[dep]:
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
output.append("Dependency on task %s was removed with hash %s" % (clean_basepath(dep), b[dep]))
adep_found = False
if added:
for adep in added:
if a[adep] == b[dep]:
#print "Dependency on task %s was replaced by %s with same hash" % (adep, dep)
adep_found = True
if not adep_found:
print "Dependency on task %s was removed with hash %s" % (dep, b[dep])
if changed:
for dep in changed:
output.append("Hash for dependent task %s changed from %s to %s" % (clean_basepath(dep), a[dep], b[dep]))
if callable(recursecb):
recout = recursecb(dep, a[dep], b[dep])
if recout:
output.extend(recout)
print "Hash for dependent task %s changed from %s to %s" % (dep, a[dep], b[dep])
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
if a_taint != b_taint:
output.append("Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint))
return output
print "Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint)
def dump_sigfile(a):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
output.append("basewhitelist: %s" % (a_data['basewhitelist']))
print "basewhitelist: %s" % (a_data['basewhitelist'])
output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))
print "taskwhitelist: %s" % (a_data['taskwhitelist'])
output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))
print "Task dependencies: %s" % (sorted(a_data['taskdeps']))
output.append("basehash: %s" % (a_data['basehash']))
print "basehash: %s" % (a_data['basehash'])
for dep in a_data['gendeps']:
output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))
print "List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep])
for dep in a_data['varvals']:
output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))
print "Variable %s value is %s" % (dep, a_data['varvals'][dep])
if 'runtaskdeps' in a_data:
output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))
if 'file_checksum_values' in a_data:
output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))
print "Tasks this task depends on: %s" % (a_data['runtaskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))
print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])
if 'taint' in a_data:
output.append("Tainted (by forced/invalidated task): %s" % a_data['taint'])
return output
print "Tainted (by forced/invalidated task): %s" % a_data['taint']


@@ -55,7 +55,6 @@ class TaskData:
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
self.tasks_irdepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
@@ -116,16 +115,6 @@ class TaskData:
ids.append(self.tasks_lookup[fnid][task])
return ids
def gettask_id_fromfnid(self, fnid, task):
"""
Return an ID number for the task matching fnid and task.
"""
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
return None
def gettask_id(self, fn, task, create = True):
"""
Return an ID number for the task matching fn and task.
@@ -145,7 +134,6 @@ class TaskData:
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
self.tasks_irdepends.append([])
listid = len(self.tasks_name) - 1
@@ -176,9 +164,6 @@ class TaskData:
# Work out task dependencies
parentids = []
for dep in task_deps['parents'][task]:
if dep not in task_deps['tasks']:
bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep))
continue
parentid = self.gettask_id(fn, dep)
parentids.append(parentid)
taskid = self.gettask_id(fn, task)
@@ -193,15 +178,6 @@ class TaskData:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
if 'rdepends' in task_deps and task in task_deps['rdepends']:
ids = []
for dep in task_deps['rdepends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'rdepends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getrun_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_irdepends[taskid].extend(ids)
# Work out build dependencies
if not fnid in self.depids:
@@ -485,7 +461,6 @@ class TaskData:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
self.consider_msgs_cache.append(item)
raise bb.providers.MultipleRProvider(item)
# run through the list until we find one that we can build
for fn in eligible:
@@ -558,11 +533,6 @@ class TaskData:
dependees = self.get_rdependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_irdepends)):
irdepends = self.tasks_irdepends[taskid]
for (idependid, idependtask) in irdepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
def add_unresolved(self, cfgData, dataCache):
"""
@@ -584,7 +554,7 @@ class TaskData:
try:
self.add_rprovider(cfgData, dataCache, target)
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
except bb.providers.NoRProvider:
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
if added == 0:


@@ -1,374 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Test for codeparser.py
#
# Copyright (C) 2010 Chris Larson
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import logging
import bb
logger = logging.getLogger('BitBake.TestCodeParser')
# bb.data references bb.parse but can't directly import due to circular dependencies.
# Hack around it for now :(
import bb.parse
import bb.data
class ReferenceTest(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
def setEmptyVars(self, varlist):
for k in varlist:
self.d.setVar(k, "")
def setValues(self, values):
for k, v in values.items():
self.d.setVar(k, v)
def assertReferences(self, refs):
self.assertEqual(self.references, refs)
def assertExecs(self, execs):
self.assertEqual(self.execs, execs)
class VariableReferenceTest(ReferenceTest):
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
self.references = parsedvar.references
def test_simple_reference(self):
self.setEmptyVars(["FOO"])
self.parseExpression("${FOO}")
self.assertReferences(set(["FOO"]))
def test_nested_reference(self):
self.setEmptyVars(["BAR"])
self.d.setVar("FOO", "BAR")
self.parseExpression("${${FOO}}")
self.assertReferences(set(["FOO", "BAR"]))
def test_python_reference(self):
self.setEmptyVars(["BAR"])
self.parseExpression("${@bb.data.getVar('BAR', d, True) + 'foo'}")
self.assertReferences(set(["BAR"]))
class ShellReferenceTest(ReferenceTest):
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
parser = bb.codeparser.ShellParser("ParserTest", logger)
parser.parse_shell(parsedvar.value)
self.references = parsedvar.references
self.execs = parser.execs
def test_quotes_inside_assign(self):
self.parseExpression('foo=foo"bar"baz')
self.assertReferences(set([]))
def test_quotes_inside_arg(self):
self.parseExpression('sed s#"bar baz"#"alpha beta"#g')
self.assertExecs(set(["sed"]))
def test_arg_continuation(self):
self.parseExpression("sed -i -e s,foo,bar,g \\\n *.pc")
self.assertExecs(set(["sed"]))
def test_dollar_in_quoted(self):
self.parseExpression('sed -i -e "foo$" *.pc')
self.assertExecs(set(["sed"]))
def test_quotes_inside_arg_continuation(self):
self.setEmptyVars(["bindir", "D", "libdir"])
self.parseExpression("""
sed -i -e s#"moc_location=.*$"#"moc_location=${bindir}/moc4"# \\
-e s#"uic_location=.*$"#"uic_location=${bindir}/uic4"# \\
${D}${libdir}/pkgconfig/*.pc
""")
self.assertReferences(set(["bindir", "D", "libdir"]))
def test_assign_subshell_expansion(self):
self.parseExpression("foo=$(echo bar)")
self.assertExecs(set(["echo"]))
def test_shell_unexpanded(self):
self.setEmptyVars(["QT_BASE_NAME"])
self.parseExpression('echo "${QT_BASE_NAME}"')
self.assertExecs(set(["echo"]))
self.assertReferences(set(["QT_BASE_NAME"]))
def test_incomplete_varexp_single_quotes(self):
self.parseExpression("sed -i -e 's:IP{:I${:g' $pc")
self.assertExecs(set(["sed"]))
def test_until(self):
self.parseExpression("until false; do echo true; done")
self.assertExecs(set(["false", "echo"]))
self.assertReferences(set())
def test_case(self):
self.parseExpression("""
case $foo in
*)
bar
;;
esac
""")
self.assertExecs(set(["bar"]))
self.assertReferences(set())
def test_assign_exec(self):
self.parseExpression("a=b c='foo bar' alpha 1 2 3")
self.assertExecs(set(["alpha"]))
def test_redirect_to_file(self):
self.setEmptyVars(["foo"])
self.parseExpression("echo foo >${foo}/bar")
self.assertExecs(set(["echo"]))
self.assertReferences(set(["foo"]))
def test_heredoc(self):
self.setEmptyVars(["theta"])
self.parseExpression("""
cat <<END
alpha
beta
${theta}
END
""")
self.assertReferences(set(["theta"]))
def test_redirect_from_heredoc(self):
v = ["B", "SHADOW_MAILDIR", "SHADOW_MAILFILE", "SHADOW_UTMPDIR", "SHADOW_LOGDIR", "bindir"]
self.setEmptyVars(v)
self.parseExpression("""
cat <<END >${B}/cachedpaths
shadow_cv_maildir=${SHADOW_MAILDIR}
shadow_cv_mailfile=${SHADOW_MAILFILE}
shadow_cv_utmpdir=${SHADOW_UTMPDIR}
shadow_cv_logdir=${SHADOW_LOGDIR}
shadow_cv_passwd_dir=${bindir}
END
""")
self.assertReferences(set(v))
self.assertExecs(set(["cat"]))
# def test_incomplete_command_expansion(self):
# self.assertRaises(reftracker.ShellSyntaxError, reftracker.execs,
# bbvalue.shparse("cp foo`", self.d), self.d)
# def test_rogue_dollarsign(self):
# self.setValues({"D" : "/tmp"})
# self.parseExpression("install -d ${D}$")
# self.assertReferences(set(["D"]))
# self.assertExecs(set(["install"]))
class PythonReferenceTest(ReferenceTest):
def setUp(self):
self.d = bb.data.init()
if hasattr(bb.utils, "_context"):
self.context = bb.utils._context
else:
import __builtin__
self.context = __builtin__.__dict__
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
parser = bb.codeparser.PythonParser("ParserTest", logger)
parser.parse_python(parsedvar.value)
self.references = parsedvar.references | parser.references
self.execs = parser.execs
@staticmethod
def indent(value):
"""Python Snippets have to be indented, python values don't have to
be. These unit tests are testing snippets."""
return " " + value
def test_getvar_reference(self):
self.parseExpression("bb.data.getVar('foo', d, True)")
self.assertReferences(set(["foo"]))
self.assertExecs(set())
def test_getvar_computed_reference(self):
self.parseExpression("bb.data.getVar('f' + 'o' + 'o', d, True)")
self.assertReferences(set())
self.assertExecs(set())
def test_getvar_exec_reference(self):
self.parseExpression("eval('bb.data.getVar(\"foo\", d, True)')")
self.assertReferences(set())
self.assertExecs(set(["eval"]))
def test_var_reference(self):
self.context["foo"] = lambda x: x
self.setEmptyVars(["FOO"])
self.parseExpression("foo('${FOO}')")
self.assertReferences(set(["FOO"]))
self.assertExecs(set(["foo"]))
del self.context["foo"]
def test_var_exec(self):
for etype in ("func", "task"):
self.d.setVar("do_something", "echo 'hi mom! ${FOO}'")
self.d.setVarFlag("do_something", etype, True)
self.parseExpression("bb.build.exec_func('do_something', d)")
self.assertReferences(set(["do_something"]))
def test_function_reference(self):
self.context["testfunc"] = lambda msg: bb.msg.note(1, None, msg)
self.d.setVar("FOO", "Hello, World!")
self.parseExpression("testfunc('${FOO}')")
self.assertReferences(set(["FOO"]))
self.assertExecs(set(["testfunc"]))
del self.context["testfunc"]
def test_qualified_function_reference(self):
self.parseExpression("time.time()")
self.assertExecs(set(["time.time"]))
def test_qualified_function_reference_2(self):
self.parseExpression("os.path.dirname('/foo/bar')")
self.assertExecs(set(["os.path.dirname"]))
def test_qualified_function_reference_nested(self):
self.parseExpression("time.strftime('%Y%m%d',time.gmtime())")
self.assertExecs(set(["time.strftime", "time.gmtime"]))
def test_function_reference_chained(self):
self.context["testget"] = lambda: "\tstrip me "
self.parseExpression("testget().strip()")
self.assertExecs(set(["testget"]))
del self.context["testget"]
class DependencyReferenceTest(ReferenceTest):
pydata = """
bb.data.getVar('somevar', d, True)
def test(d):
foo = 'bar %s' % 'foo'
def test2(d):
d.getVar(foo, True)
d.getVar('bar', False)
test2(d)
def a():
\"\"\"some
stuff
\"\"\"
return "heh"
test(d)
bb.data.expand(bb.data.getVar("something", False, d), d)
bb.data.expand("${inexpand} somethingelse", d)
bb.data.getVar(a(), d, False)
"""
def test_python(self):
self.d.setVar("FOO", self.pydata)
self.setEmptyVars(["inexpand", "a", "test2", "test"])
self.d.setVarFlags("FOO", {"func": True, "python": True})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
shelldata = """
foo () {
bar
}
{
echo baz
$(heh)
eval `moo`
}
a=b
c=d
(
true && false
test -f foo
testval=something
$testval
) || aiee
! inverted
echo ${somevar}
case foo in
bar)
echo bar
;;
baz)
echo baz
;;
foo*)
echo foo
;;
esac
"""
def test_shell(self):
execs = ["bar", "echo", "heh", "moo", "true", "aiee"]
self.d.setVar("somevar", "heh")
self.d.setVar("inverted", "echo inverted...")
self.d.setVarFlag("inverted", "func", True)
self.d.setVar("FOO", self.shelldata)
self.d.setVarFlags("FOO", {"func": True})
self.setEmptyVars(execs)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["somevar", "inverted"] + execs))
def test_vardeps(self):
self.d.setVar("oe_libinstall", "echo test")
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["oe_libinstall"]))
def test_vardeps_expand(self):
self.d.setVar("oe_libinstall", "echo test")
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["oe_libinstall"]))
#Currently no wildcard support
#def test_vardeps_wildcards(self):
# self.d.setVar("oe_libinstall", "echo test")
# self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
# self.d.setVarFlag("FOO", "vardeps", "oe_*")
# self.assertEquals(deps, set(["oe_libinstall"]))


@@ -1,136 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for Copy-on-Write (cow.py)
#
# Copyright 2006 Holger Freyther <freyther@handhelds.org>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import os
class COWTestCase(unittest.TestCase):
"""
Test case for the COW module from mithro
"""
def testGetSet(self):
"""
Test and set
"""
from bb.COW import COWDictBase
a = COWDictBase.copy()
self.assertEquals(False, a.has_key('a'))
a['a'] = 'a'
a['b'] = 'b'
self.assertEquals(True, a.has_key('a'))
self.assertEquals(True, a.has_key('b'))
self.assertEquals('a', a['a'] )
self.assertEquals('b', a['b'] )
def testCopyCopy(self):
"""
Test the copy of copies
"""
from bb.COW import COWDictBase
# create two COW dict 'instances'
b = COWDictBase.copy()
c = COWDictBase.copy()
# assign some keys to one instance, some keys to another
b['a'] = 10
b['c'] = 20
c['a'] = 30
# test separation of the two instances
self.assertEquals(False, c.has_key('c'))
self.assertEquals(30, c['a'])
self.assertEquals(10, b['a'])
# test copy
b_2 = b.copy()
c_2 = c.copy()
self.assertEquals(False, c_2.has_key('c'))
self.assertEquals(10, b_2['a'])
b_2['d'] = 40
self.assertEquals(False, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
c_2['d'] = 30
self.assertEquals(True, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(30, c_2['d'])
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
# test copy of the copy
c_3 = c_2.copy()
b_3 = b_2.copy()
b_3_2 = b_2.copy()
c_3['e'] = 4711
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(False, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
b_3['e'] = 'viel'
self.assertEquals('viel', b_3['e'])
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(True, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
def testCow(self):
from bb.COW import COWDictBase
c = COWDictBase.copy()
c['123'] = 1027
c['other'] = 4711
c['d'] = { 'abc' : 10, 'bcd' : 20 }
copy = c.copy()
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1027, copy['123'])
self.assertEquals(4711, copy['other'])
self.assertEquals({'abc':10, 'bcd':20}, copy['d'])
# cow it now
copy['123'] = 1028
copy['other'] = 4712
copy['d']['abc'] = 20
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1028, copy['123'])
self.assertEquals(4712, copy['other'])
self.assertEquals({'abc':20, 'bcd':20}, copy['d'])


@@ -1,254 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for the Data Store (data.py/data_smart.py)
#
# Copyright (C) 2010 Chris Larson
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import bb
import bb.data
class DataExpansions(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d["foo"] = "value of foo"
self.d["bar"] = "value of bar"
self.d["value of foo"] = "value of 'value of foo'"
def test_one_var(self):
val = self.d.expand("${foo}")
self.assertEqual(str(val), "value of foo")
def test_indirect_one_var(self):
val = self.d.expand("${${foo}}")
self.assertEqual(str(val), "value of 'value of foo'")
def test_indirect_and_another(self):
val = self.d.expand("${${foo}} ${bar}")
self.assertEqual(str(val), "value of 'value of foo' value of bar")
def test_python_snippet(self):
val = self.d.expand("${@5*12}")
self.assertEqual(str(val), "60")
def test_expand_in_python_snippet(self):
val = self.d.expand("${@'boo ' + '${foo}'}")
self.assertEqual(str(val), "boo value of foo")
def test_python_snippet_getvar(self):
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value of foo value of bar")
def test_python_snippet_syntax_error(self):
self.d.setVar("FOO", "${@foo = 5}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_python_snippet_runtime_error(self):
self.d.setVar("FOO", "${@int('test')}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_python_snippet_error_path(self):
self.d.setVar("FOO", "foo value ${BAR}")
self.d.setVar("BAR", "bar value ${@int('test')}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_value_containing_value(self):
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value of foo value of bar")
def test_reference_undefined_var(self):
val = self.d.expand("${undefinedvar} meh")
self.assertEqual(str(val), "${undefinedvar} meh")
def test_double_reference(self):
self.d.setVar("BAR", "bar value")
self.d.setVar("FOO", "${BAR} foo ${BAR}")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "bar value foo bar value")
def test_direct_recursion(self):
self.d.setVar("FOO", "${FOO}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_indirect_recursion(self):
self.d.setVar("FOO", "${BAR}")
self.d.setVar("BAR", "${BAZ}")
self.d.setVar("BAZ", "${FOO}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_recursion_exception(self):
self.d.setVar("FOO", "${BAR}")
self.d.setVar("BAR", "${${@'FOO'}}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_incomplete_varexp_single_quotes(self):
self.d.setVar("FOO", "sed -i -e 's:IP{:I${:g' $pc")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "sed -i -e 's:IP{:I${:g' $pc")
def test_nonstring(self):
self.d.setVar("TEST", 5)
val = self.d.getVar("TEST", True)
self.assertEqual(str(val), "5")
def test_rename(self):
self.d.renameVar("foo", "newfoo")
self.assertEqual(self.d.getVar("newfoo"), "value of foo")
self.assertEqual(self.d.getVar("foo"), None)
def test_deletion(self):
self.d.delVar("foo")
self.assertEqual(self.d.getVar("foo"), None)
def test_keys(self):
keys = self.d.keys()
self.assertEqual(keys, ['value of foo', 'foo', 'bar'])
class TestNestedExpansions(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d["foo"] = "foo"
self.d["bar"] = "bar"
self.d["value of foobar"] = "187"
def test_refs(self):
val = self.d.expand("${value of ${foo}${bar}}")
self.assertEqual(str(val), "187")
#def test_python_refs(self):
# val = self.d.expand("${@${@3}**2 + ${@4}**2}")
# self.assertEqual(str(val), "25")
def test_ref_in_python_ref(self):
val = self.d.expand("${@'${foo}' + 'bar'}")
self.assertEqual(str(val), "foobar")
def test_python_ref_in_ref(self):
val = self.d.expand("${${@'f'+'o'+'o'}}")
self.assertEqual(str(val), "foo")
def test_deep_nesting(self):
depth = 100
val = self.d.expand("${" * depth + "foo" + "}" * depth)
self.assertEqual(str(val), "foo")
#def test_deep_python_nesting(self):
# depth = 50
# val = self.d.expand("${@" * depth + "1" + "+1}" * depth)
# self.assertEqual(str(val), str(depth + 1))
def test_mixed(self):
val = self.d.expand("${value of ${@('${foo}'+'bar')[0:3]}${${@'BAR'.lower()}}}")
self.assertEqual(str(val), "187")
def test_runtime(self):
val = self.d.expand("${${@'value of' + ' f'+'o'+'o'+'b'+'a'+'r'}}")
self.assertEqual(str(val), "187")
class TestMemoize(unittest.TestCase):
def test_memoized(self):
d = bb.data.init()
d.setVar("FOO", "bar")
self.assertTrue(d.getVar("FOO") is d.getVar("FOO"))
def test_not_memoized(self):
d1 = bb.data.init()
d2 = bb.data.init()
d1.setVar("FOO", "bar")
d2.setVar("FOO", "bar2")
self.assertTrue(d1.getVar("FOO") is not d2.getVar("FOO"))
def test_changed_after_memoized(self):
d = bb.data.init()
d.setVar("foo", "value of foo")
self.assertEqual(str(d.getVar("foo")), "value of foo")
d.setVar("foo", "second value of foo")
self.assertEqual(str(d.getVar("foo")), "second value of foo")
def test_same_value(self):
d = bb.data.init()
d.setVar("foo", "value of")
d.setVar("bar", "value of")
self.assertEqual(d.getVar("foo"),
d.getVar("bar"))
class TestConcat(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("FOO", "foo")
self.d.setVar("VAL", "val")
self.d.setVar("BAR", "bar")
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.d.appendVar("TEST", ":val2")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
class TestOverrides(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("OVERRIDES", "foo:bar:local")
self.d.setVar("TEST", "testvalue")
def test_no_override(self):
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue")
def test_one_override(self):
self.d.setVar("TEST_bar", "testvalue2")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue2")
def test_multiple_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_local", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
class TestFlags(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("foo", "value of foo")
self.d.setVarFlag("foo", "flag1", "value of flag1")
self.d.setVarFlag("foo", "flag2", "value of flag2")
def test_setflag(self):
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), "value of flag2")
def test_delflag(self):
self.d.delVarFlag("foo", "flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), None)


@@ -1,425 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for the Fetcher (fetch2/)
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import tempfile
import subprocess
import os
from bb.fetch2 import URI
import bb
class URITest(unittest.TestCase):
test_uris = {
"http://www.google.com/index.html" : {
'uri': 'http://www.google.com/index.html',
'scheme': 'http',
'hostname': 'www.google.com',
'port': None,
'hostport': 'www.google.com',
'path': '/index.html',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': False
},
"http://www.google.com/index.html;param1=value1" : {
'uri': 'http://www.google.com/index.html;param1=value1',
'scheme': 'http',
'hostname': 'www.google.com',
'port': None,
'hostport': 'www.google.com',
'path': '/index.html',
'userinfo': '',
'username': '',
'password': '',
'params': {
'param1': 'value1'
},
'relative': False
},
"http://www.example.com:8080/index.html" : {
'uri': 'http://www.example.com:8080/index.html',
'scheme': 'http',
'hostname': 'www.example.com',
'port': 8080,
'hostport': 'www.example.com:8080',
'path': '/index.html',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': False
},
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : {
'uri': 'cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg',
'scheme': 'cvs',
'hostname': 'cvs.handhelds.org',
'port': None,
'hostport': 'cvs.handhelds.org',
'path': '/cvs',
'userinfo': 'anoncvs',
'username': 'anoncvs',
'password': '',
'params': {
'module': 'familiar/dist/ipkg'
},
'relative': False
},
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg": {
'uri': 'cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg',
'scheme': 'cvs',
'hostname': 'cvs.handhelds.org',
'port': None,
'hostport': 'cvs.handhelds.org',
'path': '/cvs',
'userinfo': 'anoncvs:anonymous',
'username': 'anoncvs',
'password': 'anonymous',
'params': {
'tag': 'V0-99-81',
'module': 'familiar/dist/ipkg'
},
'relative': False
},
"file://example.diff": { # NOTE: Not RFC compliant!
'uri': 'file:example.diff',
'scheme': 'file',
'hostname': '',
'port': None,
'hostport': '',
'path': 'example.diff',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': True
},
"file:example.diff": { # NOTE: RFC compliant version of the former
'uri': 'file:example.diff',
'scheme': 'file',
'hostname': '',
'port': None,
'hostport': '',
'path': 'example.diff',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': True
},
"file:///tmp/example.diff": {
'uri': 'file:///tmp/example.diff',
'scheme': 'file',
'hostname': '',
'port': None,
'hostport': '',
'path': '/tmp/example.diff',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': False
},
"git:///path/example.git": {
'uri': 'git:///path/example.git',
'scheme': 'git',
'hostname': '',
'port': None,
'hostport': '',
'path': '/path/example.git',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': False
},
"git:path/example.git": {
'uri': 'git:path/example.git',
'scheme': 'git',
'hostname': '',
'port': None,
'hostport': '',
'path': 'path/example.git',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': True
},
"git://example.net/path/example.git": {
'uri': 'git://example.net/path/example.git',
'scheme': 'git',
'hostname': 'example.net',
'port': None,
'hostport': 'example.net',
'path': '/path/example.git',
'userinfo': '',
'username': '',
'password': '',
'params': {},
'relative': False
}
}
def test_uri(self):
for test_uri, ref in self.test_uris.items():
uri = URI(test_uri)
self.assertEqual(str(uri), ref['uri'])
# expected attributes
self.assertEqual(uri.scheme, ref['scheme'])
self.assertEqual(uri.userinfo, ref['userinfo'])
self.assertEqual(uri.username, ref['username'])
self.assertEqual(uri.password, ref['password'])
self.assertEqual(uri.hostname, ref['hostname'])
self.assertEqual(uri.port, ref['port'])
self.assertEqual(uri.hostport, ref['hostport'])
self.assertEqual(uri.path, ref['path'])
self.assertEqual(uri.params, ref['params'])
self.assertEqual(uri.relative, ref['relative'])
def test_dict(self):
for test in self.test_uris.values():
uri = URI()
self.assertEqual(uri.scheme, '')
self.assertEqual(uri.userinfo, '')
self.assertEqual(uri.username, '')
self.assertEqual(uri.password, '')
self.assertEqual(uri.hostname, '')
self.assertEqual(uri.port, None)
self.assertEqual(uri.path, '')
self.assertEqual(uri.params, {})
uri.scheme = test['scheme']
self.assertEqual(uri.scheme, test['scheme'])
uri.userinfo = test['userinfo']
self.assertEqual(uri.userinfo, test['userinfo'])
self.assertEqual(uri.username, test['username'])
self.assertEqual(uri.password, test['password'])
uri.hostname = test['hostname']
self.assertEqual(uri.hostname, test['hostname'])
self.assertEqual(uri.hostport, test['hostname'])
uri.port = test['port']
self.assertEqual(uri.port, test['port'])
self.assertEqual(uri.hostport, test['hostport'])
uri.path = test['path']
self.assertEqual(uri.path, test['path'])
uri.params = test['params']
self.assertEqual(uri.params, test['params'])
self.assertEqual(str(uri)+str(uri.relative), str(test['uri'])+str(test['relative']))
self.assertEqual(str(uri), test['uri'])
uri.params = {}
self.assertEqual(uri.params, {})
self.assertEqual(str(uri), (str(uri).split(";"))[0])
class FetcherTest(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.tempdir = tempfile.mkdtemp()
self.dldir = os.path.join(self.tempdir, "download")
os.mkdir(self.dldir)
self.d.setVar("DL_DIR", self.dldir)
self.unpackdir = os.path.join(self.tempdir, "unpacked")
os.mkdir(self.unpackdir)
persistdir = os.path.join(self.tempdir, "persistdata")
self.d.setVar("PERSISTENT_DIR", persistdir)
def tearDown(self):
bb.utils.prunedir(self.tempdir)
class MirrorUriTest(FetcherTest):
replaceuris = {
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "http://somewhere.org/somedir/")
: "http://somewhere.org/somedir/git2_git.invalid.infradead.org.mtd-utils.git.tar.gz",
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/\\2;protocol=http")
: "git://somewhere.org/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890", "git://someserver.org/bitbake", "git://git.openembedded.org/bitbake")
: "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890",
("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache/")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/somedir3")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://www.apache.org/dist", "http://archive.apache.org/dist")
: "http://archive.apache.org/dist/subversion/subversion-1.7.1.tar.bz2",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://.*/.*", "file:///somepath/downloads/")
: "file:///somepath/downloads/subversion-1.7.1.tar.bz2",
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
: "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
#Renaming files doesn't work
#("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
#("file://sstate-xyz.tgz", "file://.*/.*", "file:///somewhere/1234/sstate-cache") : "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
}
mirrorvar = "http://.*/.* file:///somepath/downloads/ \n" \
"git://someserver.org/bitbake git://git.openembedded.org/bitbake \n" \
"https://.*/.* file:///someotherpath/downloads/ \n" \
"http://.*/.* file:///someotherpath/downloads/ \n"
def test_urireplace(self):
for k, v in self.replaceuris.items():
ud = bb.fetch.FetchData(k[0], self.d)
ud.setup_localpath(self.d)
mirrors = bb.fetch2.mirror_from_string("%s %s" % (k[1], k[2]))
newuris, uds = bb.fetch2.build_mirroruris(ud, mirrors, self.d)
self.assertEqual([v], newuris)
def test_urilist1(self):
fetcher = bb.fetch.FetchData("http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///somepath/downloads/bitbake-1.0.tar.gz', 'file:///someotherpath/downloads/bitbake-1.0.tar.gz'])
def test_urilist2(self):
# Catch https:// -> files:// bug
fetcher = bb.fetch.FetchData("https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///someotherpath/downloads/bitbake-1.0.tar.gz'])
class FetcherNetworkTest(FetcherTest):
if os.environ.get("BB_SKIP_NETTESTS") == "yes":
print("Unset BB_SKIP_NETTESTS to run network tests")
else:
def test_fetch(self):
fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
self.d.setVar("BB_NO_NETWORK", "1")
fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
def test_fetch_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def test_fetch_premirror(self):
self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def gitfetcher(self, url1, url2):
def checkrevision(self, fetcher):
fetcher.unpack(self.unpackdir)
revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
fetcher = bb.fetch.Fetch([url1], self.d)
fetcher.download()
checkrevision(self, fetcher)
# Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
bb.utils.prunedir(self.dldir + "/git2/")
bb.utils.prunedir(self.unpackdir)
self.d.setVar("BB_NO_NETWORK", "1")
fetcher = bb.fetch.Fetch([url2], self.d)
fetcher.download()
checkrevision(self, fetcher)
def test_gitfetch(self):
url1 = url2 = "git://git.openembedded.org/bitbake"
self.gitfetcher(url1, url2)
def test_gitfetch_premirror(self):
url1 = "git://git.openembedded.org/bitbake"
url2 = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
self.gitfetcher(url1, url2)
def test_gitfetch_premirror2(self):
url1 = url2 = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
self.gitfetcher(url1, url2)
def test_gitfetch_premirror3(self):
realurl = "git://git.openembedded.org/bitbake"
dummyurl = "git://someserver.org/bitbake"
self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
os.chdir(self.tempdir)
bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
self.gitfetcher(dummyurl, dummyurl)
class URLHandle(unittest.TestCase):
datatable = {
"http://www.google.com/index.html" : ('http', 'www.google.com', '/index.html', '', '', {}),
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}),
"git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'})
}
def test_decodeurl(self):
for k, v in self.datatable.items():
result = bb.fetch.decodeurl(k)
self.assertEqual(result, v)
def test_encodeurl(self):
for k, v in self.datatable.items():
result = bb.fetch.encodeurl(v)
self.assertEqual(result, k)
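As the deleted fetcher tests above illustrate, bb.fetch2.URI splits a fetch URL into attributes and can re-serialise it after edits; a minimal sketch (assuming a BitBake checkout on sys.path):

from bb.fetch2 import URI

uri = URI("git://git.openembedded.org/bitbake;branch=master")
print(uri.scheme)      # git
print(uri.hostname)    # git.openembedded.org
print(uri.path)        # /bitbake
print(uri.params)      # {'branch': 'master'}

# The parsed parts can be modified and the URI rebuilt as a string.
uri.params["branch"] = "1.16"
print(str(uri))        # e.g. git://git.openembedded.org/bitbake;branch=1.16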


@@ -1,53 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for utils.py
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import bb
class VerCmpString(unittest.TestCase):
def test_vercmpstring(self):
result = bb.utils.vercmp_string('1', '2')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('2', '1')
self.assertTrue(result > 0)
result = bb.utils.vercmp_string('1', '1.0')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1', '1.1')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1.1', '1_p2')
self.assertTrue(result < 0)
def test_explode_dep_versions(self):
correctresult = {"foo" : ["= 1.10"]}
result = bb.utils.explode_dep_versions2("foo (= 1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo (=1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( = 1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( =1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( = 1.10 )")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( =1.10 )")
self.assertEqual(result, correctresult)
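The two helpers exercised by the removed utils tests above are useful on their own; typical calls look roughly like this (assuming bb is importable):

import bb.utils

# vercmp_string() returns a negative, zero or positive value, cmp()-style.
print(bb.utils.vercmp_string("1.1", "1_p2"))            # negative: 1.1 sorts first

# explode_dep_versions2() parses a dependency string with version constraints.
print(bb.utils.explode_dep_versions2("foo (= 1.10)"))   # {'foo': ['= 1.10']}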


@@ -1,100 +0,0 @@
# tinfoil: a simple wrapper around cooker for bitbake-based command-line utilities
#
# Copyright (C) 2012 Intel Corporation
# Copyright (C) 2011 Mentor Graphics Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import warnings
import os
import sys
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2
class Tinfoil:
def __init__(self, output=sys.stdout):
# Needed to avoid deprecation warnings with python 2.6
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Set up logging
self.logger = logging.getLogger('BitBake')
console = logging.StreamHandler(output)
bb.msg.addDefaultlogFilter(console)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
self.logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
self.config = TinfoilConfig(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
def register_idle_function(self, function, data):
pass
def parseRecipes(self):
sys.stderr.write("Parsing recipes..")
self.logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
self.logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
def prepare(self, config_only = False):
if not self.cooker_data:
if config_only:
self.cooker.parseConfiguration()
self.cooker_data = self.cooker.status
else:
self.parseRecipes()
class TinfoilConfig(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(TinfoilConfig, self).__getattribute__(attribute)
except AttributeError:
return None
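The deleted tinfoil.py above is the thin cooker wrapper that command-line utilities such as bitbake-layers build on; a minimal usage sketch, assuming the module is importable as bb.tinfoil and is run from an initialised build directory:

import bb.tinfoil

tinfoil = bb.tinfoil.Tinfoil()
# Parse only the configuration files; pass config_only=False to parse all recipes as well.
tinfoil.prepare(config_only=True)

print(tinfoil.config_data.getVar("MACHINE", True))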


@@ -23,13 +23,11 @@
import gtk
import pango
import gobject
import bb.process
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText, HobButton, HobInfoButton
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText
from bb.ui.crumbs.runningbuild import RunningBuildTreeView
from bb.ui.crumbs.runningbuild import BuildFailureTreeView
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hobcolor import HobColors
class BuildConfigurationTreeView(gtk.TreeView):
def __init__ (self):
@@ -98,12 +96,11 @@ class BuildConfigurationTreeView(gtk.TreeView):
             for path in src_config_info.layers:
                 import os, os.path
                 if os.path.exists(path):
-                    branch = bb.process.run('cd %s; git branch | grep "^* " | tr -d "* "' % path)[0]
-                    if branch.startswith("fatal:"):
-                        branch = "(unknown)"
-                    if branch:
-                        branch = branch.strip('\n')
+                    f = os.popen('cd %s; git branch 2>&1 | grep "^* " | tr -d "* "' % path)
+                    if f:
+                        branch = f.readline().lstrip('\n').rstrip('\n')
                         vars.append(self.set_vars("Branch:", branch))
+                        f.close()
                     break
self.set_config_model(vars)
@@ -147,7 +144,7 @@ class BuildDetailsPage (HobPage):
self.scrolled_view_config = gtk.ScrolledWindow ()
self.scrolled_view_config.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_config.add(self.config_tv)
self.notebook.append_page(self.scrolled_view_config, "Build configuration")
self.notebook.append_page(self.scrolled_view_config, gtk.Label("Build configuration"))
self.failure_tv = BuildFailureTreeView()
self.failure_model = self.builder.handler.build.model.failure_model()
@@ -155,19 +152,19 @@ class BuildDetailsPage (HobPage):
self.scrolled_view_failure = gtk.ScrolledWindow ()
self.scrolled_view_failure.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_failure.add(self.failure_tv)
self.notebook.append_page(self.scrolled_view_failure, "Issues")
self.notebook.append_page(self.scrolled_view_failure, gtk.Label("Issues"))
self.build_tv = RunningBuildTreeView(readonly=True, hob=True)
self.build_tv.set_model(self.builder.handler.build.model)
self.scrolled_view_build = gtk.ScrolledWindow ()
self.scrolled_view_build.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_build.add(self.build_tv)
self.notebook.append_page(self.scrolled_view_build, "Log")
self.notebook.append_page(self.scrolled_view_build, gtk.Label("Log"))
self.builder.handler.build.model.connect_after("row-changed", self.scroll_to_present_row, self.scrolled_view_build.get_vadjustment(), self.build_tv)
self.button_box = gtk.HBox(False, 6)
self.back_button = HobAltButton('<< Back')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_start(self.back_button, expand=False, fill=False)
@@ -201,155 +198,6 @@ class BuildDetailsPage (HobPage):
for child in children:
self.remove(child)
def add_build_fail_top_bar(self, actions, log_file=None):
primary_action = "Edit %s" % actions
color = HobColors.ERROR
build_fail_top = gtk.EventBox()
#build_fail_top.set_size_request(-1, 200)
build_fail_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_fail_tab = gtk.Table(14, 46, True)
build_fail_top.add(build_fail_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ERROR_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_fail_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_fail_tab.attach(label, 4, 26, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
# Ensure variable disk_full is defined
if not hasattr(self.builder, 'disk_full'):
self.builder.disk_full = False
if self.builder.disk_full:
markup = "<span size='medium'>There is no disk space left, so Hob cannot finish building your image. Free up some disk space\n"
markup += "and restart the build. Check the \"Issues\" tab for more details</span>"
label.set_markup(markup)
else:
label.set_markup("<span size='medium'>Check the \"Issues\" information for more details</span>")
build_fail_tab.attach(label, 4, 40, 4, 9)
# create button 'Edit packages'
action_button = HobButton(primary_action)
#action_button.set_size_request(-1, 40)
action_button.set_tooltip_text("Edit the %s parameters" % actions)
action_button.connect('clicked', self.failure_primary_action_button_clicked_cb, primary_action)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
attach_pos = (24 if log_file else 14)
file_bug_button = HobAltButton('File a bug')
file_bug_button.set_relief(gtk.RELIEF_HALF)
file_bug_button.set_tooltip_text("Open the Yocto Project bug tracking website")
file_bug_button.connect('clicked', self.failure_activate_file_bug_link_cb)
if not self.builder.disk_full:
build_fail_tab.attach(action_button, 4, 13, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, 14, 23, 9, 12)
build_fail_tab.attach(file_bug_button, attach_pos, attach_pos + 9, 9, 12)
else:
restart_build = HobButton("Restart the build")
restart_build.set_tooltip_text("Restart the build")
restart_build.connect('clicked', self.restart_build_button_clicked_cb)
build_fail_tab.attach(restart_build, 4, 13, 9, 12)
build_fail_tab.attach(action_button, 14, 23, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, attach_pos, attach_pos + 9, 9, 12)
self.builder.disk_full = False
return build_fail_top
def show_fail_page(self, title):
self._remove_all_widget()
self.title = "Hob cannot build your %s" % title
self.build_fail_bar = self.add_build_fail_top_bar(title, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_fail_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.notebook.set_page("Issues")
self.back_button.hide()
def add_build_stop_top_bar(self, action, log_file=None):
color = HobColors.LIGHT_GRAY
build_stop_top = gtk.EventBox()
#build_stop_top.set_size_request(-1, 200)
build_stop_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_stop_top.set_flags(gtk.CAN_DEFAULT)
build_stop_top.grab_default()
build_stop_tab = gtk.Table(11, 46, True)
build_stop_top.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INFO_HOVER_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_stop_tab.attach(label, 4, 26, 0, 6)
action_button = HobButton("Edit %s" % action)
action_button.set_size_request(-1, 40)
if action == "image":
action_button.set_tooltip_text("Edit the image parameters")
elif action == "recipes":
action_button.set_tooltip_text("Edit the included recipes")
elif action == "packages":
action_button.set_tooltip_text("Edit the included packages")
action_button.connect('clicked', self.stop_primary_action_button_clicked_cb, action)
build_stop_tab.attach(action_button, 4, 13, 6, 9)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
build_stop_tab.attach(open_log_button, 14, 23, 6, 9)
attach_pos = (24 if log_file else 14)
build_button = HobAltButton("Build new image")
#build_button.set_size_request(-1, 40)
build_button.set_tooltip_text("Create a new image from scratch")
build_button.connect('clicked', self.new_image_button_clicked_cb)
build_stop_tab.attach(build_button, attach_pos, attach_pos + 9, 6, 9)
return build_stop_top, action_button
def show_stop_page(self, action):
self._remove_all_widget()
self.title = "Build stopped"
self.build_stop_bar, action_button = self.add_build_stop_top_bar(action, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_stop_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.back_button.hide()
return action_button
def show_page(self, step):
self._remove_all_widget()
if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
@@ -370,7 +218,6 @@ class BuildDetailsPage (HobPage):
self.box_group_area.pack_end(self.button_box, expand=False, fill=False)
self.show_all()
self.notebook.set_page("Log")
self.back_button.hide()
self.reset_build_status()
@@ -384,9 +231,6 @@ class BuildDetailsPage (HobPage):
def back_button_clicked_cb(self, button):
self.builder.show_configuration()
def new_image_button_clicked_cb(self, button):
self.builder.reset()
def show_back_button(self):
self.back_button.show()
@@ -407,30 +251,3 @@ class BuildDetailsPage (HobPage):
def show_configurations(self, configurations, params):
self.config_tv.show(configurations, params)
def failure_primary_action_button_clicked_cb(self, button, action):
if "Edit recipes" in action:
self.builder.show_recipes()
elif "Edit packages" in action:
self.builder.show_packages()
elif "Edit image" in action:
self.builder.show_configuration()
def restart_build_button_clicked_cb(self, button):
self.builder.just_bake()
def stop_primary_action_button_clicked_cb(self, button, action):
if "recipes" in action:
self.builder.show_recipes()
elif "packages" in action:
self.builder.show_packages(ask=False)
elif "image" in action:
self.builder.show_configuration()
def open_log_button_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def failure_activate_file_bug_link_cb(self, button):
button.child.emit('activate-link', "http://bugzilla.yoctoproject.org")

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,336 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import hashlib
from bb.ui.crumbs.hobwidget import HobInfoButton, HobButton
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.proxydetailsdialog import ProxyDetailsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUI's
In summary: spacing = 12px, border-width = 6px
"""
class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
def details_cb(self, button, parent, protocol):
dialog = ProxyDetailsDialog(title = protocol.upper() + " Proxy Details",
user = self.configuration.proxies[protocol][1],
passwd = self.configuration.proxies[protocol][2],
parent = parent,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
if response == gtk.RESPONSE_OK:
self.configuration.proxies[protocol][1] = dialog.user
self.configuration.proxies[protocol][2] = dialog.passwd
self.refresh_proxy_components()
dialog.destroy()
def set_save_button(self, button):
self.save_button = button
def rootfs_combo_changed_cb(self, rootfs_combo, all_package_format, check_hbox):
combo_item = self.rootfs_combo.get_active_text()
modified = False
for child in check_hbox.get_children():
if isinstance(child, gtk.CheckButton):
check_hbox.remove(child)
modified = True
for format in all_package_format:
if format != combo_item:
check_button = gtk.CheckButton(format)
check_hbox.pack_start(check_button, expand=False, fill=False)
modified = True
if modified:
check_hbox.remove(self.pkgfmt_info)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
check_hbox.show_all()
def gen_pkgfmt_widget(self, curr_package_format, all_package_format, tooltip_combo="", tooltip_extra=""):
pkgfmt_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Root file system package format")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
rootfs_format = ""
if curr_package_format:
rootfs_format = curr_package_format.split()[0]
rootfs_format_widget, rootfs_combo = self.gen_combo_widget(rootfs_format, all_package_format, tooltip_combo)
pkgfmt_vbox.pack_start(rootfs_format_widget, expand=False, fill=False)
label = self.gen_label_widget("Additional package formats")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
check_hbox = gtk.HBox(False, 12)
pkgfmt_vbox.pack_start(check_hbox, expand=False, fill=False)
for format in all_package_format:
if format != rootfs_format:
check_button = gtk.CheckButton(format)
is_active = (format in curr_package_format.split())
check_button.set_active(is_active)
check_hbox.pack_start(check_button, expand=False, fill=False)
self.pkgfmt_info = HobInfoButton(tooltip_extra, self)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
rootfs_combo.connect("changed", self.rootfs_combo_changed_cb, all_package_format, check_hbox)
pkgfmt_vbox.show_all()
return pkgfmt_vbox, rootfs_combo, check_hbox
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, parent, flags, buttons=None):
super(AdvancedSettingsDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros[:]
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
# class members for internal use
self.distro_combo = None
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirror_text = None
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.toolchain_checkbox = None
self.image_types_checkbuttons = {}
self.md5 = self.config_md5()
self.settings_changed = False
# create visual elements on the dialog
self.save_button = None
self.create_visual_elements()
self.connect("response", self.response_cb)
def _get_sorted_value(self, var):
return " ".join(sorted(str(var).split())) + "\n"
def config_md5(self):
data = ""
data += ("PACKAGE_CLASSES: " + self.configuration.curr_package_format + '\n')
data += ("DISTRO: " + self._get_sorted_value(self.configuration.curr_distro))
data += ("IMAGE_ROOTFS_SIZE: " + self._get_sorted_value(self.configuration.image_rootfs_size))
data += ("IMAGE_EXTRA_SIZE: " + self._get_sorted_value(self.configuration.image_extra_size))
data += ("INCOMPATIBLE_LICENSE: " + self._get_sorted_value(self.configuration.incompat_license))
data += ("SDK_MACHINE: " + self._get_sorted_value(self.configuration.curr_sdk_machine))
data += ("TOOLCHAIN_BUILD: " + self._get_sorted_value(self.configuration.toolchain_build))
data += ("IMAGE_FSTYPES: " + self._get_sorted_value(self.configuration.image_fstypes))
return hashlib.md5(data).hexdigest()
def create_visual_elements(self):
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_image_types_page(), gtk.Label("Image types"))
self.nb.append_page(self.create_output_page(), gtk.Label("Output"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def get_num_checked_image_types(self):
total = 0
for b in self.image_types_checkbuttons.values():
if b.get_active():
total = total + 1
return total
def set_save_button_state(self):
if self.save_button:
self.save_button.set_sensitive(self.get_num_checked_image_types() > 0)
def image_type_checkbutton_clicked_cb(self, button):
self.set_save_button_state()
if self.get_num_checked_image_types() == 0:
# Show an error dialog
lbl = "<b>Select an image type</b>\n\nYou need to select at least one image type."
dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_WARNING)
button = dialog.add_button("OK", gtk.RESPONSE_OK)
HobButton.style_button(button)
response = dialog.run()
dialog.destroy()
def create_image_types_page(self):
main_vbox = gtk.VBox(False, 16)
main_vbox.set_border_width(6)
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
distro_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Distro:")
tooltip = "Selects the Yocto Project distribution you want"
try:
i = self.all_distros.index( "defaultsetup" )
except ValueError:
i = -1
if i != -1:
self.all_distros[ i ] = "Default"
if self.configuration.curr_distro == "defaultsetup":
self.configuration.curr_distro = "Default"
distro_widget, self.distro_combo = self.gen_combo_widget(self.configuration.curr_distro, self.all_distros,"<b>Distro</b>" + "*" + tooltip)
distro_vbox.pack_start(label, expand=False, fill=False)
distro_vbox.pack_start(distro_widget, expand=False, fill=False)
main_vbox.pack_start(distro_vbox, expand=False, fill=False)
rows = (len(self.image_types)+1)/3
table = gtk.Table(rows + 1, 10, True)
advanced_vbox.pack_start(table, expand=False, fill=False)
tooltip = "Image file system types you want."
info = HobInfoButton("<b>Image types</b>" + "*" + tooltip, self)
label = self.gen_label_widget("Image types:")
align = gtk.Alignment(0, 0.5, 0, 0)
table.attach(align, 0, 4, 0, 1)
align.add(label)
table.attach(info, 4, 5, 0, 1)
i = 1
j = 1
for image_type in sorted(self.image_types):
self.image_types_checkbuttons[image_type] = gtk.CheckButton(image_type)
self.image_types_checkbuttons[image_type].connect("toggled", self.image_type_checkbutton_clicked_cb)
article = ""
if image_type.startswith(("a", "e", "i", "o", "u")):
article = "n"
self.image_types_checkbuttons[image_type].set_tooltip_text("Build a%s %s image" % (article, image_type))
table.attach(self.image_types_checkbuttons[image_type], j - 1, j + 3, i, i + 1)
if image_type in self.configuration.image_fstypes.split():
self.image_types_checkbuttons[image_type].set_active(True)
i += 1
if i > rows:
i = 1
j = j + 4
main_vbox.pack_start(advanced_vbox, expand=False, fill=False)
self.set_save_button_state()
return main_vbox
def create_output_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Package format</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
tooltip_combo = "Selects the package format used to generate rootfs."
tooltip_extra = "Selects extra package formats to build"
pkgfmt_widget, self.rootfs_combo, self.check_hbox = self.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats,"<b>Root file system package format</b>" + "*" + tooltip_combo,"<b>Additional package formats</b>" + "*" + tooltip_extra)
sub_vbox.pack_start(pkgfmt_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Image size</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Image basic size (in MB)")
tooltip = "Sets the basic size of your target image.\nThis is the basic size of your target image unless your selected package size exceeds this value or you select \'Image Extra Size\'."
rootfs_size_widget, self.rootfs_size_spinner = self.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 65536,"<b>Image basic size</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(rootfs_size_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Additional free space (in MB)")
tooltip = "Sets the extra free space of your target image.\nBy default, the system reserves 30% of your image size as free space. If your image contains zypper, it brings in 50MB more space. The maximum free space is 64GB."
extra_size_widget, self.extra_size_spinner = self.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 65536,"<b>Additional free space</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(extra_size_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Licensing</span>'), expand=False, fill=False)
self.gplv3_checkbox = gtk.CheckButton("Exclude GPLv3 packages")
self.gplv3_checkbox.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
if "GPLv3" in self.configuration.incompat_license.split():
self.gplv3_checkbox.set_active(True)
else:
self.gplv3_checkbox.set_active(False)
advanced_vbox.pack_start(self.gplv3_checkbox, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Toolchain</span>'), expand=False, fill=False)
sub_hbox = gtk.HBox(False, 6)
advanced_vbox.pack_start(sub_hbox, expand=False, fill=False)
self.toolchain_checkbox = gtk.CheckButton("Build toolchain")
self.toolchain_checkbox.set_tooltip_text("Check this box to build the related toolchain with your image")
self.toolchain_checkbox.set_active(self.configuration.toolchain_build)
sub_hbox.pack_start(self.toolchain_checkbox, expand=False, fill=False)
tooltip = "Selects the host platform for which you want to run the toolchain"
sdk_machine_widget, self.sdk_machine_combo = self.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines,"<b>Build toolchain</b>" + "*" + tooltip)
sub_hbox.pack_start(sdk_machine_widget, expand=False, fill=False)
return advanced_vbox
def response_cb(self, dialog, response_id):
package_format = []
package_format.append(self.rootfs_combo.get_active_text())
for child in self.check_hbox:
if isinstance(child, gtk.CheckButton) and child.get_active():
package_format.append(child.get_label())
self.configuration.curr_package_format = " ".join(package_format)
distro = self.distro_combo.get_active_text()
if distro == "Default":
distro = "defaultsetup"
self.configuration.curr_distro = distro
self.configuration.image_rootfs_size = self.rootfs_size_spinner.get_value_as_int() * 1024
self.configuration.image_extra_size = self.extra_size_spinner.get_value_as_int() * 1024
self.configuration.image_fstypes = ""
for image_type in self.image_types:
if self.image_types_checkbuttons[image_type].get_active():
self.configuration.image_fstypes += (" " + image_type)
self.configuration.image_fstypes = self.configuration.image_fstypes.strip()
if self.gplv3_checkbox.get_active():
if "GPLv3" not in self.configuration.incompat_license.split():
self.configuration.incompat_license += " GPLv3"
else:
if "GPLv3" in self.configuration.incompat_license.split():
incompat = self.configuration.incompat_license.split()
incompat.remove("GPLv3")
self.configuration.incompat_license = " ".join(incompat)
self.configuration.incompat_license = self.configuration.incompat_license.strip()
self.configuration.toolchain_build = self.toolchain_checkbox.get_active()
self.configuration.curr_sdk_machine = self.sdk_machine_combo.get_active_text()
md5 = self.config_md5()
self.settings_changed = (self.md5 != md5)
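
For context, a rough sketch of how a caller might open the dialog removed above. The builder-side objects (configuration, main_window, and the value lists) are created elsewhere in Hob and are only placeholders here, and the module path is assumed since the file name is not shown in this diff:

    import gtk
    from bb.ui.crumbs.hig.advancedsettingsdialog import AdvancedSettingsDialog  # assumed path

    # configuration, image_types, distros, sdk_machines and main_window are
    # placeholders for objects Hob's Builder already holds.
    dialog = AdvancedSettingsDialog(title = "Advanced configuration",
                                    configuration = configuration,
                                    all_image_types = image_types,
                                    all_package_formats = ["rpm", "deb", "ipk"],
                                    all_distros = distros,
                                    all_sdk_machines = sdk_machines,
                                    max_threads = 8,
                                    parent = main_window,
                                    flags = gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
    dialog.set_save_button(dialog.add_button("Save", gtk.RESPONSE_YES))
    if dialog.run() == gtk.RESPONSE_YES and dialog.settings_changed:
        configuration = dialog.configuration    # pick up the edited settings
    dialog.destroy()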

View File

@@ -1,44 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsDialog(gtk.Dialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, title="", parent=None, flags=0, buttons=None):
super(CrumbsDialog, self).__init__(title, parent, flags, buttons)
self.set_property("has-separator", False) # note: deprecated in 2.22
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)

View File

@@ -1,95 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glib
import gtk
from bb.ui.crumbs.hobwidget import HobIconChecker
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsMessageDialog(CrumbsDialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, parent=None, label="", icon=gtk.STOCK_INFO, msg=""):
super(CrumbsMessageDialog, self).__init__("", parent, gtk.DIALOG_MODAL)
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)
first_column = gtk.HBox(spacing=12)
first_column.set_property("border-width", 6)
first_column.show()
self.vbox.add(first_column)
self.icon = gtk.Image()
# We have our own Info icon which should be used in preference to the stock icon
self.icon_chk = HobIconChecker()
self.icon.set_from_stock(self.icon_chk.check_stock_icon(icon), gtk.ICON_SIZE_DIALOG)
self.icon.set_property("yalign", 0.00)
self.icon.show()
first_column.pack_start(self.icon, expand=False, fill=True, padding=0)
if 0 <= len(msg) < 200:
lbl = label + "%s" % glib.markup_escape_text(msg)
self.label_short = gtk.Label()
self.label_short.set_use_markup(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup(lbl)
self.label_short.set_property("yalign", 0.00)
self.label_short.show()
first_column.add(self.label_short)
else:
second_row = gtk.VBox(spacing=12)
second_row.set_property("border-width", 6)
self.label_long = gtk.Label()
self.label_long.set_use_markup(True)
self.label_long.set_line_wrap(True)
self.label_long.set_markup(label)
self.label_long.set_alignment(0.0, 0.0)
second_row.pack_start(self.label_long, expand=False, fill=False, padding=0)
self.label_long.show()
self.textWindow = gtk.ScrolledWindow()
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.msgView = gtk.TextView()
self.msgView.set_editable(False)
self.msgView.set_wrap_mode(gtk.WRAP_WORD)
self.msgView.set_cursor_visible(False)
self.msgView.set_size_request(300, 300)
self.buf = gtk.TextBuffer()
self.buf.set_text(msg)
self.msgView.set_buffer(self.buf)
self.textWindow.add(self.msgView)
self.msgView.show()
second_row.add(self.textWindow)
self.textWindow.show()
first_column.add(second_row)
second_row.show()
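
A short sketch of the two display modes of the message dialog removed above: messages under 200 characters go into a wrapped label next to the icon, longer ones into a scrolled text view. The import path comes from this diff; the message text is invented:

    import gtk
    from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog

    # Short message (under 200 characters): rendered inline next to the icon.
    dialog = CrumbsMessageDialog(None, "<b>Build failed</b>\n", gtk.STOCK_DIALOG_WARNING,
                                 "No recipes matched the requested target.")
    dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
    dialog.run()
    dialog.destroy()
    # A msg of 200 characters or more would instead take the scrolled,
    # read-only gtk.TextView branch.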

View File

@@ -1,215 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glob
import gtk
import gobject
import os
import re
import shlex
import subprocess
import tempfile
from bb.ui.crumbs.hobwidget import hic, HobButton
from bb.ui.crumbs.progressbar import HobProgressBar
import bb.ui.crumbs.utils
import bb.process
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class DeployImageDialog (CrumbsDialog):
__dummy_usb__ = "--select a usb drive--"
def __init__(self, title, image_path, parent, flags, buttons=None, standalone=False):
super(DeployImageDialog, self).__init__(title, parent, flags, buttons)
self.image_path = image_path
self.standalone = standalone
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
self.set_size_request(600, 400)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>The image to be written into usb drive:</span>"
label.set_markup(markup)
self.vbox.pack_start(label, expand=False, fill=False, padding=2)
table = gtk.Table(2, 10, False)
table.set_col_spacings(5)
table.set_row_spacings(5)
self.vbox.pack_start(table, expand=True, fill=True)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
tv = gtk.TextView()
tv.set_editable(False)
tv.set_wrap_mode(gtk.WRAP_WORD)
tv.set_cursor_visible(False)
self.buf = gtk.TextBuffer()
self.buf.set_text(self.image_path)
tv.set_buffer(self.buf)
scroll.add(tv)
table.attach(scroll, 0, 10, 0, 1)
# There are two ways to use DeployImageDialog.
# One is for HOB to open it when the 'Deploy Image' button is clicked;
# the other is for a standalone script to open it.
# The following block of code handles the latter case: it adds a 'Select Image'
# button and emits a signal when that button is clicked.
if self.standalone:
gobject.signal_new("select_image_clicked", self, gobject.SIGNAL_RUN_FIRST,
gobject.TYPE_NONE, ())
icon = gtk.Image()
pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_IMAGES_DISPLAY_FILE)
icon.set_from_pixbuf(pix_buffer)
button = gtk.Button("Select Image")
button.set_image(icon)
#button.set_size_request(140, 50)
table.attach(button, 9, 10, 1, 2, gtk.FILL, 0, 0, 0)
button.connect("clicked", self.select_image_button_clicked_cb)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, expand=False, fill=False, padding=10)
self.usb_desc = gtk.Label()
self.usb_desc.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
self.usb_combo = gtk.combo_box_new_text()
self.usb_combo.connect("changed", self.usb_combo_changed_cb)
model = self.usb_combo.get_model()
model.clear()
self.usb_combo.append_text(self.__dummy_usb__)
for usb in self.find_all_usb_devices():
self.usb_combo.append_text("/dev/" + usb)
self.usb_combo.set_active(0)
self.vbox.pack_start(self.usb_combo, expand=False, fill=False)
self.vbox.pack_start(self.usb_desc, expand=False, fill=False, padding=2)
self.progress_bar = HobProgressBar()
self.vbox.pack_start(self.progress_bar, expand=False, fill=False)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, expand=False, fill=True, padding=10)
self.vbox.show_all()
self.progress_bar.hide()
def set_image_text_buffer(self, image_path):
self.buf.set_text(image_path)
def set_image_path(self, image_path):
self.image_path = image_path
def popen_read(self, cmd):
tmpout, errors = bb.process.run("%s" % cmd)
return tmpout.strip()
def find_all_usb_devices(self):
usb_devs = [ os.readlink(u)
for u in glob.glob('/dev/disk/by-id/usb*')
if not re.search(r'part\d+', u) ]
return [ '%s' % u[u.rfind('/')+1:] for u in usb_devs ]
def get_usb_info(self, dev):
return "%s %s" % \
(self.popen_read('cat /sys/class/block/%s/device/vendor' % dev),
self.popen_read('cat /sys/class/block/%s/device/model' % dev))
def select_image_button_clicked_cb(self, button):
self.emit('select_image_clicked')
def usb_combo_changed_cb(self, usb_combo):
combo_item = self.usb_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_usb__:
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
else:
markup = "<span font_desc='12'>" + self.get_usb_info(combo_item.lstrip("/dev/")) + "</span>"
self.usb_desc.set_markup(markup)
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_YES:
lbl = ''
combo_item = self.usb_combo.get_active_text()
if combo_item and combo_item != self.__dummy_usb__ and self.image_path:
cmdline = bb.ui.crumbs.utils.which_terminal()
if cmdline:
tmpfile = tempfile.NamedTemporaryFile()
cmdline += "\"sudo dd if=" + self.image_path + \
" of=" + combo_item + "; echo $? > " + tmpfile.name + "\""
subprocess.call(shlex.split(cmdline))
if int(tmpfile.readline().strip()) == 0:
lbl = "<b>Deploy image successfully.</b>"
else:
lbl = "<b>Failed to deploy image.</b>\nPlease check image <b>%s</b> exists and USB device <b>%s</b> is writable." % (self.image_path, combo_item)
tmpfile.close()
else:
if not self.image_path:
lbl = "<b>No selection made.</b>\nYou have not selected an image to deploy."
else:
lbl = "<b>No selection made.</b>\nYou have not selected a USB device."
if len(lbl):
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
def update_progress_bar(self, title, fraction, status=None):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def write_file(self, ifile, ofile):
self.progress_bar.reset()
self.progress_bar.show()
f_from = os.open(ifile, os.O_RDONLY)
f_to = os.open(ofile, os.O_WRONLY)
total_size = os.stat(ifile).st_size
written_size = 0
while True:
buf = os.read(f_from, 1024*1024)
if not buf:
break
os.write(f_to, buf)
written_size += len(buf)
self.update_progress_bar("Writing to usb:", written_size * 1.0/total_size)
self.update_progress_bar("Writing completed:", 1.0)
os.close(f_from)
os.close(f_to)
self.progress_bar.hide()
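
A sketch of standalone use of the deploy dialog removed above, roughly what a USB image writer script would do. The module path and the image path are assumptions; responding Yes runs the sudo dd command shown in response_cb:

    import gtk
    from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog  # assumed path

    # standalone=True adds the 'Select Image' button and the
    # select_image_clicked signal used by the standalone script.
    dialog = DeployImageDialog("USB Image Writer", "/tmp/core-image-minimal.hddimg",
                               None, gtk.DIALOG_MODAL, standalone=True)
    dialog.add_button("Close", gtk.RESPONSE_NO)
    dialog.add_button("Write USB image", gtk.RESPONSE_YES)
    dialog.connect("select_image_clicked",
                   lambda d: d.set_image_path("/tmp/another-image.hddimg"))
    dialog.run()
    dialog.destroy()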

View File

@@ -1,172 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import os
from bb.ui.crumbs.hobwidget import HobViewTable, HobInfoButton, HobButton, HobAltButton
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class ImageSelectionDialog (CrumbsDialog):
__columns__ = [{
'col_name' : 'Image name',
'col_id' : 0,
'col_style': 'text',
'col_min' : 400,
'col_max' : 400
}, {
'col_name' : 'Select',
'col_id' : 1,
'col_style': 'radio toggle',
'col_min' : 160,
'col_max' : 160
}]
def __init__(self, image_folder, image_types, title, parent, flags, buttons=None, image_extension = {}):
super(ImageSelectionDialog, self).__init__(title, parent, flags, buttons)
self.connect("response", self.response_cb)
self.image_folder = image_folder
self.image_types = image_types
self.image_list = []
self.image_names = []
self.image_extension = image_extension
# create visual elements on the dialog
self.create_visual_elements()
self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.fill_image_store()
def create_visual_elements(self):
hbox = gtk.HBox(False, 6)
self.vbox.pack_start(hbox, expand=False, fill=False)
entry = gtk.Entry()
entry.set_text(self.image_folder)
table = gtk.Table(1, 10, True)
table.set_size_request(560, -1)
hbox.pack_start(table, expand=False, fill=False)
table.attach(entry, 0, 9, 0, 1)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_OPEN, gtk.ICON_SIZE_BUTTON)
open_button = gtk.Button()
open_button.set_image(image)
open_button.connect("clicked", self.select_path_cb, self, entry)
table.attach(open_button, 9, 10, 0, 1)
self.image_table = HobViewTable(self.__columns__, "Images")
self.image_table.set_size_request(-1, 300)
self.image_table.connect("toggled", self.toggled_cb)
self.image_table.connect_group_selection(self.table_selected_cb)
self.image_table.connect("row-activated", self.row_actived_cb)
self.vbox.pack_start(self.image_table, expand=True, fill=True)
self.show_all()
def change_image_cb(self, model, path, columnid):
if not model:
return
iter = model.get_iter_first()
while iter:
rowpath = model.get_path(iter)
model[rowpath][columnid] = False
iter = model.iter_next(iter)
model[path][columnid] = True
def toggled_cb(self, table, cell, path, columnid, tree):
model = tree.get_model()
self.change_image_cb(model, path, columnid)
def table_selected_cb(self, selection):
model, paths = selection.get_selected_rows()
if paths:
self.change_image_cb(model, paths[0], 1)
def row_actived_cb(self, tab, model, path):
self.change_image_cb(model, path, 1)
self.emit('response', gtk.RESPONSE_YES)
def select_path_cb(self, action, parent, entry):
dialog = gtk.FileChooserDialog("", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER)
text = entry.get_text()
dialog.set_current_folder(text if len(text) > 0 else os.getcwd())
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
entry.set_text(path)
self.image_folder = path
self.fill_image_store()
dialog.destroy()
def fill_image_store(self):
self.image_list = []
self.image_store.clear()
imageset = set()
for root, dirs, files in os.walk(self.image_folder):
# ignore the sub directories
dirs[:] = []
for f in files:
for image_type in self.image_types:
if image_type in self.image_extension:
real_types = self.image_extension[image_type]
else:
real_types = [image_type]
for real_image_type in real_types:
if f.endswith('.' + real_image_type):
imageset.add(f.rsplit('.' + real_image_type)[0].rsplit('.rootfs')[0])
self.image_list.append(f)
for image in imageset:
self.image_store.set(self.image_store.append(), 0, image, 1, False)
self.image_table.set_model(self.image_store)
def response_cb(self, dialog, response_id):
self.image_names = []
if response_id == gtk.RESPONSE_YES:
iter = self.image_store.get_iter_first()
while iter:
path = self.image_store.get_path(iter)
if self.image_store[path][1]:
for f in self.image_list:
if f.startswith(self.image_store[path][0] + '.'):
self.image_names.append(f)
break
iter = self.image_store.iter_next(iter)
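
A usage sketch for the image selection dialog removed above. The folder, image types and the image_extension mapping are illustrative, and the module path is assumed:

    import gtk
    from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog  # assumed path

    # image_extension maps an image type to the file extensions it may use on disk.
    dialog = ImageSelectionDialog("/tmp/deploy/images", ["ext3", "live"],
                                  "Select from my image files", None, gtk.DIALOG_MODAL,
                                  image_extension = {"live": ["hddimg", "iso"]})
    dialog.add_button("Cancel", gtk.RESPONSE_NO)
    dialog.add_button("Open", gtk.RESPONSE_YES)
    if dialog.run() == gtk.RESPONSE_YES:
        print dialog.image_names    # the files matching the selected image
    dialog.destroy()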

View File

@@ -1,297 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import os
import tempfile
from bb.ui.crumbs.hobwidget import hic, HobButton, HobAltButton
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class CellRendererPixbufActivatable(gtk.CellRendererPixbuf):
"""
A custom CellRenderer implementation which is activatable
so that we can handle user clicks
"""
__gsignals__ = { 'clicked' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)), }
def __init__(self):
gtk.CellRendererPixbuf.__init__(self)
self.set_property('mode', gtk.CELL_RENDERER_MODE_ACTIVATABLE)
self.set_property('follow-state', True)
"""
Respond to a user click on a cell
"""
def do_activate(self, event, widget, path, background_area, cell_area, flags):
self.emit('clicked', path)
#
# LayerSelectionDialog
#
class LayerSelectionDialog (CrumbsDialog):
TARGETS = [
("MY_TREE_MODEL_ROW", gtk.TARGET_SAME_WIDGET, 0),
("text/plain", 0, 1),
("TEXT", 0, 2),
("STRING", 0, 3),
]
def gen_label_widget(self, content):
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup(content)
label.show()
return label
def layer_widget_toggled_cb(self, cell, path, layer_store):
name = layer_store[path][0]
toggle = not layer_store[path][1]
layer_store[path][1] = toggle
def layer_widget_add_clicked_cb(self, action, layer_store, parent):
dialog = gtk.FileChooserDialog("Add new layer", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER)
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobButton.style_button(button)
label = gtk.Label("Select the layer you wish to add")
label.show()
dialog.set_extra_widget(label)
response = dialog.run()
path = dialog.get_filename()
dialog.destroy()
lbl = "<b>Error</b>\nUnable to load layer <i>%s</i> because " % path
if response == gtk.RESPONSE_YES:
import os
import os.path
layers = []
it = layer_store.get_iter_first()
while it:
layers.append(layer_store.get_value(it, 0))
it = layer_store.iter_next(it)
if not path:
lbl += "it is an invalid path."
elif not os.path.exists(path+"/conf/layer.conf"):
lbl += "there is no layer.conf inside the directory."
elif path in layers:
lbl += "it is already in loaded layers."
else:
layer_store.append([path])
return
dialog = CrumbsMessageDialog(parent, lbl)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
dialog.destroy()
def layer_widget_del_clicked_cb(self, action, tree_selection, layer_store):
model, iter = tree_selection.get_selected()
if iter:
layer_store.remove(iter)
def gen_layer_widget(self, layers, layers_avail, window, tooltip=""):
hbox = gtk.HBox(False, 6)
layer_tv = gtk.TreeView()
layer_tv.set_rules_hint(True)
layer_tv.set_headers_visible(False)
tree_selection = layer_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
# Enable drag and drop of rows, including row reordering
dnd_internal_target = ''
dnd_targets = [(dnd_internal_target, gtk.TARGET_SAME_WIDGET, 0)]
layer_tv.enable_model_drag_source( gtk.gdk.BUTTON1_MASK,
dnd_targets,
gtk.gdk.ACTION_MOVE)
layer_tv.enable_model_drag_dest(dnd_targets,
gtk.gdk.ACTION_MOVE)
layer_tv.connect("drag_data_get", self.drag_data_get_cb)
layer_tv.connect("drag_data_received", self.drag_data_received_cb)
col0= gtk.TreeViewColumn('Path')
cell0 = gtk.CellRendererText()
cell0.set_padding(5,2)
col0.pack_start(cell0, True)
col0.set_cell_data_func(cell0, self.draw_layer_path_cb)
layer_tv.append_column(col0)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(layer_tv)
table_layer = gtk.Table(2, 10, False)
hbox.pack_start(table_layer, expand=True, fill=True)
table_layer.attach(scroll, 0, 10, 0, 1)
layer_store = gtk.ListStore(gobject.TYPE_STRING)
for layer in layers:
layer_store.append([layer])
col1 = gtk.TreeViewColumn('Enabled')
layer_tv.append_column(col1)
cell1 = CellRendererPixbufActivatable()
cell1.set_fixed_size(-1,35)
cell1.connect("clicked", self.del_cell_clicked_cb, layer_store)
col1.pack_start(cell1, True)
col1.set_cell_data_func(cell1, self.draw_delete_button_cb, layer_tv)
add_button = gtk.Button()
add_button.set_relief(gtk.RELIEF_NONE)
box = gtk.HBox(False, 6)
box.show()
add_button.add(box)
add_button.connect("enter-notify-event", self.add_hover_cb)
add_button.connect("leave-notify-event", self.add_leave_cb)
self.im = gtk.Image()
self.im.set_from_file(hic.ICON_INDI_ADD_FILE)
self.im.show()
box.pack_start(self.im, expand=False, fill=False, padding=6)
lbl = gtk.Label("Add layer")
lbl.set_alignment(0.0, 0.5)
lbl.show()
box.pack_start(lbl, expand=True, fill=True, padding=6)
add_button.connect("clicked", self.layer_widget_add_clicked_cb, layer_store, window)
table_layer.attach(add_button, 0, 10, 1, 2, gtk.EXPAND | gtk.FILL, 0, 0, 6)
layer_tv.set_model(layer_store)
hbox.show_all()
return hbox, layer_store
def drag_data_get_cb(self, treeview, context, selection, target_id, etime):
treeselection = treeview.get_selection()
model, iter = treeselection.get_selected()
data = model.get_value(iter, 0)
selection.set(selection.target, 8, data)
def drag_data_received_cb(self, treeview, context, x, y, selection, info, etime):
model = treeview.get_model()
data = selection.data
drop_info = treeview.get_dest_row_at_pos(x, y)
if drop_info:
path, position = drop_info
iter = model.get_iter(path)
if (position == gtk.TREE_VIEW_DROP_BEFORE or position == gtk.TREE_VIEW_DROP_INTO_OR_BEFORE):
model.insert_before(iter, [data])
else:
model.insert_after(iter, [data])
else:
model.append([data])
if context.action == gtk.gdk.ACTION_MOVE:
context.finish(True, True, etime)
return
def add_hover_cb(self, button, event):
self.im.set_from_file(hic.ICON_INDI_ADD_HOVER_FILE)
def add_leave_cb(self, button, event):
self.im.set_from_file(hic.ICON_INDI_ADD_FILE)
def __init__(self, title, layers, layers_non_removable, all_layers, parent, flags, buttons=None):
super(LayerSelectionDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
self.layers = layers
self.layers_non_removable = layers_non_removable
self.all_layers = all_layers
self.layers_changed = False
# icon for remove button in TreeView
im = gtk.Image()
im.set_from_file(hic.ICON_INDI_REMOVE_FILE)
self.rem_icon = im.get_pixbuf()
# class members for internal use
self.layer_store = None
# create visual elements on the dialog
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
layer_widget, self.layer_store = self.gen_layer_widget(self.layers, self.all_layers, self, None)
layer_widget.set_size_request(450, 250)
self.vbox.pack_start(layer_widget, expand=True, fill=True)
self.show_all()
def response_cb(self, dialog, response_id):
model = self.layer_store
it = model.get_iter_first()
layers = []
while it:
layers.append(model.get_value(it, 0))
it = model.iter_next(it)
self.layers_changed = (self.layers != layers)
self.layers = layers
"""
A custom cell_data_func to draw a delete 'button' in the TreeView for layers
other than the meta layer. The deletion of which is prevented so that the
user can't shoot themselves in the foot too badly.
"""
def draw_delete_button_cb(self, col, cell, model, it, tv):
path = model.get_value(it, 0)
if path in self.layers_non_removable:
cell.set_sensitive(False)
cell.set_property('pixbuf', None)
cell.set_property('mode', gtk.CELL_RENDERER_MODE_INERT)
else:
cell.set_property('pixbuf', self.rem_icon)
cell.set_sensitive(True)
cell.set_property('mode', gtk.CELL_RENDERER_MODE_ACTIVATABLE)
return True
"""
A custom cell_data_func to write an extra message into the layer path cell
for the meta layer. We should inform the user that they can't remove it for
their own safety.
"""
def draw_layer_path_cb(self, col, cell, model, it):
path = model.get_value(it, 0)
if path in self.layers_non_removable:
cell.set_property('markup', "<b>It cannot be removed</b>\n%s" % path)
else:
cell.set_property('text', path)
def del_cell_clicked_cb(self, cell, path, model):
it = model.get_iter_from_string(path)
model.remove(it)
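
A usage sketch for the layer selection dialog removed above; the layer paths are illustrative and the import path is the one used elsewhere in this diff:

    import gtk
    from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog

    # Layers listed in layers_non_removable keep their delete button disabled.
    dialog = LayerSelectionDialog("Layers", ["/srv/poky/meta", "/srv/poky/meta-yocto"],
                                  ["/srv/poky/meta"], [], None, gtk.DIALOG_MODAL)
    dialog.add_button("Cancel", gtk.RESPONSE_NO)
    dialog.add_button("OK", gtk.RESPONSE_YES)
    if dialog.run() == gtk.RESPONSE_YES and dialog.layers_changed:
        print dialog.layers         # the edited, reordered layer list
    dialog.destroy()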

View File

@@ -1,163 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Cristiana Voicu <cristiana.voicu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
from bb.ui.crumbs.hobwidget import HobAltButton
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
#
# ParsingWarningsDialog
#
class ParsingWarningsDialog (CrumbsDialog):
def __init__(self, title, warnings, parent, flags, buttons=None):
super(ParsingWarningsDialog, self).__init__(title, parent, flags, buttons)
self.warnings = warnings
self.warning_on = 0
self.warn_nb = len(warnings)
# create visual elements on the dialog
self.create_visual_elements()
def cancel_button_cb(self, button):
self.destroy()
def previous_button_cb(self, button):
self.warning_on = self.warning_on - 1
self.refresh_components()
def next_button_cb(self, button):
self.warning_on = self.warning_on + 1
self.refresh_components()
def refresh_components(self):
lbl = self.warnings[self.warning_on]
# When the warning text is 400 characters or longer, show it in a scrolled view
if 0<= len(lbl) < 400:
self.warning_label.set_size_request(320, 230)
self.warning_label.set_use_markup(True)
self.warning_label.set_line_wrap(True)
self.warning_label.set_markup(lbl)
self.warning_label.set_property("yalign", 0.00)
else:
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.msgView = gtk.TextView()
self.msgView.set_editable(False)
self.msgView.set_wrap_mode(gtk.WRAP_WORD)
self.msgView.set_cursor_visible(False)
self.msgView.set_size_request(320, 230)
self.buf = gtk.TextBuffer()
self.buf.set_text(lbl)
self.msgView.set_buffer(self.buf)
self.textWindow.add(self.msgView)
self.msgView.show()
if self.warning_on==0:
self.previous_button.set_sensitive(False)
else:
self.previous_button.set_sensitive(True)
if self.warning_on==self.warn_nb-1:
self.next_button.set_sensitive(False)
else:
self.next_button.set_sensitive(True)
if self.warn_nb>1:
self.heading = "Warning " + str(self.warning_on + 1) + " of " + str(self.warn_nb)
self.heading_label.set_markup('<span weight="bold">%s</span>' % self.heading)
else:
self.heading = "Warning"
self.heading_label.set_markup('<span weight="bold">%s</span>' % self.heading)
self.show_all()
if 0<= len(lbl) < 400:
self.textWindow.hide()
else:
self.warning_label.hide()
def create_visual_elements(self):
self.set_size_request(350, 350)
self.heading_label = gtk.Label()
self.heading_label.set_alignment(0, 0)
self.warning_label = gtk.Label()
self.warning_label.set_selectable(True)
self.warning_label.set_alignment(0, 0)
self.textWindow = gtk.ScrolledWindow()
table = gtk.Table(1, 10, False)
cancel_button = gtk.Button()
cancel_button.set_label("Close")
cancel_button.connect("clicked", self.cancel_button_cb)
cancel_button.set_size_request(110, 30)
self.previous_button = gtk.Button()
image1 = gtk.image_new_from_stock(gtk.STOCK_GO_BACK, gtk.ICON_SIZE_BUTTON)
image1.show()
box = gtk.HBox(False, 6)
box.show()
self.previous_button.add(box)
lbl = gtk.Label("Previous")
lbl.show()
box.pack_start(image1, expand=False, fill=False, padding=3)
box.pack_start(lbl, expand=True, fill=True, padding=3)
self.previous_button.connect("clicked", self.previous_button_cb)
self.previous_button.set_size_request(110, 30)
self.next_button = gtk.Button()
image2 = gtk.image_new_from_stock(gtk.STOCK_GO_FORWARD, gtk.ICON_SIZE_BUTTON)
image2.show()
box = gtk.HBox(False, 6)
box.show()
self.next_button.add(box)
lbl = gtk.Label("Next")
lbl.show()
box.pack_start(lbl, expand=True, fill=True, padding=3)
box.pack_start(image2, expand=False, fill=False, padding=3)
self.next_button.connect("clicked", self.next_button_cb)
self.next_button.set_size_request(110, 30)
# When there is more than one warning, we need "Previous" and "Next" buttons
if self.warn_nb>1:
self.vbox.pack_start(self.heading_label, expand=False, fill=False)
self.vbox.pack_start(self.warning_label, expand=False, fill=False)
self.vbox.pack_start(self.textWindow, expand=False, fill=False)
table.attach(cancel_button, 6, 7, 0, 1, xoptions=gtk.SHRINK)
table.attach(self.previous_button, 7, 8, 0, 1, xoptions=gtk.SHRINK)
table.attach(self.next_button, 8, 9, 0, 1, xoptions=gtk.SHRINK)
self.vbox.pack_end(table, expand=False, fill=False)
else:
self.vbox.pack_start(self.heading_label, expand=False, fill=False)
self.vbox.pack_start(self.warning_label, expand=False, fill=False)
self.vbox.pack_start(self.textWindow, expand=False, fill=False)
cancel_button = self.add_button("Close", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(cancel_button)
self.refresh_components()
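
A usage sketch for the parsing warnings dialog removed above; the warning strings are invented and the module path is assumed:

    import gtk
    from bb.ui.crumbs.hig.parsingwarningsdialog import ParsingWarningsDialog  # assumed path

    warnings = ["Recipe foo_1.0.bb uses a deprecated variable",
                "bbappend for bar found but no matching recipe"]
    # With more than one warning the dialog shows Previous/Next paging buttons.
    dialog = ParsingWarningsDialog("Warnings", warnings, None, gtk.DIALOG_MODAL)
    dialog.run()
    dialog.destroy()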

View File

@@ -1,437 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2013 Intel Corporation
#
# Authored by Andrei Dinu <andrei.adrianx.dinu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import string
import gtk
import gobject
import os
import tempfile
import glib
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class PropertyDialog(CrumbsDialog):
def __init__(self, title, parent, information, flags, buttons=None):
super(PropertyDialog, self).__init__(title, parent, flags, buttons)
self.properties = information
if len(self.properties) == 10:
self.create_recipe_visual_elements()
elif len(self.properties) == 5:
self.create_package_visual_elements()
else:
self.create_information_visual_elements()
def create_information_visual_elements(self):
HOB_ICON_BASE_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), ("icons/"))
ICON_PACKAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('info/info_display.png'))
self.set_resizable(False)
self.table = gtk.Table(1,1,False)
self.table.set_row_spacings(0)
self.table.set_col_spacings(0)
self.image = gtk.Image()
self.image.set_from_file(ICON_PACKAGES_DISPLAY_FILE)
self.image.set_property("xalign",0)
#self.vbox.add(self.image)
image_info = self.properties.split("*")[0]
info = self.properties.split("*")[1]
vbox = gtk.VBox(True, spacing=30)
self.label_short = gtk.Label()
self.label_short.set_line_wrap(False)
self.label_short.set_markup(image_info)
self.label_short.set_property("xalign", 0)
self.info_label = gtk.Label()
self.info_label.set_line_wrap(True)
self.info_label.set_markup(info)
self.info_label.set_property("yalign", 0.5)
self.table.attach(self.image, 0,1,0,1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=5,ypadding=5)
self.table.attach(self.label_short, 0,1,0,1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=40,ypadding=5)
self.table.attach(self.info_label, 0,1,1,2, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=40,ypadding=10)
self.vbox.add(self.table)
def treeViewTooltip( self, widget, e, tooltips, cell, emptyText="" ):
try:
(path,col,x,y) = widget.get_path_at_pos( int(e.x), int(e.y) )
it = widget.get_model().get_iter(path)
value = widget.get_model().get_value(it,cell)
if value in self.tooltip_items:
tooltips.set_tip(widget, self.tooltip_items[value])
tooltips.enable()
else:
tooltips.set_tip(widget, emptyText)
except:
tooltips.set_tip(widget, emptyText)
def create_package_visual_elements(self):
name = self.properties['name']
binb = self.properties['binb']
size = self.properties['size']
recipe = self.properties['recipe']
file_list = self.properties['files_list']
file_list = file_list.strip("{}'")
files_temp = ''
paths_temp = ''
files_binb = []
paths_binb = []
self.tooltip_items = {}
self.set_resizable(False)
#cleaning out the recipe variable
recipe = recipe.split("+")[0]
vbox = gtk.VBox(True,spacing = 0)
###################################### NAME ROW + COL #################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Name: </span>" + name)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
###################################### SIZE ROW + COL ######################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Size: </span>" + size)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
##################################### RECIPE ROW + COL #########################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Recipe: </span>" + recipe)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
##################################### BINB ROW + COL #######################################
if binb != '':
self.label_short = gtk.Label()
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Brought in by: </span>")
self.label_short.set_property("xalign", 0)
self.label_info = gtk.Label()
self.label_info.set_size_request(300,-1)
self.label_info.set_selectable(True)
self.label_info.set_line_wrap(True)
self.label_info.set_markup(binb)
self.label_info.set_property("xalign", 0)
self.vbox.add(self.label_short)
self.vbox.add(self.label_info)
#################################### FILES BROUGHT BY PACKAGES ###################################
if file_list != '':
self.textWindow = gtk.ScrolledWindow()
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.textWindow.set_size_request(100, 170)
sstatemirrors_store = gtk.ListStore(str)
self.sstatemirrors_tv = gtk.TreeView()
self.sstatemirrors_tv.set_rules_hint(True)
self.sstatemirrors_tv.set_headers_visible(True)
self.textWindow.add(self.sstatemirrors_tv)
self.cell1 = gtk.CellRendererText()
col1 = gtk.TreeViewColumn('Package files', self.cell1)
col1.set_cell_data_func(self.cell1, self.regex_field)
self.sstatemirrors_tv.append_column(col1)
for items in file_list.split(']'):
if len(items) > 1:
paths_temp = items.split(":")[0]
paths_binb.append(paths_temp.strip(" ,'"))
files_temp = items.split(":")[1]
files_binb.append(files_temp.strip(" ['"))
unsorted_list = []
for items in range(len(paths_binb)):
if len(files_binb[items]) > 1:
for aduse in (files_binb[items].split(",")):
unsorted_list.append(paths_binb[items].split(name)[len(paths_binb[items].split(name))-1] + '/' + aduse.strip(" '"))
unsorted_list.sort()
for items in unsorted_list:
temp = items
while len(items) > 35:
items = items[:len(items)/2] + "" + items[len(items)/2+1:]
if len(items) == 35:
items = items[:len(items)/2] + "..." + items[len(items)/2+3:]
self.tooltip_items[items] = temp
sstatemirrors_store.append([str(items)])
self.sstatemirrors_tv.set_model(sstatemirrors_store)
tips = gtk.Tooltips()
tips.set_tip(self.sstatemirrors_tv, "")
self.sstatemirrors_tv.connect("motion-notify-event", self.treeViewTooltip, tips, 0)
self.sstatemirrors_tv.set_events(gtk.gdk.POINTER_MOTION_MASK)
self.vbox.add(self.textWindow)
self.vbox.show_all()
def regex_field(self, column, cell, model, iter):
cell.set_property('text', model.get_value(iter, 0))
return
def create_recipe_visual_elements(self):
summary = self.properties['summary']
name = self.properties['name']
version = self.properties['version']
revision = self.properties['revision']
binb = self.properties['binb']
group = self.properties['group']
license = self.properties['license']
homepage = self.properties['homepage']
bugtracker = self.properties['bugtracker']
description = self.properties['description']
self.set_resizable(False)
#cleaning out the version variable and also the summary
version = version.split(":")[1]
if len(version) > 30:
version = version.split("+")[0]
else:
version = version.split("-")[0]
license = license.replace("&" , "and")
if (homepage == ''):
homepage = 'unknown'
if (bugtracker == ''):
bugtracker = 'unknown'
summary = summary.split("+")[0]
#calculating the rows needed for the table
binb_items_count = len(binb.split(','))
binb_items = binb.split(',')
vbox = gtk.VBox(True,spacing = 0)
######################################## SUMMARY LABEL #########################################
if summary != '':
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<b>" + summary + "</b>")
self.label_short.set_property("xalign", 0)
self.vbox.pack_start(self.label_short, expand=False, fill=False, padding=0)
########################################## NAME ROW + COL #######################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Name: </span>" + name)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
####################################### VERSION ROW + COL ####################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Version: </span>" + version)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
##################################### REVISION ROW + COL #####################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_line_wrap(True)
self.label_short.set_selectable(True)
self.label_short.set_markup("<span weight=\"bold\">Revision: </span>" + revision)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
################################## GROUP ROW + COL ############################################
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Group: </span>" + group)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
################################# HOMEPAGE ROW + COL ############################################
if homepage != 'unknown':
self.label_info = gtk.Label()
self.label_info.set_selectable(True)
self.label_info.set_line_wrap(True)
if len(homepage) > 35:
self.label_info.set_markup("<a href=\"" + homepage + "\">" + homepage[0:35] + "..." + "</a>")
else:
self.label_info.set_markup("<a href=\"" + homepage + "\">" + homepage[0:60] + "</a>")
self.label_info.set_property("xalign", 0)
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<b>Homepage: </b>")
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
self.vbox.add(self.label_info)
################################# BUGTRACKER ROW + COL ###########################################
if bugtracker != 'unknown':
self.label_info = gtk.Label()
self.label_info.set_selectable(True)
self.label_info.set_line_wrap(True)
if len(bugtracker) > 35:
self.label_info.set_markup("<a href=\"" + bugtracker + "\">" + bugtracker[0:35] + "..." + "</a>")
else:
self.label_info.set_markup("<a href=\"" + bugtracker + "\">" + bugtracker[0:60] + "</a>")
self.label_info.set_property("xalign", 0)
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<b>Bugtracker: </b>")
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
self.vbox.add(self.label_info)
################################# LICENSE ROW + COL ############################################
self.label_info = gtk.Label()
self.label_info.set_size_request(300,-1)
self.label_info.set_selectable(True)
self.label_info.set_line_wrap(True)
self.label_info.set_markup(license)
self.label_info.set_property("xalign", 0)
self.label_short = gtk.Label()
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">License: </span>")
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
self.vbox.add(self.label_info)
################################### BINB ROW+COL #############################################
if binb != '':
self.label_short = gtk.Label()
self.label_short.set_selectable(True)
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Brought in by: </span>")
self.label_short.set_property("xalign", 0)
self.label_info = gtk.Label()
self.label_info.set_size_request(300,-1)
self.label_info.set_selectable(True)
self.label_info.set_markup(binb)
self.label_info.set_property("xalign", 0)
self.label_info.set_line_wrap(True)
self.vbox.add(self.label_short)
self.vbox.add(self.label_info)
################################ DESCRIPTION TAG ROW #################################################
self.label_short = gtk.Label()
self.label_short.set_line_wrap(True)
self.label_short.set_markup("<span weight=\"bold\">Description </span>")
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
################################ DESCRIPTION INFORMATION ROW ##########################################
hbox = gtk.HBox(True,spacing = 0)
self.label_short = gtk.Label()
self.label_short.set_size_request(300,-1)
self.label_short.set_selectable(True)
self.label_short.set_text(description)
self.label_short.set_line_wrap(True)
self.label_short.set_property("xalign", 0)
self.vbox.add(self.label_short)
self.vbox.show_all()
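
A usage sketch for the property dialog removed above. The layout is picked from the length of the information argument (10 keys selects the recipe view, 5 the package view, anything else the plain '*'-separated string view); the values and the module path here are assumptions:

    import gtk
    from bb.ui.crumbs.hig.propertydialog import PropertyDialog  # assumed path

    # Exactly five keys selects the package view.
    info = {'name': 'busybox', 'binb': 'core-image-minimal', 'size': '512 KB',
            'recipe': 'busybox_1.20.2', 'files_list': ''}
    dialog = PropertyDialog("Package properties", None, info, gtk.DIALOG_MODAL)
    dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
    dialog.run()
    dialog.destroy()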

View File

@@ -1,90 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class ProxyDetailsDialog (CrumbsDialog):
def __init__(self, title, user, passwd, parent, flags, buttons=None):
super(ProxyDetailsDialog, self).__init__(title, parent, flags, buttons)
self.connect("response", self.response_cb)
self.auth = not (user == None or passwd == None or user == "")
self.user = user or ""
self.passwd = passwd or ""
# create visual elements on the dialog
self.create_visual_elements()
def create_visual_elements(self):
self.auth_checkbox = gtk.CheckButton("Use authentication")
self.auth_checkbox.set_tooltip_text("Check this box to set the username and the password")
self.auth_checkbox.set_active(self.auth)
self.auth_checkbox.connect("toggled", self.auth_checkbox_toggled_cb)
self.vbox.pack_start(self.auth_checkbox, expand=False, fill=False)
hbox = gtk.HBox(False, 6)
self.user_label = gtk.Label("Username:")
self.user_text = gtk.Entry()
self.user_text.set_text(self.user)
hbox.pack_start(self.user_label, expand=False, fill=False)
hbox.pack_end(self.user_text, expand=False, fill=False)
self.vbox.pack_start(hbox, expand=False, fill=False)
hbox = gtk.HBox(False, 6)
self.passwd_label = gtk.Label("Password:")
self.passwd_text = gtk.Entry()
self.passwd_text.set_text(self.passwd)
hbox.pack_start(self.passwd_label, expand=False, fill=False)
hbox.pack_end(self.passwd_text, expand=False, fill=False)
self.vbox.pack_start(hbox, expand=False, fill=False)
self.refresh_auth_components()
self.show_all()
def refresh_auth_components(self):
self.user_label.set_sensitive(self.auth)
self.user_text.set_editable(self.auth)
self.user_text.set_sensitive(self.auth)
self.passwd_label.set_sensitive(self.auth)
self.passwd_text.set_editable(self.auth)
self.passwd_text.set_sensitive(self.auth)
def auth_checkbox_toggled_cb(self, button):
self.auth = self.auth_checkbox.get_active()
self.refresh_auth_components()
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_OK:
if self.auth:
self.user = self.user_text.get_text()
self.passwd = self.passwd_text.get_text()
else:
self.user = None
self.passwd = None
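# Editor's illustration, not part of the original file: a caller runs the dialog
# modally and reads the credentials back from its attributes afterwards (this
# mirrors SimpleSettingsDialog.details_cb further down). main_window,
# current_user and current_passwd are hypothetical names.
dialog = ProxyDetailsDialog(title="HTTP Proxy Details",
                            user=current_user, passwd=current_passwd,
                            parent=main_window,
                            flags=gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
if dialog.run() == gtk.RESPONSE_OK:
    new_user, new_passwd = dialog.user, dialog.passwd  # both None when authentication is disabled
dialog.destroy()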

View File

@@ -1,122 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import os
from bb.ui.crumbs.hobwidget import HobInfoButton, HobButton, HobAltButton
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class SettingsUIHelper():
def gen_label_widget(self, content):
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup(content)
label.show()
return label
def gen_label_info_widget(self, content, tooltip):
table = gtk.Table(1, 10, False)
label = self.gen_label_widget(content)
info = HobInfoButton(tooltip, self)
table.attach(label, 0, 1, 0, 1, xoptions=gtk.FILL)
table.attach(info, 1, 2, 0, 1, xoptions=gtk.FILL, xpadding=10)
return table
def gen_spinner_widget(self, content, lower, upper, tooltip=""):
hbox = gtk.HBox(False, 12)
adjust = gtk.Adjustment(value=content, lower=lower, upper=upper, step_incr=1)
spinner = gtk.SpinButton(adjustment=adjust, climb_rate=1, digits=0)
spinner.set_value(content)
hbox.pack_start(spinner, expand=False, fill=False)
info = HobInfoButton(tooltip, self)
hbox.pack_start(info, expand=False, fill=False)
hbox.show_all()
return hbox, spinner
def gen_combo_widget(self, curr_item, all_item, tooltip=""):
hbox = gtk.HBox(False, 12)
combo = gtk.combo_box_new_text()
hbox.pack_start(combo, expand=False, fill=False)
index = 0
for item in all_item or []:
combo.append_text(item)
if item == curr_item:
combo.set_active(index)
index += 1
info = HobInfoButton(tooltip, self)
hbox.pack_start(info, expand=False, fill=False)
hbox.show_all()
return hbox, combo
def entry_widget_select_path_cb(self, action, parent, entry):
dialog = gtk.FileChooserDialog("", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER)
text = entry.get_text()
dialog.set_current_folder(text if len(text) > 0 else os.getcwd())
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
entry.set_text(path)
dialog.destroy()
def gen_entry_widget(self, content, parent, tooltip="", need_button=True):
hbox = gtk.HBox(False, 12)
entry = gtk.Entry()
entry.set_text(content)
entry.set_size_request(350,30)
if need_button:
table = gtk.Table(1, 10, False)
hbox.pack_start(table, expand=True, fill=True)
table.attach(entry, 0, 9, 0, 1, xoptions=gtk.SHRINK)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_OPEN,gtk.ICON_SIZE_BUTTON)
open_button = gtk.Button()
open_button.set_image(image)
open_button.connect("clicked", self.entry_widget_select_path_cb, parent, entry)
table.attach(open_button, 9, 10, 0, 1, xoptions=gtk.SHRINK)
else:
hbox.pack_start(entry, expand=True, fill=True)
if tooltip != "":
info = HobInfoButton(tooltip, self)
hbox.pack_start(info, expand=False, fill=False)
hbox.show_all()
return hbox, entry
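# Editor's illustration, not part of the original file: SettingsUIHelper is meant
# to be mixed into a dialog class (SimpleSettingsDialog below does exactly this);
# each gen_* method returns the container to pack plus the live widget to read
# back later. MySettingsDialog and the literal values are hypothetical, and
# CrumbsDialog is assumed to be importable as in the other dialogs here.
class MySettingsDialog(CrumbsDialog, SettingsUIHelper):
    def create_visual_elements(self):
        widget, self.dldir_entry = self.gen_entry_widget("/home/user/downloads", self)
        self.vbox.pack_start(widget, expand=False, fill=False)
        widget, self.threads_spinner = self.gen_spinner_widget(4, 1, 64, "Number of BitBake threads")
        self.vbox.pack_start(widget, expand=False, fill=False)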

View File

@@ -1,900 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import hashlib
from bb.ui.crumbs.hobwidget import hic, HobInfoButton, HobButton, HobAltButton
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.proxydetailsdialog import ProxyDetailsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
(BUILD_ENV_PAGE_ID,
SHARED_STATE_PAGE_ID,
PROXIES_PAGE_ID,
OTHERS_PAGE_ID) = range(4)
(TEST_NETWORK_NONE,
TEST_NETWORK_INITIAL,
TEST_NETWORK_RUNNING,
TEST_NETWORK_PASSED,
TEST_NETWORK_FAILED,
TEST_NETWORK_CANCELED) = range(6)
TARGETS = [
("MY_TREE_MODEL_ROW", gtk.TARGET_SAME_WIDGET, 0),
("text/plain", 0, 1),
("TEXT", 0, 2),
("STRING", 0, 3),
]
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, parent, flags, handler, buttons=None):
super(SimpleSettingsDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
# class members for internal use
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirrors_list = []
self.sstatemirrors_changed = 0
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.toolchain_checkbox = None
self.setting_store = None
self.image_types_checkbuttons = {}
self.md5 = self.config_md5()
self.proxy_md5 = self.config_proxy_md5()
self.settings_changed = False
self.proxy_settings_changed = False
self.handler = handler
self.proxy_test_ran = False
self.selected_mirror_row = 0
self.new_mirror = False
# create visual elements on the dialog
self.create_visual_elements()
self.connect("response", self.response_cb)
def _get_sorted_value(self, var):
return " ".join(sorted(str(var).split())) + "\n"
def config_proxy_md5(self):
data = ("ENABLE_PROXY: " + self._get_sorted_value(self.configuration.enable_proxy))
if self.configuration.enable_proxy:
for protocol in self.configuration.proxies.keys():
data += (protocol + ": " + self._get_sorted_value(self.configuration.combine_proxy(protocol)))
return hashlib.md5(data).hexdigest()
def config_md5(self):
data = ""
for key in self.configuration.extra_setting.keys():
data += (key + ": " + self._get_sorted_value(self.configuration.extra_setting[key]))
return hashlib.md5(data).hexdigest()
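# Editor's note, not part of the original file: config_md5() and
# config_proxy_md5() hash a canonicalised dump of the settings so that
# response_cb() below can detect real changes (settings_changed,
# proxy_settings_changed). Because _get_sorted_value() sorts the tokens of each
# value, reordering a space-separated list does not count as a change, e.g.:
#
#     import hashlib
#     def sorted_value(var):
#         return " ".join(sorted(str(var).split())) + "\n"
#     assert hashlib.md5("KEY: " + sorted_value("b a")).hexdigest() == \
#            hashlib.md5("KEY: " + sorted_value("a b")).hexdigest()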
def gen_proxy_entry_widget(self, protocol, parent, need_button=True, line=0):
label = gtk.Label(protocol.upper() + " proxy")
self.proxy_table.attach(label, 0, 1, line, line+1, xpadding=24)
proxy_entry = gtk.Entry()
proxy_entry.set_size_request(300, -1)
self.proxy_table.attach(proxy_entry, 1, 2, line, line+1, ypadding=4)
self.proxy_table.attach(gtk.Label(":"), 2, 3, line, line+1, xpadding=12, ypadding=4)
port_entry = gtk.Entry()
port_entry.set_size_request(60, -1)
self.proxy_table.attach(port_entry, 3, 4, line, line+1, ypadding=4)
details_button = HobAltButton("Details")
details_button.connect("clicked", self.details_cb, parent, protocol)
self.proxy_table.attach(details_button, 4, 5, line, line+1, xpadding=4, yoptions=gtk.EXPAND)
return proxy_entry, port_entry, details_button
def refresh_proxy_components(self):
self.same_checkbox.set_sensitive(self.configuration.enable_proxy)
self.http_proxy.set_text(self.configuration.combine_host_only("http"))
self.http_proxy.set_editable(self.configuration.enable_proxy)
self.http_proxy.set_sensitive(self.configuration.enable_proxy)
self.http_proxy_port.set_text(self.configuration.combine_port_only("http"))
self.http_proxy_port.set_editable(self.configuration.enable_proxy)
self.http_proxy_port.set_sensitive(self.configuration.enable_proxy)
self.http_proxy_details.set_sensitive(self.configuration.enable_proxy)
self.https_proxy.set_text(self.configuration.combine_host_only("https"))
self.https_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_port.set_text(self.configuration.combine_port_only("https"))
self.https_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy.set_text(self.configuration.combine_host_only("ftp"))
self.ftp_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_port.set_text(self.configuration.combine_port_only("ftp"))
self.ftp_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy.set_text(self.configuration.combine_host_only("socks"))
self.socks_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_port.set_text(self.configuration.combine_port_only("socks"))
self.socks_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy.set_text(self.configuration.combine_host_only("cvs"))
self.cvs_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_port.set_text(self.configuration.combine_port_only("cvs"))
self.cvs_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
if self.configuration.same_proxy:
if self.http_proxy.get_text():
[w.set_text(self.http_proxy.get_text()) for w in self.same_proxy_addresses]
if self.http_proxy_port.get_text():
[w.set_text(self.http_proxy_port.get_text()) for w in self.same_proxy_ports]
def proxy_checkbox_toggled_cb(self, button):
self.configuration.enable_proxy = self.proxy_checkbox.get_active()
if not self.configuration.enable_proxy:
self.configuration.same_proxy = False
self.same_checkbox.set_active(self.configuration.same_proxy)
self.save_proxy_data()
self.refresh_proxy_components()
def same_checkbox_toggled_cb(self, button):
self.configuration.same_proxy = self.same_checkbox.get_active()
self.save_proxy_data()
self.refresh_proxy_components()
def save_proxy_data(self):
self.configuration.split_proxy("http", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
if self.configuration.same_proxy:
self.configuration.split_proxy("https", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("ftp", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("socks", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("cvs", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
else:
self.configuration.split_proxy("https", self.https_proxy.get_text() + ":" + self.https_proxy_port.get_text())
self.configuration.split_proxy("ftp", self.ftp_proxy.get_text() + ":" + self.ftp_proxy_port.get_text())
self.configuration.split_proxy("socks", self.socks_proxy.get_text() + ":" + self.socks_proxy_port.get_text())
self.configuration.split_proxy("cvs", self.cvs_proxy.get_text() + ":" + self.cvs_proxy_port.get_text())
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_YES:
# Check that all proxy entries have a corresponding port
for proxy, port in zip(self.all_proxy_addresses, self.all_proxy_ports):
if proxy.get_text() and not port.get_text():
lbl = "<b>Enter all port numbers</b>\n\n"
msg = "Proxy servers require a port number. Please make sure you have entered a port number for each proxy server."
dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_WARNING, msg)
button = dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
response = dialog.run()
dialog.destroy()
self.emit_stop_by_name("response")
return
self.configuration.dldir = self.dldir_text.get_text()
self.configuration.sstatedir = self.sstatedir_text.get_text()
self.configuration.sstatemirror = ""
for mirror in self.sstatemirrors_list:
if mirror[1] != "" and mirror[2].startswith("file://"):
if mirror[1].endswith("\\1"):
smirror = mirror[2] + " " + mirror[1] + " \\n "
else:
smirror = mirror[2] + " " + mirror[1] + "\\1 \\n "
self.configuration.sstatemirror += smirror
self.configuration.bbthread = self.bb_spinner.get_value_as_int()
self.configuration.pmake = self.pmake_spinner.get_value_as_int()
self.save_proxy_data()
self.configuration.extra_setting = {}
it = self.setting_store.get_iter_first()
while it:
key = self.setting_store.get_value(it, 0)
value = self.setting_store.get_value(it, 1)
self.configuration.extra_setting[key] = value
it = self.setting_store.iter_next(it)
md5 = self.config_md5()
self.settings_changed = (self.md5 != md5)
self.proxy_settings_changed = (self.proxy_md5 != self.config_proxy_md5())
def create_build_environment_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Parallel threads</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("BitBake parallel threads")
tooltip = "Sets the number of threads that BitBake tasks can simultaneously run. See the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#var-BB_NUMBER_THREADS\">Poky reference manual</a> for information"
bbthread_widget, self.bb_spinner = self.gen_spinner_widget(self.configuration.bbthread, 1, self.max_threads,"<b>BitBake parallel threads</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(bbthread_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Make parallel threads")
tooltip = "Sets the maximum number of threads the host can use during the build. See the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#var-PARALLEL_MAKE\">Poky reference manual</a> for information"
pmake_widget, self.pmake_spinner = self.gen_spinner_widget(self.configuration.pmake, 1, self.max_threads,"<b>Make parallel threads</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(pmake_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Downloaded source code</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Downloads directory")
tooltip = "Select a folder that caches the upstream project source code"
dldir_widget, self.dldir_text = self.gen_entry_widget(self.configuration.dldir, self,"<b>Downloaded source code</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(dldir_widget, expand=False, fill=False)
return advanced_vbox
def create_shared_state_page(self):
advanced_vbox = gtk.VBox(False)
advanced_vbox.set_border_width(12)
sub_vbox = gtk.VBox(False)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False, padding=24)
content = "<span>Shared state directory</span>"
tooltip = "Select a folder that caches your prebuilt results"
label = self.gen_label_info_widget(content,"<b>Shared state directory</b>" + "*" + tooltip)
sstatedir_widget, self.sstatedir_text = self.gen_entry_widget(self.configuration.sstatedir, self)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sstatedir_widget, expand=False, fill=False, padding=6)
content = "<span weight=\"bold\">Shared state mirrors</span>"
tooltip = "URLs pointing to pre-built mirrors that will speed your build. "
tooltip += "Select the \'Standard\' configuration if the structure of your "
tooltip += "mirror replicates the structure of your local shared state directory. "
tooltip += "For more information on shared state mirrors, check the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#shared-state\">Yocto Project Reference Manual</a>."
table = self.gen_label_info_widget(content,"<b>Shared state mirrors</b>" + "*" + tooltip)
advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
sub_vbox = gtk.VBox(False)
advanced_vbox.pack_start(sub_vbox, gtk.TRUE, gtk.TRUE, 0)
searched_string = "file://"
if self.sstatemirrors_changed == 0:
self.sstatemirrors_changed = 1
sstatemirrors = self.configuration.sstatemirror
if sstatemirrors == "":
sm_list = ["Standard", "", "file://(.*)"]
self.sstatemirrors_list.append(sm_list)
else:
while sstatemirrors.find(searched_string) != -1:
if sstatemirrors.find(searched_string,1) != -1:
sstatemirror = sstatemirrors[:sstatemirrors.find(searched_string,1)]
sstatemirrors = sstatemirrors[sstatemirrors.find(searched_string,1):]
else:
sstatemirror = sstatemirrors
sstatemirrors = sstatemirrors[1:]
sstatemirror_fields = [x for x in sstatemirror.split(' ') if x.strip()]
if len(sstatemirror_fields):
if sstatemirror_fields[0] == "file://(.*)":
sm_list = ["Standard", sstatemirror_fields[1], "file://(.*)"]
else:
sm_list = ["Custom", sstatemirror_fields[1], sstatemirror_fields[0]]
self.sstatemirrors_list.append(sm_list)
sstatemirrors_widget, sstatemirrors_store = self.gen_shared_sstate_widget(self.sstatemirrors_list, self)
sub_vbox.pack_start(sstatemirrors_widget, expand=True, fill=True)
table = gtk.Table(1, 10, False)
table.set_col_spacings(6)
add_mirror_button = HobAltButton("Add mirror")
add_mirror_button.connect("clicked", self.add_mirror)
add_mirror_button.set_size_request(120,30)
table.attach(add_mirror_button, 1, 2, 0, 1, xoptions=gtk.SHRINK)
self.delete_button = HobAltButton("Delete mirror")
self.delete_button.connect("clicked", self.delete_cb)
self.delete_button.set_size_request(120, 30)
table.attach(self.delete_button, 3, 4, 0, 1, xoptions=gtk.SHRINK)
advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
return advanced_vbox
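# Editor's note, not part of the original file: the loop above splits the flat
# sstatemirror string back into [configuration, URL, regex] rows, and
# response_cb() rebuilds it as "<regex> <url>\1 \n " entries. For a "Standard"
# mirror the stored value therefore looks roughly like (the path is a
# hypothetical example):
#
#     file://(.*) file:///srv/sstate-cache/\1 \n
#
# i.e. a match regex, a replacement URL where \1 stands for the matched path,
# and a literal "\n" token separating mirrors.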
def gen_shared_sstate_widget(self, sstatemirrors_list, window):
hbox = gtk.HBox(False)
sstatemirrors_store = gtk.ListStore(str, str, str)
for sstatemirror in sstatemirrors_list:
sstatemirrors_store.append(sstatemirror)
self.sstatemirrors_tv = gtk.TreeView()
self.sstatemirrors_tv.set_rules_hint(True)
self.sstatemirrors_tv.set_headers_visible(True)
tree_selection = self.sstatemirrors_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
# Allow drag and drop of rows, including row moves
self.sstatemirrors_tv.enable_model_drag_source( gtk.gdk.BUTTON1_MASK,
self.TARGETS,
gtk.gdk.ACTION_DEFAULT|
gtk.gdk.ACTION_MOVE)
self.sstatemirrors_tv.enable_model_drag_dest(self.TARGETS,
gtk.gdk.ACTION_DEFAULT)
self.sstatemirrors_tv.connect("drag_data_get", self.drag_data_get_cb)
self.sstatemirrors_tv.connect("drag_data_received", self.drag_data_received_cb)
self.scroll = gtk.ScrolledWindow()
self.scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
self.scroll.set_shadow_type(gtk.SHADOW_IN)
self.scroll.connect('size-allocate', self.scroll_changed)
self.scroll.add(self.sstatemirrors_tv)
#list store for cell renderer
m = gtk.ListStore(gobject.TYPE_STRING)
m.append(["Standard"])
m.append(["Custom"])
cell0 = gtk.CellRendererCombo()
cell0.set_property("model",m)
cell0.set_property("text-column", 0)
cell0.set_property("editable", True)
cell0.set_property("has-entry", False)
col0 = gtk.TreeViewColumn("Configuration")
col0.pack_start(cell0, False)
col0.add_attribute(cell0, "text", 0)
col0.set_cell_data_func(cell0, self.configuration_field)
self.sstatemirrors_tv.append_column(col0)
cell0.connect("edited", self.combo_changed, sstatemirrors_store)
self.cell1 = gtk.CellRendererText()
self.cell1.set_padding(5,2)
col1 = gtk.TreeViewColumn('Regex', self.cell1)
col1.set_cell_data_func(self.cell1, self.regex_field)
self.sstatemirrors_tv.append_column(col1)
self.cell1.connect("edited", self.regex_changed, sstatemirrors_store)
cell2 = gtk.CellRendererText()
cell2.set_padding(5,2)
cell2.set_property("editable", True)
col2 = gtk.TreeViewColumn('URL', cell2)
col2.set_cell_data_func(cell2, self.url_field)
self.sstatemirrors_tv.append_column(col2)
cell2.connect("edited", self.url_changed, sstatemirrors_store)
self.sstatemirrors_tv.set_model(sstatemirrors_store)
self.sstatemirrors_tv.set_cursor(self.selected_mirror_row)
hbox.pack_start(self.scroll, expand=True, fill=True)
hbox.show_all()
return hbox, sstatemirrors_store
def drag_data_get_cb(self, treeview, context, selection, target_id, etime):
treeselection = treeview.get_selection()
model, iter = treeselection.get_selected()
data = model.get_string_from_iter(iter)
selection.set(selection.target, 8, data)
def drag_data_received_cb(self, treeview, context, x, y, selection, info, etime):
model = treeview.get_model()
data = []
tree_iter = model.get_iter_from_string(selection.data)
data.append(model.get_value(tree_iter, 0))
data.append(model.get_value(tree_iter, 1))
data.append(model.get_value(tree_iter, 2))
drop_info = treeview.get_dest_row_at_pos(x, y)
if drop_info:
path, position = drop_info
iter = model.get_iter(path)
if (position == gtk.TREE_VIEW_DROP_BEFORE or position == gtk.TREE_VIEW_DROP_INTO_OR_BEFORE):
model.insert_before(iter, data)
else:
model.insert_after(iter, data)
else:
model.append(data)
if context.action == gtk.gdk.ACTION_MOVE:
context.finish(True, True, etime)
return
def delete_cb(self, button):
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
if index == 0:
self.selected_mirror_row = index
else:
self.selected_mirror_row = index - 1
self.sstatemirrors_list.pop(index)
self.refresh_shared_state_page()
if not self.sstatemirrors_list:
self.delete_button.set_sensitive(False)
def add_mirror(self, button):
self.new_mirror = True
tooltip = "Select the pre-built mirror that will speed your build"
index = len(self.sstatemirrors_list)
self.selected_mirror_row = index
sm_list = ["Standard", "", "file://(.*)"]
self.sstatemirrors_list.append(sm_list)
self.refresh_shared_state_page()
def scroll_changed(self, widget, event, data=None):
if self.new_mirror == True:
adj = widget.get_vadjustment()
adj.set_value(adj.upper - adj.page_size)
self.new_mirror = False
def combo_changed(self, widget, path, text, model):
model[path][0] = text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][0] = text
def regex_changed(self, cell, path, new_text, user_data):
user_data[path][2] = new_text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][2] = new_text
return
def url_changed(self, cell, path, new_text, user_data):
if new_text!="Enter the mirror URL" and new_text!="Match regex and replace it with this URL":
user_data[path][1] = new_text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][1] = new_text
return
def configuration_field(self, column, cell, model, iter):
cell.set_property('text', model.get_value(iter, 0))
if model.get_value(iter, 0) == "Standard":
self.cell1.set_property("sensitive", False)
self.cell1.set_property("editable", False)
else:
self.cell1.set_property("sensitive", True)
self.cell1.set_property("editable", True)
return
def regex_field(self, column, cell, model, iter):
cell.set_property('text', model.get_value(iter, 2))
return
def url_field(self, column, cell, model, iter):
text = model.get_value(iter, 1)
if text == "":
if model.get_value(iter, 0) == "Standard":
text = "Enter the mirror URL"
else:
text = "Match regex and replace it with this URL"
cell.set_property('text', text)
return
def refresh_shared_state_page(self):
page_num = self.nb.get_current_page()
self.nb.remove_page(page_num)
self.nb.insert_page(self.create_shared_state_page(), gtk.Label("Shared state"),page_num)
self.show_all()
self.nb.set_current_page(page_num)
def test_proxy_ended(self, passed):
self.proxy_test_running = False
self.set_test_proxy_state(self.TEST_NETWORK_PASSED if passed else self.TEST_NETWORK_FAILED)
self.set_sensitive(True)
self.refresh_proxy_components()
def timer_func(self):
self.test_proxy_progress.pulse()
return self.proxy_test_running
def test_network_button_cb(self, b):
self.set_test_proxy_state(self.TEST_NETWORK_RUNNING)
self.set_sensitive(False)
self.save_proxy_data()
if self.configuration.enable_proxy == True:
self.handler.set_http_proxy(self.configuration.combine_proxy("http"))
self.handler.set_https_proxy(self.configuration.combine_proxy("https"))
self.handler.set_ftp_proxy(self.configuration.combine_proxy("ftp"))
self.handler.set_socks_proxy(self.configuration.combine_proxy("socks"))
self.handler.set_cvs_proxy(self.configuration.combine_host_only("cvs"), self.configuration.combine_port_only("cvs"))
elif self.configuration.enable_proxy == False:
self.handler.set_http_proxy("")
self.handler.set_https_proxy("")
self.handler.set_ftp_proxy("")
self.handler.set_socks_proxy("")
self.handler.set_cvs_proxy("", "")
self.proxy_test_ran = True
self.proxy_test_running = True
gobject.timeout_add(100, self.timer_func)
self.handler.trigger_network_test()
def test_proxy_focus_event(self, w, direction):
if self.test_proxy_state in [self.TEST_NETWORK_PASSED, self.TEST_NETWORK_FAILED]:
self.set_test_proxy_state(self.TEST_NETWORK_INITIAL)
return False
def http_proxy_changed(self, e):
if not self.configuration.same_proxy:
return
if e == self.http_proxy:
[w.set_text(self.http_proxy.get_text()) for w in self.same_proxy_addresses]
else:
[w.set_text(self.http_proxy_port.get_text()) for w in self.same_proxy_ports]
def proxy_address_focus_out_event(self, w, direction):
text = w.get_text()
if not text:
return False
if text.find("//") == -1:
w.set_text("http://" + text)
return False
def set_test_proxy_state(self, state):
if self.test_proxy_state == state:
return
[self.proxy_table.remove(w) for w in self.test_gui_elements]
if state == self.TEST_NETWORK_INITIAL:
self.proxy_table.attach(self.test_network_button, 1, 2, 5, 6)
self.test_network_button.show()
elif state == self.TEST_NETWORK_RUNNING:
self.test_proxy_progress.set_rcstyle("running")
self.test_proxy_progress.set_text("Testing network configuration")
self.proxy_table.attach(self.test_proxy_progress, 0, 5, 5, 6, xpadding=4)
self.test_proxy_progress.show()
else: # passed or failed
self.dummy_progress.update(1.0)
if state == self.TEST_NETWORK_PASSED:
self.dummy_progress.set_text("Your network is properly configured")
self.dummy_progress.set_rcstyle("running")
else:
self.dummy_progress.set_text("Network test failed")
self.dummy_progress.set_rcstyle("fail")
self.proxy_table.attach(self.dummy_progress, 0, 4, 5, 6)
self.proxy_table.attach(self.retest_network_button, 4, 5, 5, 6, xpadding=4)
self.dummy_progress.show()
self.retest_network_button.show()
self.test_proxy_state = state
def create_network_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
self.same_proxy_addresses = []
self.same_proxy_ports = []
self.all_proxy_ports = []
self.all_proxy_addresses = []
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("<span weight=\"bold\">Set the proxies used when fetching source code</span>")
tooltip = "Set the proxies used when fetching source code. A blank field uses a direct internet connection."
info = HobInfoButton("<span weight=\"bold\">Set the proxies used when fetching source code</span>" + "*" + tooltip, self)
hbox = gtk.HBox(False, 12)
hbox.pack_start(label, expand=True, fill=True)
hbox.pack_start(info, expand=False, fill=False)
sub_vbox.pack_start(hbox, expand=False, fill=False)
proxy_test_focus = []
self.direct_checkbox = gtk.RadioButton(None, "Direct network connection")
proxy_test_focus.append(self.direct_checkbox)
self.direct_checkbox.set_tooltip_text("Check this box to use a direct internet connection with no proxy")
self.direct_checkbox.set_active(not self.configuration.enable_proxy)
sub_vbox.pack_start(self.direct_checkbox, expand=False, fill=False)
self.proxy_checkbox = gtk.RadioButton(self.direct_checkbox, "Manual proxy configuration")
proxy_test_focus.append(self.proxy_checkbox)
self.proxy_checkbox.set_tooltip_text("Check this box to manually set up a specific proxy")
self.proxy_checkbox.set_active(self.configuration.enable_proxy)
sub_vbox.pack_start(self.proxy_checkbox, expand=False, fill=False)
self.same_checkbox = gtk.CheckButton("Use the HTTP proxy for all protocols")
proxy_test_focus.append(self.same_checkbox)
self.same_checkbox.set_tooltip_text("Check this box to use the HTTP proxy for all five proxies")
self.same_checkbox.set_active(self.configuration.same_proxy)
hbox = gtk.HBox(False, 12)
hbox.pack_start(self.same_checkbox, expand=False, fill=False, padding=24)
sub_vbox.pack_start(hbox, expand=False, fill=False)
self.proxy_table = gtk.Table(6, 5, False)
self.http_proxy, self.http_proxy_port, self.http_proxy_details = self.gen_proxy_entry_widget(
"http", self, True, 0)
proxy_test_focus +=[self.http_proxy, self.http_proxy_port]
self.http_proxy.connect("changed", self.http_proxy_changed)
self.http_proxy_port.connect("changed", self.http_proxy_changed)
self.https_proxy, self.https_proxy_port, self.https_proxy_details = self.gen_proxy_entry_widget(
"https", self, True, 1)
proxy_test_focus += [self.https_proxy, self.https_proxy_port]
self.same_proxy_addresses.append(self.https_proxy)
self.same_proxy_ports.append(self.https_proxy_port)
self.ftp_proxy, self.ftp_proxy_port, self.ftp_proxy_details = self.gen_proxy_entry_widget(
"ftp", self, True, 2)
proxy_test_focus += [self.ftp_proxy, self.ftp_proxy_port]
self.same_proxy_addresses.append(self.ftp_proxy)
self.same_proxy_ports.append(self.ftp_proxy_port)
self.socks_proxy, self.socks_proxy_port, self.socks_proxy_details = self.gen_proxy_entry_widget(
"socks", self, True, 3)
proxy_test_focus += [self.socks_proxy, self.socks_proxy_port]
self.same_proxy_addresses.append(self.socks_proxy)
self.same_proxy_ports.append(self.socks_proxy_port)
self.cvs_proxy, self.cvs_proxy_port, self.cvs_proxy_details = self.gen_proxy_entry_widget(
"cvs", self, True, 4)
proxy_test_focus += [self.cvs_proxy, self.cvs_proxy_port]
self.same_proxy_addresses.append(self.cvs_proxy)
self.same_proxy_ports.append(self.cvs_proxy_port)
self.all_proxy_ports = self.same_proxy_ports + [self.http_proxy_port]
self.all_proxy_addresses = self.same_proxy_addresses + [self.http_proxy]
sub_vbox.pack_start(self.proxy_table, expand=False, fill=False)
self.proxy_table.show_all()
# Create the graphical elements for the network test feature, but don't display them yet
self.test_network_button = HobAltButton("Test network configuration")
self.test_network_button.connect("clicked", self.test_network_button_cb)
self.test_proxy_progress = HobProgressBar()
self.dummy_progress = HobProgressBar()
self.retest_network_button = HobAltButton("Retest")
self.retest_network_button.connect("clicked", self.test_network_button_cb)
self.test_gui_elements = [self.test_network_button, self.test_proxy_progress, self.dummy_progress, self.retest_network_button]
# Initialize the network tester
self.test_proxy_state = self.TEST_NETWORK_NONE
self.set_test_proxy_state(self.TEST_NETWORK_INITIAL)
self.proxy_test_passed_id = self.handler.connect("network-passed", lambda h:self.test_proxy_ended(True))
self.proxy_test_failed_id = self.handler.connect("network-failed", lambda h:self.test_proxy_ended(False))
[w.connect("focus-in-event", self.test_proxy_focus_event) for w in proxy_test_focus]
[w.connect("focus-out-event", self.proxy_address_focus_out_event) for w in self.all_proxy_addresses]
self.direct_checkbox.connect("toggled", self.proxy_checkbox_toggled_cb)
self.proxy_checkbox.connect("toggled", self.proxy_checkbox_toggled_cb)
self.same_checkbox.connect("toggled", self.same_checkbox_toggled_cb)
self.refresh_proxy_components()
return advanced_vbox
def switch_to_page(self, page_id):
self.nb.set_current_page(page_id)
def details_cb(self, button, parent, protocol):
self.save_proxy_data()
dialog = ProxyDetailsDialog(title = protocol.upper() + " Proxy Details",
user = self.configuration.proxies[protocol][1],
passwd = self.configuration.proxies[protocol][2],
parent = parent,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
if response == gtk.RESPONSE_OK:
self.configuration.proxies[protocol][1] = dialog.user
self.configuration.proxies[protocol][2] = dialog.passwd
self.refresh_proxy_components()
dialog.destroy()
def rootfs_combo_changed_cb(self, rootfs_combo, all_package_format, check_hbox):
combo_item = self.rootfs_combo.get_active_text()
for child in check_hbox.get_children():
if isinstance(child, gtk.CheckButton):
check_hbox.remove(child)
for format in all_package_format:
if format != combo_item:
check_button = gtk.CheckButton(format)
check_hbox.pack_start(check_button, expand=False, fill=False)
check_hbox.show_all()
def gen_pkgfmt_widget(self, curr_package_format, all_package_format, tooltip_combo="", tooltip_extra=""):
pkgfmt_hbox = gtk.HBox(False, 24)
rootfs_vbox = gtk.VBox(False, 6)
pkgfmt_hbox.pack_start(rootfs_vbox, expand=False, fill=False)
label = self.gen_label_widget("Root file system package format")
rootfs_vbox.pack_start(label, expand=False, fill=False)
rootfs_format = ""
if curr_package_format:
rootfs_format = curr_package_format.split()[0]
rootfs_format_widget, rootfs_combo = self.gen_combo_widget(rootfs_format, all_package_format, tooltip_combo)
rootfs_vbox.pack_start(rootfs_format_widget, expand=False, fill=False)
extra_vbox = gtk.VBox(False, 6)
pkgfmt_hbox.pack_start(extra_vbox, expand=False, fill=False)
label = self.gen_label_widget("Additional package formats")
extra_vbox.pack_start(label, expand=False, fill=False)
check_hbox = gtk.HBox(False, 12)
extra_vbox.pack_start(check_hbox, expand=False, fill=False)
for format in all_package_format:
if format != rootfs_format:
check_button = gtk.CheckButton(format)
is_active = (format in curr_package_format.split())
check_button.set_active(is_active)
check_hbox.pack_start(check_button, expand=False, fill=False)
info = HobInfoButton(tooltip_extra, self)
check_hbox.pack_end(info, expand=False, fill=False)
rootfs_combo.connect("changed", self.rootfs_combo_changed_cb, all_package_format, check_hbox)
pkgfmt_hbox.show_all()
return pkgfmt_hbox, rootfs_combo, check_hbox
def editable_settings_cell_edited(self, cell, path_string, new_text, model):
it = model.get_iter_from_string(path_string)
column = cell.get_data("column")
model.set(it, column, new_text)
def editable_settings_add_item_clicked(self, button, model):
new_item = ["##KEY##", "##VALUE##"]
iter = model.append()
model.set (iter,
0, new_item[0],
1, new_item[1],
)
def editable_settings_remove_item_clicked(self, button, treeview):
selection = treeview.get_selection()
model, iter = selection.get_selected()
if iter:
path = model.get_path(iter)[0]
model.remove(iter)
def gen_editable_settings(self, setting, tooltip=""):
setting_hbox = gtk.HBox(False, 12)
vbox = gtk.VBox(False, 12)
setting_hbox.pack_start(vbox, expand=True, fill=True)
setting_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
for key in setting.keys():
setting_store.set(setting_store.append(), 0, key, 1, setting[key])
setting_tree = gtk.TreeView(setting_store)
setting_tree.set_headers_visible(True)
setting_tree.set_size_request(300, 100)
col = gtk.TreeViewColumn('Key')
col.set_min_width(100)
col.set_max_width(150)
col.set_resizable(True)
col1 = gtk.TreeViewColumn('Value')
col1.set_min_width(100)
col1.set_max_width(150)
col1.set_resizable(True)
setting_tree.append_column(col)
setting_tree.append_column(col1)
cell = gtk.CellRendererText()
cell.set_property('width-chars', 10)
cell.set_property('editable', True)
cell.set_data("column", 0)
cell.connect("edited", self.editable_settings_cell_edited, setting_store)
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 10)
cell1.set_property('editable', True)
cell1.set_data("column", 1)
cell1.connect("edited", self.editable_settings_cell_edited, setting_store)
col.pack_start(cell, True)
col1.pack_end(cell1, True)
col.set_attributes(cell, text=0)
col1.set_attributes(cell1, text=1)
scroll = gtk.ScrolledWindow()
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
scroll.add(setting_tree)
vbox.pack_start(scroll, expand=True, fill=True)
# some buttons
hbox = gtk.HBox(True, 6)
vbox.pack_start(hbox, False, False)
button = gtk.Button(stock=gtk.STOCK_ADD)
button.connect("clicked", self.editable_settings_add_item_clicked, setting_store)
hbox.pack_start(button)
button = gtk.Button(stock=gtk.STOCK_REMOVE)
button.connect("clicked", self.editable_settings_remove_item_clicked, setting_tree)
hbox.pack_start(button)
info = HobInfoButton(tooltip, self)
setting_hbox.pack_start(info, expand=False, fill=False)
return setting_hbox, setting_store
def create_others_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=True, fill=True)
label = self.gen_label_widget("<span weight=\"bold\">Add your own variables:</span>")
tooltip = "These are key/value pairs for your extra settings. Click \'Add\' and then directly edit the key and the value"
setting_widget, self.setting_store = self.gen_editable_settings(self.configuration.extra_setting,"<b>Add your own variables</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(setting_widget, expand=True, fill=True)
return advanced_vbox
def create_visual_elements(self):
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_build_environment_page(), gtk.Label("Build environment"))
self.nb.append_page(self.create_shared_state_page(), gtk.Label("Shared state"))
self.nb.append_page(self.create_network_page(), gtk.Label("Network"))
self.nb.append_page(self.create_others_page(), gtk.Label("Others"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def destroy(self):
self.handler.disconnect(self.proxy_test_passed_id)
self.handler.disconnect(self.proxy_test_failed_id)
super(SimpleSettingsDialog, self).destroy()
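# Editor's illustration, not part of the original file: a typical caller creates
# the dialog modally, offers a "Save" button mapped to gtk.RESPONSE_YES (the
# response that response_cb() treats as "apply"), and then checks the *_changed
# flags. main_window, configuration and handler are hypothetical names for
# objects the builder already holds.
dialog = SimpleSettingsDialog(title="Advanced configuration",
                              configuration=configuration,
                              all_image_types=all_image_types,
                              all_package_formats=all_package_formats,
                              all_distros=all_distros,
                              all_sdk_machines=all_sdk_machines,
                              max_threads=64,
                              parent=main_window,
                              flags=gtk.DIALOG_MODAL,
                              handler=handler)
button = dialog.add_button("Save", gtk.RESPONSE_YES)
HobButton.style_button(button)
dialog.run()
needs_reparse = dialog.settings_changed or dialog.proxy_settings_changed
dialog.destroy()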

View File

@@ -30,7 +30,6 @@ class HobColors:
BLACK = "#000000"
PALE_BLUE = "#53b8ff"
DEEP_RED = "#aa3e3e"
KHAKI = "#fff68f"
OK = WHITE
RUNNING = PALE_GREEN

View File

@@ -22,6 +22,7 @@
import gobject
import logging
from bb.ui.crumbs.runningbuild import RunningBuild
from bb.ui.crumbs.hobwidget import hcc
class HobHandler(gobject.GObject):
@@ -41,12 +42,9 @@ class HobHandler(gobject.GObject):
"command-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"parsing-warning" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"sanity-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_INT)),
(gobject.TYPE_STRING,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
@@ -68,17 +66,10 @@ class HobHandler(gobject.GObject):
"package-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"network-passed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"network-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
(GENERATE_CONFIGURATION, GENERATE_RECIPES, GENERATE_PACKAGES, GENERATE_IMAGE, POPULATE_PACKAGEINFO, SANITY_CHECK, NETWORK_TEST) = range(7)
(SUB_PATH_LAYERS, SUB_FILES_DISTRO, SUB_FILES_MACH, SUB_FILES_SDKMACH, SUB_MATCH_CLASS, SUB_PARSE_CONFIG, SUB_SANITY_CHECK,
SUB_GNERATE_TGTS, SUB_GENERATE_PKGINFO, SUB_BUILD_RECIPES, SUB_BUILD_IMAGE, SUB_NETWORK_TEST) = range(12)
(GENERATE_CONFIGURATION, GENERATE_RECIPES, GENERATE_PACKAGES, GENERATE_IMAGE, POPULATE_PACKAGEINFO, SANITY_CHECK) = range(6)
(SUB_PATH_LAYERS, SUB_FILES_DISTRO, SUB_FILES_MACH, SUB_FILES_SDKMACH, SUB_MATCH_CLASS, SUB_PARSE_CONFIG, SUB_SANITY_CHECK, SUB_GNERATE_TGTS, SUB_GENERATE_PKGINFO, SUB_BUILD_RECIPES, SUB_BUILD_IMAGE) = range(11)
def __init__(self, server, recipe_model, package_model):
super(HobHandler, self).__init__()
@@ -98,7 +89,6 @@ class HobHandler(gobject.GObject):
self.server = server
self.error_msg = ""
self.initcmd = None
self.parsing = False
def set_busy(self):
if not self.generating:
@@ -146,17 +136,13 @@ class HobHandler(gobject.GObject):
elif next_command == self.SUB_MATCH_CLASS:
self.runCommand(["findFilesMatchingInDir", "rootfs_", "classes"])
elif next_command == self.SUB_PARSE_CONFIG:
self.runCommand(["enableDataTracking"])
self.runCommand(["parseConfigurationFiles", "", ""])
self.runCommand(["disableDataTracking"])
elif next_command == self.SUB_GNERATE_TGTS:
self.runCommand(["generateTargetsTree", "classes/image.bbclass", []])
elif next_command == self.SUB_GENERATE_PKGINFO:
self.runCommand(["triggerEvent", "bb.event.RequestPackageInfo()"])
elif next_command == self.SUB_SANITY_CHECK:
self.runCommand(["triggerEvent", "bb.event.SanityCheck()"])
elif next_command == self.SUB_NETWORK_TEST:
self.runCommand(["triggerEvent", "bb.event.NetworkTest()"])
elif next_command == self.SUB_BUILD_RECIPES:
self.clear_busy()
self.building = True
@@ -172,26 +158,12 @@ class HobHandler(gobject.GObject):
if self.toolchain_packages:
self.runCommand(["setVariable", "TOOLCHAIN_TARGET_TASK", " ".join(self.toolchain_packages)])
targets.append(self.toolchain)
if targets[0] == "hob-image":
hobImage = self.runCommand(["matchFile", "hob-image.bb"])
if self.base_image != "Create your own image":
baseImage = self.runCommand(["matchFile", self.base_image + ".bb"])
version = self.runCommand(["generateNewImage", hobImage, baseImage, self.package_queue])
targets[0] += version
self.recipe_model.set_custom_image_version(version)
self.runCommand(["buildTargets", targets, self.default_task])
def display_error(self):
self.clear_busy()
self.emit("command-failed", self.error_msg)
self.error_msg = ""
if self.building:
self.building = False
def handle_event(self, event):
if not event:
return
if self.building:
self.current_phase = "building"
self.build.handle_event(event)
@@ -202,26 +174,16 @@ class HobHandler(gobject.GObject):
self.run_next_command()
elif isinstance(event, bb.event.SanityCheckPassed):
reparse = self.runCommand(["getVariable", "BB_INVALIDCONF"]) or None
if reparse is True:
self.runCommand(["setVariable", "BB_INVALIDCONF", False])
self.runCommand(["parseConfigurationFiles", "", ""])
self.run_next_command()
elif isinstance(event, bb.event.SanityCheckFailed):
self.emit("sanity-failed", event._msg, event._network_error)
self.emit("sanity-failed", event._msg)
elif isinstance(event, logging.LogRecord):
if not self.building:
if event.levelno >= logging.ERROR:
formatter = bb.msg.BBLogFormatter()
msg = formatter.format(event)
self.error_msg += msg + '\n'
elif event.levelno >= logging.WARNING and self.parsing == True:
formatter = bb.msg.BBLogFormatter()
msg = formatter.format(event)
warn_msg = msg + '\n'
self.emit("parsing-warning", warn_msg)
if event.levelno >= logging.ERROR:
formatter = bb.msg.BBLogFormatter()
formatter.format(event)
self.error_msg += event.message + '\n'
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.current_phase = "data generation"
@@ -253,7 +215,11 @@ class HobHandler(gobject.GObject):
self.run_next_command()
elif isinstance(event, bb.command.CommandFailed):
self.commands_async = []
self.display_error()
self.clear_busy()
self.emit("command-failed", self.error_msg)
self.error_msg = ""
if self.building:
self.building = False
elif isinstance(event, (bb.event.ParseStarted,
bb.event.CacheLoadStarted,
bb.event.TreeDataPreparationStarted,
@@ -262,10 +228,8 @@ class HobHandler(gobject.GObject):
message["eventname"] = bb.event.getName(event)
message["current"] = 0
message["total"] = None
message["title"] = "Parsing recipes"
message["title"] = "Parsing recipes: "
self.emit("parsing-started", message)
if isinstance(event, bb.event.ParseStarted):
self.parsing = True
elif isinstance(event, (bb.event.ParseProgress,
bb.event.CacheLoadProgress,
bb.event.TreeDataPreparationProgress)):
@@ -273,7 +237,7 @@ class HobHandler(gobject.GObject):
message["eventname"] = bb.event.getName(event)
message["current"] = event.current
message["total"] = event.total
message["title"] = "Parsing recipes"
message["title"] = "Parsing recipes: "
self.emit("parsing", message)
elif isinstance(event, (bb.event.ParseCompleted,
bb.event.CacheLoadCompleted,
@@ -282,19 +246,8 @@ class HobHandler(gobject.GObject):
message["eventname"] = bb.event.getName(event)
message["current"] = event.total
message["total"] = event.total
message["title"] = "Parsing recipes"
message["title"] = "Parsing recipes: "
self.emit("parsing-completed", message)
if isinstance(event, bb.event.ParseCompleted):
self.parsing = False
elif isinstance(event, bb.event.NetworkTestFailed):
self.emit("network-failed")
self.run_next_command()
elif isinstance(event, bb.event.NetworkTestPassed):
self.emit("network-passed")
self.run_next_command()
if self.error_msg and not self.commands_async:
self.display_error()
return
@@ -307,42 +260,42 @@ class HobHandler(gobject.GObject):
self.runCommand(["setVariable", "INHERIT", inherits])
def set_bblayers(self, bblayers):
self.runCommand(["setVariable", "BBLAYERS", " ".join(bblayers)])
self.runCommand(["setVariable", "BBLAYERS_HOB", " ".join(bblayers)])
def set_machine(self, machine):
if machine:
self.runCommand(["setVariable", "MACHINE", machine])
self.runCommand(["setVariable", "MACHINE_HOB", machine])
def set_sdk_machine(self, sdk_machine):
self.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
self.runCommand(["setVariable", "SDKMACHINE_HOB", sdk_machine])
def set_image_fstypes(self, image_fstypes):
self.runCommand(["setVariable", "IMAGE_FSTYPES", image_fstypes])
def set_distro(self, distro):
self.runCommand(["setVariable", "DISTRO", distro])
self.runCommand(["setVariable", "DISTRO_HOB", distro])
def set_package_format(self, format):
package_classes = ""
for pkgfmt in format.split():
package_classes += ("package_%s" % pkgfmt + " ")
self.runCommand(["setVariable", "PACKAGE_CLASSES", package_classes])
self.runCommand(["setVariable", "PACKAGE_CLASSES_HOB", package_classes])
def set_bbthreads(self, threads):
self.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])
self.runCommand(["setVariable", "BB_NUMBER_THREADS_HOB", threads])
def set_pmake(self, threads):
pmake = "-j %s" % threads
self.runCommand(["setVariable", "PARALLEL_MAKE", pmake])
self.runCommand(["setVariable", "PARALLEL_MAKE_HOB", pmake])
def set_dl_dir(self, directory):
self.runCommand(["setVariable", "DL_DIR", directory])
self.runCommand(["setVariable", "DL_DIR_HOB", directory])
def set_sstate_dir(self, directory):
self.runCommand(["setVariable", "SSTATE_DIR", directory])
self.runCommand(["setVariable", "SSTATE_DIR_HOB", directory])
def set_sstate_mirrors(self, url):
self.runCommand(["setVariable", "SSTATE_MIRRORS", url])
def set_sstate_mirror(self, url):
self.runCommand(["setVariable", "SSTATE_MIRROR_HOB", url])
def set_extra_size(self, image_extra_size):
self.runCommand(["setVariable", "IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size)])
@@ -351,13 +304,16 @@ class HobHandler(gobject.GObject):
self.runCommand(["setVariable", "IMAGE_ROOTFS_SIZE", str(image_rootfs_size)])
def set_incompatible_license(self, incompat_license):
self.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", incompat_license])
self.runCommand(["setVariable", "INCOMPATIBLE_LICENSE_HOB", incompat_license])
def set_extra_config(self, extra_setting):
for key in extra_setting.keys():
value = extra_setting[key]
self.runCommand(["setVariable", key, value])
def set_config_filter(self, config_filter):
self.runCommand(["setConfFilter", config_filter])
def set_http_proxy(self, http_proxy):
self.runCommand(["setVariable", "http_proxy", http_proxy])
@@ -367,8 +323,12 @@ class HobHandler(gobject.GObject):
def set_ftp_proxy(self, ftp_proxy):
self.runCommand(["setVariable", "ftp_proxy", ftp_proxy])
def set_socks_proxy(self, socks_proxy):
self.runCommand(["setVariable", "all_proxy", socks_proxy])
def set_all_proxy(self, all_proxy):
self.runCommand(["setVariable", "all_proxy", all_proxy])
def set_git_proxy(self, host, port):
self.runCommand(["setVariable", "GIT_PROXY_HOST", host])
self.runCommand(["setVariable", "GIT_PROXY_PORT", port])
def set_cvs_proxy(self, host, port):
self.runCommand(["setVariable", "CVS_PROXY_HOST", host])
@@ -382,10 +342,6 @@ class HobHandler(gobject.GObject):
self.commands_async.append(self.SUB_SANITY_CHECK)
self.run_next_command(self.SANITY_CHECK)
def trigger_network_test(self):
self.commands_async.append(self.SUB_NETWORK_TEST)
self.run_next_command(self.NETWORK_TEST)
def generate_configuration(self):
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_PATH_LAYERS)
@@ -399,7 +355,7 @@ class HobHandler(gobject.GObject):
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_GNERATE_TGTS)
self.run_next_command(self.GENERATE_RECIPES)
def generate_packages(self, tgts, default_task="build"):
targets = []
targets.extend(tgts)
@@ -409,9 +365,8 @@ class HobHandler(gobject.GObject):
self.commands_async.append(self.SUB_BUILD_RECIPES)
self.run_next_command(self.GENERATE_PACKAGES)
def generate_image(self, image, base_image, toolchain, image_packages=[], toolchain_packages=[], default_task="build"):
def generate_image(self, image, toolchain, image_packages=[], toolchain_packages=[], default_task="build"):
self.image = image
self.base_image = base_image
self.toolchain = toolchain
self.package_queue = image_packages
self.toolchain_packages = toolchain_packages
@@ -443,9 +398,6 @@ class HobHandler(gobject.GObject):
def reset_build(self):
self.build.reset()
def get_logfile(self):
return self.server.runCommand(["getVariable", "BB_CONSOLELOG"])[0]
def _remove_redundant(self, string):
ret = []
for i in string.split():
@@ -453,26 +405,20 @@ class HobHandler(gobject.GObject):
ret.append(i)
return " ".join(ret)
def set_var_in_file(self, var, val, default_file=None):
self.server.runCommand(["setVarFile", var, val, default_file])
def get_parameters(self):
# retrieve the parameters from bitbake
params = {}
params["core_base"] = self.runCommand(["getVariable", "COREBASE"]) or ""
hob_layer = params["core_base"] + "/meta-hob"
params["layer"] = self.runCommand(["getVariable", "BBLAYERS"]) or ""
params["layers_non_removable"] = self.runCommand(["getVariable", "BBLAYERS_NON_REMOVABLE"]) or ""
if hob_layer not in params["layer"].split():
params["layer"] += (" " + hob_layer)
if hob_layer not in params["layers_non_removable"].split():
params["layers_non_removable"] += (" " + hob_layer)
params["dldir"] = self.runCommand(["getVariable", "DL_DIR"]) or ""
params["machine"] = self.runCommand(["getVariable", "MACHINE"]) or ""
params["distro"] = self.runCommand(["getVariable", "DISTRO"]) or "defaultsetup"
params["pclass"] = self.runCommand(["getVariable", "PACKAGE_CLASSES"]) or ""
params["sstatedir"] = self.runCommand(["getVariable", "SSTATE_DIR"]) or ""
params["sstatemirror"] = self.runCommand(["getVariable", "SSTATE_MIRRORS"]) or ""
params["sstatemirror"] = self.runCommand(["getVariable", "SSTATE_MIRROR"]) or ""
num_threads = self.runCommand(["getCpuCount"])
if not num_threads:
@@ -554,7 +500,6 @@ class HobHandler(gobject.GObject):
params["runnable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_IMAGE_TYPES"]) or "")
params["runnable_machine_patterns"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_MACHINE_PATTERNS"]) or "")
params["deployable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "DEPLOYABLE_IMAGE_TYPES"]) or "")
params["kernel_image_type"] = self.runCommand(["getVariable", "KERNEL_IMAGETYPE"]) or ""
params["tmpdir"] = self.runCommand(["getVariable", "TMPDIR"]) or ""
params["distro_version"] = self.runCommand(["getVariable", "DISTRO_VERSION"]) or ""
params["target_os"] = self.runCommand(["getVariable", "TARGET_OS"]) or ""
@@ -564,14 +509,15 @@ class HobHandler(gobject.GObject):
params["default_task"] = self.runCommand(["getVariable", "BB_DEFAULT_TASK"]) or "build"
params["socks_proxy"] = self.runCommand(["getVariable", "all_proxy"]) or ""
params["git_proxy_host"] = self.runCommand(["getVariable", "GIT_PROXY_HOST"]) or ""
params["git_proxy_port"] = self.runCommand(["getVariable", "GIT_PROXY_PORT"]) or ""
params["http_proxy"] = self.runCommand(["getVariable", "http_proxy"]) or ""
params["ftp_proxy"] = self.runCommand(["getVariable", "ftp_proxy"]) or ""
params["https_proxy"] = self.runCommand(["getVariable", "https_proxy"]) or ""
params["all_proxy"] = self.runCommand(["getVariable", "all_proxy"]) or ""
params["cvs_proxy_host"] = self.runCommand(["getVariable", "CVS_PROXY_HOST"]) or ""
params["cvs_proxy_port"] = self.runCommand(["getVariable", "CVS_PROXY_PORT"]) or ""
params["image_white_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_WHITE_PATTERN"]) or ""
params["image_black_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_BLACK_PATTERN"]) or ""
return params
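For reference, the pattern used throughout get_parameters() above is a plain getVariable round-trip with a fallback default. A minimal sketch (the handler argument and the BB_NUMBER_THREADS example are assumptions for illustration, not part of the change):

    def get_variable(handler, name, default=""):
        # Ask the BitBake server for a single variable and fall back to a
        # default when it is unset or empty.
        value = handler.runCommand(["getVariable", name])
        return value or default

    # e.g. machine = get_variable(handler, "MACHINE")
    #      threads = get_variable(handler, "BB_NUMBER_THREADS", "1")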

View File

@@ -27,15 +27,14 @@ from bb.ui.crumbs.hobpages import HobPage
#
# PackageListModel
#
class PackageListModel(gtk.ListStore):
class PackageListModel(gtk.TreeStore):
"""
This class defines a gtk.ListStore subclass which will convert the output
of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
This class defines a gtk.TreeStore subclass which will convert the output
of the bb.event.TargetsTreeGenerated event into a gtk.TreeStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_RCP, COL_BINB, COL_INC, COL_FADE_INC, COL_FONT, COL_FLIST) = range(15)
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC, COL_FADE_INC) = range(12)
__gsignals__ = {
"package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
@@ -43,12 +42,18 @@ class PackageListModel(gtk.ListStore):
()),
}
__toolchain_required_packages__ = ["packagegroup-core-standalone-sdk-target", "packagegroup-core-standalone-sdk-target-dbg"]
__toolchain_required_packages__ = ["task-core-standalone-sdk-target", "task-core-standalone-sdk-target-dbg"]
def __init__(self):
self.contents = None
self.images = None
self.pkgs_size = 0
self.pn_path = {}
self.pkg_path = {}
self.rprov_pkg = {}
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
@@ -60,9 +65,8 @@ class PackageListModel(gtk.ListStore):
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING)
gobject.TYPE_BOOLEAN)
"""
Find the model path for the item_name
@@ -70,14 +74,14 @@ class PackageListModel(gtk.ListStore):
"""
def find_path_for_item(self, item_name):
pkg = item_name
if item_name not in self.pn_path.keys():
if item_name not in self.pkg_path.keys():
if item_name not in self.rprov_pkg.keys():
return None
pkg = self.rprov_pkg[item_name]
if pkg not in self.pn_path.keys():
if pkg not in self.pkg_path.keys():
return None
return self.pn_path[pkg]
return self.pkg_path[pkg]
def find_item_for_path(self, item_path):
return self[item_path][self.COL_NAME]
@@ -86,110 +90,25 @@ class PackageListModel(gtk.ListStore):
Helper function to determine whether an item matches the given filter
"""
def tree_model_filter(self, model, it, filter):
name = model.get_value(it, self.COL_NAME)
for key in filter.keys():
if key == self.COL_NAME:
if filter[key] != 'Search packages by name':
if name and filter[key] not in name:
return False
else:
if model.get_value(it, key) not in filter[key]:
return False
self.filtered_nb += 1
if model.get_value(it, key) not in filter[key]:
return False
return True
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
"""
def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
def tree_model(self, filter):
model = self.filter_new()
self.filtered_nb = 0
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
if initial:
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
if excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
elif included_items_ahead:
sort.set_default_sort_func(self.include_item_sort_func, search_data)
else:
if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
sort.set_default_sort_func(self.sort_func, search_data)
else:
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
sort.set_sort_func(PackageListModel.COL_INC, self.sort_column, PackageListModel.COL_INC)
sort.set_sort_func(PackageListModel.COL_SIZE, self.sort_column, PackageListModel.COL_SIZE)
sort.set_sort_func(PackageListModel.COL_BINB, self.sort_column, PackageListModel.COL_BINB)
sort.set_sort_func(PackageListModel.COL_RCP, self.sort_column, PackageListModel.COL_RCP)
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
return sort
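The tree_model() variants above all build the same filter-then-sort pipeline; a minimal standalone PyGTK sketch of that pipeline, using a hypothetical two-column store instead of the full package model:

    import gtk

    store = gtk.ListStore(str, bool)              # NAME, INC
    for name, inc in [("busybox", True), ("dropbear", False)]:
        store.append([name, inc])

    def visible_cb(model, it, filter):
        # show only rows whose column values are listed in the filter dict
        for col, allowed in filter.items():
            if model.get_value(it, col) not in allowed:
                return False
        return True

    filtered = store.filter_new()
    filtered.set_visible_func(visible_cb, {1: [True]})   # only included rows
    view_model = gtk.TreeModelSort(filtered)
    view_model.set_sort_column_id(0, gtk.SORT_ASCENDING)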
def sort_column(self, model, row1, row2, col):
value1 = model.get_value(row1, col)
value2 = model.get_value(row2, col)
if col==PackageListModel.COL_SIZE:
value1 = HobPage._string_to_size(value1)
value2 = HobPage._string_to_size(value2)
cmp_res = cmp(value1, value2)
if cmp_res!=0:
if col==PackageListModel.COL_INC:
return -cmp_res
else:
return cmp_res
else:
name1 = model.get_value(row1, PackageListModel.COL_NAME)
name2 = model.get_value(row2, PackageListModel.COL_NAME)
return cmp(name1,name2)
def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
if val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
else:
val1 = model.get_value(iter1, PackageListModel.COL_FADE_INC)
val2 = model.get_value(iter2, PackageListModel.COL_INC)
return ((val1 == True) and (val2 == False))
def include_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
if val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
else:
val1 = model.get_value(iter1, PackageListModel.COL_INC)
val2 = model.get_value(iter2, PackageListModel.COL_INC)
return ((val1 == False) and (val2 == True))
def sort_func(self, model, iter1, iter2, user_data):
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
if val1 is None or val2 is None:
return 0
elif val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
def convert_vpath_to_path(self, view_model, view_path):
# view_model is the model sorted
# get the path of the model filtered
@@ -201,13 +120,16 @@ class PackageListModel(gtk.ListStore):
return path
def convert_path_to_vpath(self, view_model, path):
name = self.find_item_for_path(path)
it = view_model.get_iter_first()
while it:
name = self.find_item_for_path(path)
view_name = view_model.get_value(it, PackageListModel.COL_NAME)
if view_name == name:
view_path = view_model.get_path(it)
return view_path
child_it = view_model.iter_children(it)
while child_it:
view_name = view_model.get_value(child_it, self.COL_NAME)
if view_name == name:
view_path = view_model.get_path(child_it)
return view_path
child_it = view_model.iter_next(child_it)
it = view_model.iter_next(it)
return None
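convert_path_to_vpath() above walks the new two-level layout (recipe rows at the top level, package rows as children), and the same parent/child traversal recurs throughout the model. A standalone sketch with hypothetical data:

    import gtk

    store = gtk.TreeStore(str)
    recipe_it = store.append(None, ["busybox-1.20.2-r2"])    # recipe row
    store.append(recipe_it, ["busybox"])                      # package rows
    store.append(recipe_it, ["busybox-syslog"])

    it = store.get_iter_first()
    while it:
        child_it = store.iter_children(it)
        while child_it:
            print store.get_value(child_it, 0)
            child_it = store.iter_next(child_it)
        it = store.iter_next(it)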
@@ -216,66 +138,58 @@ class PackageListModel(gtk.ListStore):
bb.event.PackageInfo event and populates the package list.
"""
def populate(self, pkginfolist):
# First clear the model, in case repopulating
self.clear()
def getpkgvalue(pkgdict, key, pkgname, defaultval = None):
value = pkgdict.get('%s_%s' % (key, pkgname), None)
if not value:
value = pkgdict.get(key, defaultval)
return value
self.pkgs_size = 0
self.pn_path = {}
self.pkg_path = {}
self.rprov_pkg = {}
for pkginfo in pkginfolist:
pn = pkginfo['PN']
pv = pkginfo['PV']
pr = pkginfo['PR']
if pn in self.pn_path.keys():
pniter = self.get_iter(self.pn_path[pn])
else:
pniter = self.append(None)
self.set(pniter, self.COL_NAME, pn + '-' + pv + '-' + pr,
self.COL_INC, False)
self.pn_path[pn] = self.get_path(pniter)
pkg = pkginfo['PKG']
pkgv = getpkgvalue(pkginfo, 'PKGV', pkg)
pkgr = getpkgvalue(pkginfo, 'PKGR', pkg)
# PKGSIZE is artificial, will always be overridden with the package name if present
pkgsize = pkginfo.get('PKGSIZE_%s' % pkg, "0")
# PKG_%s is the renamed version
pkg_rename = pkginfo.get('PKG_%s' % pkg, "")
# The rest may be overridden or not
section = getpkgvalue(pkginfo, 'SECTION', pkg, "")
summary = getpkgvalue(pkginfo, 'SUMMARY', pkg, "")
rdep = getpkgvalue(pkginfo, 'RDEPENDS', pkg, "")
rrec = getpkgvalue(pkginfo, 'RRECOMMENDS', pkg, "")
rprov = getpkgvalue(pkginfo, 'RPROVIDES', pkg, "")
files_list = getpkgvalue(pkginfo, 'FILES_INFO', pkg, "")
pkgv = pkginfo['PKGV']
pkgr = pkginfo['PKGR']
pkgsize = pkginfo['PKGSIZE_%s' % pkg] if 'PKGSIZE_%s' % pkg in pkginfo.keys() else "0"
pkg_rename = pkginfo['PKG_%s' % pkg] if 'PKG_%s' % pkg in pkginfo.keys() else ""
section = pkginfo['SECTION_%s' % pkg] if 'SECTION_%s' % pkg in pkginfo.keys() else ""
summary = pkginfo['SUMMARY_%s' % pkg] if 'SUMMARY_%s' % pkg in pkginfo.keys() else ""
rdep = pkginfo['RDEPENDS_%s' % pkg] if 'RDEPENDS_%s' % pkg in pkginfo.keys() else ""
rrec = pkginfo['RRECOMMENDS_%s' % pkg] if 'RRECOMMENDS_%s' % pkg in pkginfo.keys() else ""
rprov = pkginfo['RPROVIDES_%s' % pkg] if 'RPROVIDES_%s' % pkg in pkginfo.keys() else ""
for i in rprov.split():
self.rprov_pkg[i] = pkg
recipe = pn + '-' + pv + '-' + pr
allow_empty = getpkgvalue(pkginfo, 'ALLOW_EMPTY', pkg, "")
if 'ALLOW_EMPTY_%s' % pkg in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY_%s' % pkg]
elif 'ALLOW_EMPTY' in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY']
else:
allow_empty = ""
if pkgsize == "0" and not allow_empty:
continue
# pkgsize is in KB
size = HobPage._size_to_string(HobPage._string_to_size(pkgsize + ' KB'))
self.set(self.append(), self.COL_NAME, pkg, self.COL_VER, pkgv,
it = self.append(pniter)
self.pkg_path[pkg] = self.get_path(it)
self.set(it, self.COL_NAME, pkg, self.COL_VER, pkgv,
self.COL_REV, pkgr, self.COL_RNM, pkg_rename,
self.COL_SEC, section, self.COL_SUM, summary,
self.COL_RDEP, rdep + ' ' + rrec,
self.COL_RPROV, rprov, self.COL_SIZE, size,
self.COL_RCP, recipe, self.COL_BINB, "",
self.COL_INC, False, self.COL_FONT, '10', self.COL_FLIST, files_list)
self.pn_path = {}
it = self.get_iter_first()
while it:
pn = self.get_value(it, self.COL_NAME)
path = self.get_path(it)
self.pn_path[pn] = path
it = self.iter_next(it)
"""
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("package-selection-changed")
self.COL_BINB, "", self.COL_INC, False)
"""
Check whether the item at item_path is included or not
@@ -284,26 +198,54 @@ class PackageListModel(gtk.ListStore):
return self[item_path][self.COL_INC]
"""
Add this item, and any of its dependencies, to the image contents
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("package-selection-changed")
"""
Mark a certain package as selected.
All its dependencies are marked as selected.
The recipe that provides the package is marked as selected.
If the user explicitly selects a recipe, all the packages it provides are selected.
"""
def include_item(self, item_path, binb=""):
if self.path_included(item_path):
return
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_RDEP]
item_rdep = self[item_path][self.COL_RDEP]
self[item_path][self.COL_INC] = True
it = self.get_iter(item_path)
# If the user explicitly selects a recipe, all the packages it provides are selected.
if not self[item_path][self.COL_VER] and binb == "User Selected":
child_it = self.iter_children(it)
while child_it:
child_path = self.get_path(child_it)
child_included = self.path_included(child_path)
if not child_included:
self.include_item(child_path, binb="User Selected")
child_it = self.iter_next(child_it)
return
# The recipe that provides the package is also marked as selected
parent_it = self.iter_parent(it)
if parent_it:
parent_path = self.get_path(parent_it)
self[parent_path][self.COL_INC] = True
item_bin = self[item_path][self.COL_BINB].split(', ')
if binb and not binb in item_bin:
item_bin.append(binb)
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
if item_deps:
if item_rdep:
# Ensure all of the item's deps are included and, where appropriate,
# add this item to their COL_BINB
for dep in item_deps.split(" "):
for dep in item_rdep.split(" "):
if dep.startswith('('):
continue
# If the contents model doesn't already contain dep, add it
@@ -322,6 +264,12 @@ class PackageListModel(gtk.ListStore):
elif not dep_included:
self.include_item(dep_path, binb=item_name)
"""
Mark a certain package as de-selected.
All other packages that depend on this package are marked as de-selected.
If none of the packages provided by the recipe is still selected, the recipe should be marked as de-selected.
If the user explicitly de-selects a recipe, all the packages it provides are de-selected.
"""
def exclude_item(self, item_path):
if not self.path_included(item_path):
return
@@ -329,9 +277,37 @@ class PackageListModel(gtk.ListStore):
self[item_path][self.COL_INC] = False
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_RDEP]
if item_deps:
for dep in item_deps.split(" "):
item_rdep = self[item_path][self.COL_RDEP]
it = self.get_iter(item_path)
# If the user explicitly de-selects a recipe, all the packages it provides are de-selected.
if not self[item_path][self.COL_VER]:
child_it = self.iter_children(it)
while child_it:
child_path = self.get_path(child_it)
child_included = self[child_path][self.COL_INC]
if child_included:
self.exclude_item(child_path)
child_it = self.iter_next(child_it)
return
# If none of the packages provided by the recipe is still selected, the recipe should be marked as de-selected.
parent_it = self.iter_parent(it)
peer_iter = self.iter_children(parent_it)
enabled = 0
while peer_iter:
peer_path = self.get_path(peer_iter)
if self[peer_path][self.COL_INC]:
enabled = 1
break
peer_iter = self.iter_next(peer_iter)
if not enabled:
parent_path = self.get_path(parent_it)
self[parent_path][self.COL_INC] = False
# All packages that depend on this package are de-selected.
if item_rdep:
for dep in item_rdep.split(" "):
if dep.startswith('('):
continue
dep_path = self.find_path_for_item(dep)
@@ -351,40 +327,51 @@ class PackageListModel(gtk.ListStore):
self.exclude_item(binb_path)
"""
Empty self.contents by setting the include of each entry to None
The package model may be incomplete, so when set_selected_packages()
is called some packages cannot be marked as included.
Return the list of packages that could not be set.
"""
def reset(self):
it = self.get_iter_first()
while it:
self.set(it,
self.COL_INC, False,
self.COL_BINB, "")
it = self.iter_next(it)
def set_selected_packages(self, packagelist):
left = []
for pn in packagelist:
if pn in self.pkg_path.keys():
path = self.pkg_path[pn]
self.include_item(item_path=path,
binb="User Selected")
else:
left.append(pn)
self.selection_change_notification()
def get_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_NAME)
packagelist.append(name)
it = self.iter_next(it)
return packagelist
return left
def get_user_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
binb = self.get_value(it, self.COL_BINB)
if binb == "User Selected":
name = self.get_value(it, self.COL_NAME)
child_it = self.iter_children(it)
while child_it:
if self.get_value(child_it, self.COL_INC):
binb = self.get_value(child_it, self.COL_BINB)
if not binb or binb == "User Selected":
name = self.get_value(child_it, self.COL_NAME)
packagelist.append(name)
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return packagelist
def get_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
child_it = self.iter_children(it)
while child_it:
if self.get_value(child_it, self.COL_INC):
name = self.get_value(child_it, self.COL_NAME)
packagelist.append(name)
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return packagelist
@@ -395,31 +382,16 @@ class PackageListModel(gtk.ListStore):
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_NAME)
if name.endswith("-dev") or name.endswith("-dbg"):
packagelist.append(name)
child_it = self.iter_children(it)
while child_it:
name = self.get_value(child_it, self.COL_NAME)
inc = self.get_value(child_it, self.COL_INC)
if inc or name.endswith("-dev") or name.endswith("-dbg"):
packagelist.append(name)
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return list(set(packagelist + self.__toolchain_required_packages__));
"""
The package model may be incomplete, so when set_selected_packages()
is called some packages cannot be marked as included.
Return the list of packages that could not be set.
"""
def set_selected_packages(self, packagelist, user_selected=False):
left = []
binb = 'User Selected' if user_selected else ''
for pn in packagelist:
if pn in self.pn_path.keys():
path = self.pn_path[pn]
self.include_item(item_path=path, binb=binb)
else:
left.append(pn)
self.selection_change_notification()
return left
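A sketch of how a caller might consume the leftover list documented above (the package_model and saved_packages names here are illustrative only):

    # Packages the model does not (yet) know about are returned rather than
    # silently dropped, so the caller can report them.
    left_over = package_model.set_selected_packages(saved_packages, user_selected=True)
    if left_over:
        print "not yet available in the package model: %s" % ", ".join(left_over)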
"""
Return the selected package size, unit is B.
"""
@@ -427,16 +399,37 @@ class PackageListModel(gtk.ListStore):
packages_size = 0
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
str_size = self.get_value(it, self.COL_SIZE)
if not str_size:
continue
child_it = self.iter_children(it)
while child_it:
if self.get_value(child_it, self.COL_INC):
str_size = self.get_value(child_it, self.COL_SIZE)
if not str_size:
continue
packages_size += HobPage._string_to_size(str_size)
packages_size += HobPage._string_to_size(str_size)
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return packages_size
"""
Empty self.contents by setting the include of each entry to None
"""
def reset(self):
self.pkgs_size = 0
it = self.get_iter_first()
while it:
self.set(it, self.COL_INC, False)
child_it = self.iter_children(it)
while child_it:
self.set(child_it,
self.COL_INC, False,
self.COL_BINB, "")
child_it = self.iter_next(child_it)
it = self.iter_next(it)
self.selection_change_notification()
"""
Resync the state of included items to a backup column before performing the fade-out visual effect
"""
@@ -445,6 +438,9 @@ class PackageListModel(gtk.ListStore):
while it:
active = self.get_value(it, self.COL_INC)
self.set(it, self.COL_FADE_INC, active)
if self.iter_has_child(it):
self.resync_fadeout_column(self.iter_children(it))
it = self.iter_next(it)
#
@@ -457,10 +453,9 @@ class RecipeListModel(gtk.ListStore):
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC, COL_SUMMARY, COL_VERSION,
COL_REVISION, COL_HOMEPAGE, COL_BUGTRACKER) = range(17)
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC) = range(12)
__custom_image__ = "Create your own image"
__dummy_image__ = "Create your own image"
__gsignals__ = {
"recipe-selection-changed" : (gobject.SIGNAL_RUN_LAST,
@@ -483,12 +478,7 @@ class RecipeListModel(gtk.ListStore):
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING)
gobject.TYPE_BOOLEAN)
"""
Find the model path for the item_name
@@ -520,104 +510,32 @@ class RecipeListModel(gtk.ListStore):
return False
for key in filter.keys():
if key == self.COL_NAME:
if filter[key] != 'Search recipes by name' and filter[key] != 'Search package groups by name':
if filter[key] not in name:
return False
else:
if model.get_value(it, key) not in filter[key]:
return False
self.filtered_nb += 1
if model.get_value(it, key) not in filter[key]:
return False
return True
def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
if val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
else:
val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == True) and (val2 == False))
def include_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
if val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
else:
val1 = model.get_value(iter1, RecipeListModel.COL_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == False) and (val2 == True))
def sort_func(self, model, iter1, iter2, user_data):
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
if val1 is None or val2 is None:
return 0
elif val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return 0
def exclude_item_sort_func(self, model, iter1, iter2):
val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == True) and (val2 == False))
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
containing only the items which are items specified by filter
"""
def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
def tree_model(self, filter, excluded_items_ahead=False):
model = self.filter_new()
self.filtered_nb = 0
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
if initial:
if excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func)
else:
sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
if excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
elif included_items_ahead:
sort.set_default_sort_func(self.include_item_sort_func, search_data)
else:
if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
sort.set_default_sort_func(self.sort_func, search_data)
else:
sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
sort.set_sort_func(RecipeListModel.COL_INC, self.sort_column, RecipeListModel.COL_INC)
sort.set_sort_func(RecipeListModel.COL_GROUP, self.sort_column, RecipeListModel.COL_GROUP)
sort.set_sort_func(RecipeListModel.COL_BINB, self.sort_column, RecipeListModel.COL_BINB)
sort.set_sort_func(RecipeListModel.COL_LIC, self.sort_column, RecipeListModel.COL_LIC)
return sort
def sort_column(self, model, row1, row2, col):
value1 = model.get_value(row1, col)
value2 = model.get_value(row2, col)
cmp_res = cmp(value1, value2)
if cmp_res!=0:
if col==RecipeListModel.COL_INC:
return -cmp_res
else:
return cmp_res
else:
name1 = model.get_value(row1, RecipeListModel.COL_NAME)
name2 = model.get_value(row2, RecipeListModel.COL_NAME)
return cmp(name1,name2)
def convert_vpath_to_path(self, view_model, view_path):
filtered_model_path = view_model.convert_path_to_child_path(view_path)
filtered_model = view_model.get_model()
@@ -646,15 +564,14 @@ class RecipeListModel(gtk.ListStore):
self.clear()
# dummy image for prompt
self.set(self.append(), self.COL_NAME, self.__custom_image__,
self.COL_DESC, "Use 'Edit image' to customize recipes and packages " \
"to be included in your image ",
self.set(self.append(), self.COL_NAME, self.__dummy_image__,
self.COL_DESC, "Use the 'View recipes' and 'View packages' " \
"options to select what you want to include " \
"in your image.",
self.COL_LIC, "", self.COL_GROUP, "",
self.COL_DEPS, "", self.COL_BINB, "",
self.COL_TYPE, "image", self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__custom_image__,
self.COL_SUMMARY, "", self.COL_VERSION, "", self.COL_REVISION, "",
self.COL_HOMEPAGE, "", self.COL_BUGTRACKER, "")
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__dummy_image__)
for item in event_model["pn"]:
name = item
@@ -662,17 +579,12 @@ class RecipeListModel(gtk.ListStore):
lic = event_model["pn"][item]["license"]
group = event_model["pn"][item]["section"]
inherits = event_model["pn"][item]["inherits"]
summary = event_model["pn"][item]["summary"]
version = event_model["pn"][item]["version"]
revision = event_model["pn"][item]["revision"]
homepage = event_model["pn"][item]["homepage"]
bugtracker = event_model["pn"][item]["bugtracker"]
install = []
depends = event_model["depends"].get(item, []) + event_model["rdepends-pn"].get(item, [])
if ('packagegroup.bbclass' in " ".join(inherits)):
atype = 'packagegroup'
if ('task-' in name):
atype = 'task'
elif ('image.bbclass' in " ".join(inherits)):
if name != "hob-image":
atype = 'image'
@@ -688,9 +600,7 @@ class RecipeListModel(gtk.ListStore):
self.COL_LIC, lic, self.COL_GROUP, group,
self.COL_DEPS, " ".join(depends), self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item,
self.COL_SUMMARY, summary, self.COL_VERSION, version, self.COL_REVISION, revision,
self.COL_HOMEPAGE, homepage, self.COL_BUGTRACKER, bugtracker)
self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item)
self.pn_path = {}
it = self.get_iter_first()
@@ -750,10 +660,6 @@ class RecipeListModel(gtk.ListStore):
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name, image_contents=image_contents)
dep_bin = self[item_path][self.COL_BINB].split(', ')
if self[item_path][self.COL_NAME] in dep_bin:
dep_bin.remove(self[item_path][self.COL_NAME])
self[item_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
def exclude_item(self, item_path):
if not self.path_included(item_path):
@@ -844,9 +750,3 @@ class RecipeListModel(gtk.ListStore):
binb="User Selected",
image_contents=True)
self.selection_change_notification()
def set_custom_image_version(self, version):
self.custom_image_version = version
def get_custom_image_version(self):
return self.custom_image_version

View File

@@ -38,7 +38,6 @@ class HobPage (gtk.VBox):
self.title = "Hob -- Image Creator"
else:
self.title = title
self.title_label = gtk.Label()
self.box_group_area = gtk.VBox(False, 12)
self.box_group_area.set_size_request(self.builder_width - 73 - 73, self.builder_height - 88 - 15 - 15)
@@ -47,9 +46,6 @@ class HobPage (gtk.VBox):
self.group_align.add(self.box_group_area)
self.box_group_area.set_homogeneous(False)
def set_title(self, title):
self.title = title
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
def add_onto_top_bar(self, widget = None, padding = 0):
# the top button occupies 1/7 of the page height
@@ -62,9 +58,9 @@ class HobPage (gtk.VBox):
hbox = gtk.HBox()
self.title_label = gtk.Label()
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
hbox.pack_start(self.title_label, expand=False, fill=False, padding=20)
label = gtk.Label()
label.set_markup("<span size='x-large'>%s</span>" % self.title)
hbox.pack_start(label, expand=False, fill=False, padding=20)
if widget:
# add the widget in the event box

View File

@@ -23,7 +23,6 @@ import os
import os.path
import sys
import pango, pangocairo
import cairo
import math
from bb.ui.crumbs.hobcolor import HobColors
@@ -44,6 +43,8 @@ class hic:
ICON_PACKAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('packages/packages_hover.png'))
ICON_LAYERS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_display.png'))
ICON_LAYERS_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_hover.png'))
ICON_TEMPLATES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_display.png'))
ICON_TEMPLATES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_hover.png'))
ICON_IMAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_display.png'))
ICON_IMAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_hover.png'))
ICON_SETTINGS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('settings/settings_display.png'))
@@ -61,6 +62,34 @@ class hic:
ICON_INDI_TICK_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/tick.png'))
ICON_INDI_INFO_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/info.png'))
class hcc:
SUPPORTED_IMAGE_TYPES = {
"jffs2" : ["jffs2"],
"sum.jffs2" : ["sum.jffs2"],
"cramfs" : ["cramfs"],
"ext2" : ["ext2"],
"ext2.gz" : ["ext2.gz"],
"ext2.bz2" : ["ext2.bz2"],
"ext3" : ["ext3"],
"ext3.gz" : ["ext3.gz"],
"ext2.lzma" : ["ext2.lzma"],
"btrfs" : ["btrfs"],
"live" : ["hddimg", "iso"],
"squashfs" : ["squashfs"],
"squashfs-lzma" : ["squashfs-lzma"],
"ubi" : ["ubi"],
"tar" : ["tar"],
"tar.gz" : ["tar.gz"],
"tar.bz2" : ["tar.bz2"],
"tar.xz" : ["tar.xz"],
"cpio" : ["cpio"],
"cpio.gz" : ["cpio.gz"],
"cpio.xz" : ["cpio.xz"],
"vmdk" : ["vmdk"],
"cpio.lzma" : ["cpio.lzma"],
}
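The mapping above lets the UI translate an IMAGE_FSTYPES entry into the concrete file suffixes to look for in the deploy directory; a minimal sketch (the helper name is an assumption):

    def suffixes_for_fstypes(fstypes):
        # unknown fstypes are simply skipped
        suffixes = []
        for fstype in fstypes.split():
            suffixes.extend(hcc.SUPPORTED_IMAGE_TYPES.get(fstype, []))
        return suffixes

    # e.g. suffixes_for_fstypes("ext3 live") -> ["ext3", "hddimg", "iso"]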
class HobViewTable (gtk.VBox):
"""
A VBox to contain the table for different recipe views and package view
@@ -83,29 +112,22 @@ class HobViewTable (gtk.VBox):
gobject.TYPE_PYOBJECT,)),
}
def __init__(self, columns, name):
def __init__(self, columns):
gtk.VBox.__init__(self, False, 6)
self.table_tree = gtk.TreeView()
self.table_tree.set_headers_visible(True)
self.table_tree.set_headers_clickable(True)
self.table_tree.set_enable_search(True)
self.table_tree.set_rules_hint(True)
self.table_tree.set_enable_tree_lines(True)
self.table_tree.get_selection().set_mode(gtk.SELECTION_SINGLE)
self.toggle_columns = []
self.table_tree.connect("row-activated", self.row_activated_cb)
self.top_bar = None
self.tab_name = name
for i, column in enumerate(columns):
col_name = column['col_name']
col = gtk.TreeViewColumn(col_name)
col = gtk.TreeViewColumn(column['col_name'])
col.set_clickable(True)
col.set_resizable(True)
if self.tab_name.startswith('Included'):
if col_name!='Included':
col.set_sort_column_id(column['col_id'])
else:
col.set_sort_column_id(column['col_id'])
col.set_sort_column_id(column['col_id'])
if 'col_min' in column.keys():
col.set_min_width(column['col_min'])
if 'col_max' in column.keys():
@@ -118,8 +140,6 @@ class HobViewTable (gtk.VBox):
cell = gtk.CellRendererText()
col.pack_start(cell, True)
col.set_attributes(cell, text=column['col_id'])
if 'col_t_id' in column.keys():
col.add_attribute(cell, 'font', column['col_t_id'])
elif column['col_style'] == 'check toggle':
cell = HobCellRendererToggle()
cell.set_property('activatable', True)
@@ -128,9 +148,7 @@ class HobViewTable (gtk.VBox):
self.toggle_id = i
col.pack_end(cell, True)
col.set_attributes(cell, active=column['col_id'])
self.toggle_columns.append(col_name)
if 'col_group' in column.keys():
col.set_cell_data_func(cell, self.set_group_number_cb)
self.toggle_columns.append(column['col_name'])
elif column['col_style'] == 'radio toggle':
cell = gtk.CellRendererToggle()
cell.set_property('activatable', True)
@@ -139,68 +157,23 @@ class HobViewTable (gtk.VBox):
self.toggle_id = i
col.pack_end(cell, True)
col.set_attributes(cell, active=column['col_id'])
self.toggle_columns.append(col_name)
self.toggle_columns.append(column['col_name'])
elif column['col_style'] == 'binb':
cell = gtk.CellRendererText()
col.pack_start(cell, True)
col.set_cell_data_func(cell, self.display_binb_cb, column['col_id'])
if 'col_t_id' in column.keys():
col.add_attribute(cell, 'font', column['col_t_id'])
self.scroll = gtk.ScrolledWindow()
self.scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
self.scroll.add(self.table_tree)
self.pack_end(self.scroll, True, True, 0)
def add_no_result_bar(self, entry):
color = HobColors.KHAKI
self.top_bar = gtk.EventBox()
self.top_bar.set_size_request(-1, 70)
self.top_bar.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
self.top_bar.set_flags(gtk.CAN_DEFAULT)
self.top_bar.grab_default()
no_result_tab = gtk.Table(5, 20, True)
self.top_bar.add(no_result_tab)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
title = "No results matching your search"
label.set_markup("<span size='x-large'><b>%s</b></span>" % title)
no_result_tab.attach(label, 1, 14, 1, 4)
clear_button = HobButton("Clear search")
clear_button.set_tooltip_text("Clear search query")
clear_button.connect('clicked', self.set_search_entry_clear_cb, entry)
no_result_tab.attach(clear_button, 16, 19, 1, 4)
self.pack_start(self.top_bar, False, True, 12)
self.top_bar.show_all()
def set_search_entry_clear_cb(self, button, search):
if search.get_editable() == True:
search.set_text("")
search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
search.grab_focus()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.add(self.table_tree)
self.pack_start(scroll, True, True, 0)
def display_binb_cb(self, col, cell, model, it, col_id):
binb = model.get_value(it, col_id)
# Just display the first item
if binb:
bin = binb.split(', ')
total_no = len(bin)
if total_no > 1 and bin[0] == "User Selected":
if total_no > 2:
present_binb = bin[1] + ' (+' + str(total_no - 1) + ')'
else:
present_binb = bin[1]
else:
if total_no > 1:
present_binb = bin[0] + ' (+' + str(total_no - 1) + ')'
else:
present_binb = bin[0]
cell.set_property('text', present_binb)
cell.set_property('text', bin[0])
else:
cell.set_property('text', "")
return True
@@ -208,6 +181,10 @@ class HobViewTable (gtk.VBox):
def set_model(self, tree_model):
self.table_tree.set_model(tree_model)
def set_search_entry(self, search_column_id, entry):
self.table_tree.set_search_column(search_column_id)
self.table_tree.set_search_entry(entry)
def toggle_default(self):
model = self.table_tree.get_model()
if not model:
@@ -227,15 +204,6 @@ class HobViewTable (gtk.VBox):
def stop_cell_fadeinout_cb(self, ctrl, cell, tree):
self.emit("cell-fadeinout-stopped", ctrl, cell, tree)
def set_group_number_cb(self, col, cell, model, iter):
if model and (model.iter_parent(iter) == None):
cell.cell_attr["number_of_children"] = model.iter_n_children(iter)
else:
cell.cell_attr["number_of_children"] = 0
def connect_group_selection(self, cb_func):
self.table_tree.get_selection().connect("changed", cb_func)
"""
A method to calculate a softened value for the colour of a widget when in the
provided state.
@@ -257,7 +225,7 @@ def soften_color(widget, state=gtk.STATE_NORMAL):
color.blue = color.blue * blend + style.base[state].blue * (1.0 - blend)
return color.to_string()
class BaseHobButton(gtk.Button):
class HobButton(gtk.Button):
"""
A gtk.Button subclass which follows the visual design of Hob for primary
action buttons
@@ -271,33 +239,24 @@ class BaseHobButton(gtk.Button):
@staticmethod
def style_button(button):
style = button.get_style()
style = gtk.rc_get_style_by_paths(gtk.settings_get_default(), 'gtk-button', 'gtk-button', gobject.TYPE_NONE)
button_color = gtk.gdk.Color(HobColors.ORANGE)
button.modify_bg(gtk.STATE_NORMAL, button_color)
button.modify_bg(gtk.STATE_PRELIGHT, button_color)
button.modify_bg(gtk.STATE_SELECTED, button_color)
button.set_flags(gtk.CAN_DEFAULT)
button.grab_default()
# label = "<span size='x-large'><b>%s</b></span>" % gobject.markup_escape_text(button.get_label())
label = button.get_label()
label = "<span size='x-large'><b>%s</b></span>" % gobject.markup_escape_text(button.get_label())
button.set_label(label)
button.child.set_use_markup(True)
class HobButton(BaseHobButton):
"""
A gtk.Button subclass which follows the visual design of Hob for primary
action buttons
label: the text to display as the button's label
"""
def __init__(self, label):
BaseHobButton.__init__(self, label)
HobButton.style_button(self)
class HobAltButton(BaseHobButton):
class HobAltButton(gtk.Button):
"""
A gtk.Button subclass which has no relief, and so is more discreet
"""
def __init__(self, label):
BaseHobButton.__init__(self, label)
gtk.Button.__init__(self, label)
HobAltButton.style_button(self)
"""
@@ -323,6 +282,14 @@ class HobAltButton(BaseHobButton):
button.set_label("<span size='large' color='%s'><b>%s</b></span>" % (colour, gobject.markup_escape_text(button.text)))
button.child.set_use_markup(True)
@staticmethod
def style_button(button):
button.text = button.get_label()
button.connect("state-changed", HobAltButton.desensitise_on_state_change_cb)
HobAltButton.set_text(button)
button.child.set_use_markup(True)
button.set_relief(gtk.RELIEF_NONE)
class HobImageButton(gtk.Button):
"""
A gtk.Button with an icon and two rows of text, the second of which is
@@ -375,17 +342,21 @@ class HobInfoButton(gtk.EventBox):
def __init__(self, tip_markup, parent=None):
gtk.EventBox.__init__(self)
self.image = gtk.Image()
self.image.set_from_file(
hic.ICON_INFO_DISPLAY_FILE)
self.image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
self.image.show()
self.add(self.image)
self.tip_markup = tip_markup
self.my_parent = parent
self.set_events(gtk.gdk.BUTTON_RELEASE |
gtk.gdk.ENTER_NOTIFY_MASK |
gtk.gdk.LEAVE_NOTIFY_MASK)
self.ptip = PersistentTooltip(tip_markup)
if parent:
self.ptip.set_parent(parent)
self.ptip.set_transient_for(parent)
self.ptip.set_destroy_with_parent(True)
self.connect("button-release-event", self.button_release_cb)
self.connect("enter-notify-event", self.mouse_in_cb)
self.connect("leave-notify-event", self.mouse_out_cb)
@@ -395,18 +366,7 @@ class HobInfoButton(gtk.EventBox):
PersistentTooltip
"""
def button_release_cb(self, widget, event):
from bb.ui.crumbs.hig.propertydialog import PropertyDialog
self.dialog = PropertyDialog(title = '',
parent = self.my_parent,
information = self.tip_markup,
flags = gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
button = self.dialog.add_button("Close", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(button)
button.connect("clicked", lambda w: self.dialog.destroy())
self.dialog.show_all()
self.dialog.run()
self.ptip.show()
"""
Change to the prelight image when the mouse enters the widget
@@ -420,168 +380,446 @@ class HobInfoButton(gtk.EventBox):
def mouse_out_cb(self, widget, event):
self.image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
class HobIndicator(gtk.DrawingArea):
def __init__(self, count):
gtk.DrawingArea.__init__(self)
# Set no window for transparent background
self.set_has_window(False)
self.set_size_request(38,38)
# We need to pass through button clicks
self.add_events(gtk.gdk.BUTTON_PRESS_MASK | gtk.gdk.BUTTON_RELEASE_MASK)
class HobTabBar(gtk.DrawingArea):
__gsignals__ = {
"blank-area-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,
gobject.TYPE_INT,
gobject.TYPE_INT,
gobject.TYPE_INT,)),
self.connect('expose-event', self.expose)
"tab-switched" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,)),
}
self.count = count
self.color = HobColors.GRAY
def expose(self, widget, event):
if self.count and self.count > 0:
ctx = widget.window.cairo_create()
x, y, w, h = self.allocation
ctx.set_operator(cairo.OPERATOR_OVER)
ctx.set_source_color(gtk.gdk.color_parse(self.color))
ctx.translate(w/2, h/2)
ctx.arc(x, y, min(w,h)/2 - 2, 0, 2*math.pi)
ctx.fill_preserve()
layout = self.create_pango_layout(str(self.count))
textw, texth = layout.get_pixel_size()
x = (w/2)-(textw/2) + x
y = (h/2) - (texth/2) + y
ctx.move_to(x, y)
self.window.draw_layout(self.style.light_gc[gtk.STATE_NORMAL], int(x), int(y), layout)
def set_count(self, count):
self.count = count
def set_active(self, active):
if active:
self.color = HobColors.DEEP_RED
else:
self.color = HobColors.GRAY
class HobTabLabel(gtk.HBox):
def __init__(self, text, count=0):
gtk.HBox.__init__(self, False, 0)
self.indicator = HobIndicator(count)
self.indicator.show()
self.pack_end(self.indicator, False, False)
self.lbl = gtk.Label(text)
self.lbl.set_alignment(0.0, 0.5)
self.lbl.show()
self.pack_end(self.lbl, True, True, 6)
def set_count(self, count):
self.indicator.set_count(count)
def set_active(self, active=True):
self.indicator.set_active(active)
class HobNotebook(gtk.Notebook):
def __init__(self):
gtk.Notebook.__init__(self)
self.set_property('homogeneous', True)
gtk.DrawingArea.__init__(self)
self.children = []
self.pages = []
self.tab_width = 140
self.tab_height = 52
self.tab_x = 10
self.tab_y = 0
self.width = 500
self.height = 53
self.tab_w_ratio = 140 * 1.0/500
self.tab_h_ratio = 52 * 1.0/53
self.set_size_request(self.width, self.height)
self.current_child = None
self.font = self.get_style().font_desc
self.font.set_size(pango.SCALE * 13)
self.update_children_text_layout_and_bg_color()
self.blank_rectangle = None
self.tab_pressed = False
self.set_property('can-focus', True)
self.set_events(gtk.gdk.EXPOSURE_MASK | gtk.gdk.POINTER_MOTION_MASK |
gtk.gdk.BUTTON1_MOTION_MASK | gtk.gdk.BUTTON_PRESS_MASK |
gtk.gdk.BUTTON_RELEASE_MASK)
self.connect("expose-event", self.on_draw)
self.connect("button-press-event", self.button_pressed_cb)
self.connect("button-release-event", self.button_released_cb)
self.connect("query-tooltip", self.query_tooltip_cb)
self.show_all()
def button_released_cb(self, widget, event):
self.tab_pressed = False
self.queue_draw()
def button_pressed_cb(self, widget, event):
if event.type == gtk.gdk._2BUTTON_PRESS:
return
result = False
if self.is_focus() or event.type == gtk.gdk.BUTTON_PRESS:
x, y = event.get_coords()
# check which tab was clicked
for child in self.children:
if (child["x"] < x) and (x < child["x"] + self.tab_width) \
and (child["y"] < y) and (y < child["y"] + self.tab_height):
self.current_child = child
result = True
self.grab_focus()
break
# check whether the blank area has focus
if (self.blank_rectangle) and (self.blank_rectangle.x > 0) and (self.blank_rectangle.y > 0):
if (self.blank_rectangle.x < x) and (x < self.blank_rectangle.x + self.blank_rectangle.width) \
and (self.blank_rectangle.y < y) and (y < self.blank_rectangle.y + self.blank_rectangle.height):
self.grab_focus()
if result == True:
page = self.current_child["toggled_page"]
self.emit("tab-switched", page)
self.tab_pressed = True
self.queue_draw()
def update_children_size(self):
# calculate the size of tabs
self.tab_width = int(self.width * self.tab_w_ratio)
self.tab_height = int(self.height * self.tab_h_ratio)
for i, child in enumerate(self.children):
child["x"] = self.tab_x + i * self.tab_width
child["y"] = self.tab_y
if self.blank_rectangle:
self.resize_blank_rectangle()
def resize_blank_rectangle(self):
width = self.width - self.tab_width * len(self.children) - self.tab_x
x = self.tab_x + self.tab_width * len(self.children)
hpadding = vpadding = 5
self.blank_rectangle = self.set_blank_size(x + hpadding, self.tab_y + vpadding,
width - 2 * hpadding, self.tab_height - 2 * vpadding)
def update_children_text_layout_and_bg_color(self):
style = self.get_style().copy()
color = style.base[gtk.STATE_NORMAL]
for child in self.children:
pangolayout = self.create_pango_layout(child["title"])
pangolayout.set_font_description(self.font)
child["title_layout"] = pangolayout
child["r"] = color.red
child["g"] = color.green
child["b"] = color.blue
def append_tab_child(self, title, page, tooltip=""):
num = len(self.children) + 1
self.tab_width = self.tab_width * len(self.children) / num
i = 0
for i, child in enumerate(self.children):
child["x"] = self.tab_x + i * self.tab_width
i += 1
x = self.tab_x + i * self.tab_width
y = self.tab_y
pangolayout = self.create_pango_layout(title)
pangolayout.set_font_description(self.font)
color = self.style.base[gtk.STATE_NORMAL]
new_one = {
"x" : x,
"y" : y,
"r" : color.red,
"g" : color.green,
"b" : color.blue,
"title_layout" : pangolayout,
"toggled_page" : page,
"title" : title,
"indicator_show" : False,
"indicator_number" : 0,
"tooltip_markup" : tooltip,
}
self.children.append(new_one)
if tooltip and (not self.props.has_tooltip):
self.props.has_tooltip = True
# set the default current child
if not self.current_child:
self.current_child = new_one
def on_draw(self, widget, event):
cr = widget.window.cairo_create()
self.width = self.allocation.width
self.height = self.allocation.height
self.update_children_size()
self.draw_background(cr)
self.draw_toggled_tab(cr)
for child in self.children:
if child["indicator_show"] == True:
self.draw_indicator(cr, child)
self.draw_tab_text(cr)
def draw_background(self, cr):
style = self.get_style()
if self.is_focus():
cr.set_source_color(style.base[gtk.STATE_SELECTED])
else:
cr.set_source_color(style.base[gtk.STATE_NORMAL])
y = 6
h = self.height - 6 - 1
gap = 1
w = self.children[0]["x"]
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(0, y, w - gap, h) # start rectangle
cr.fill()
cr.set_source_color(style.base[gtk.STATE_NORMAL])
cr.rectangle(w - gap, y, w, h) #first gap
cr.fill()
w = self.tab_width
for child in self.children:
x = child["x"]
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(x, y, w - gap, h) # tab rectangle
cr.fill()
cr.set_source_color(style.base[gtk.STATE_NORMAL])
cr.rectangle(x + w - gap, y, w, h) # gap
cr.fill()
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(x + w, y, self.width - x - w, h) # last rectangle
cr.fill()
def draw_tab_text(self, cr):
style = self.get_style()
for child in self.children:
pangolayout = child["title_layout"]
if pangolayout:
fontw, fonth = pangolayout.get_pixel_size()
# center pos
off_x = (self.tab_width - fontw) / 2
off_y = (self.tab_height - fonth) / 2
x = child["x"] + off_x
y = child["y"] + off_y
if not child == self.current_child:
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), pangolayout, gtk.gdk.Color(HobColors.WHITE))
else:
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), pangolayout)
def draw_toggled_tab(self, cr):
if not self.current_child:
return
x = self.current_child["x"]
y = self.current_child["y"]
width = self.tab_width
height = self.tab_height
style = self.get_style()
color = style.base[gtk.STATE_NORMAL]
r = height / 10
if self.tab_pressed == True:
for xoff, yoff, c1, c2 in [(1, 0, HobColors.SLIGHT_DARK, HobColors.DARK), (2, 0, HobColors.GRAY, HobColors.LIGHT_GRAY)]:
cr.set_source_color(gtk.gdk.color_parse(c1))
cr.move_to(x + xoff, y + height + yoff)
cr.line_to(x + xoff, r + yoff)
cr.arc(x + r + xoff, y + r + yoff, r, math.pi, 1.5*math.pi)
cr.move_to(x + r + xoff, y + yoff)
cr.line_to(x + width - r + xoff, y + yoff)
cr.arc(x + width - r + xoff, y + r + yoff, r, 1.5*math.pi, 2*math.pi)
cr.stroke()
cr.set_source_color(gtk.gdk.color_parse(c2))
cr.move_to(x + width + xoff, r + yoff)
cr.line_to(x + width + xoff, y + height + yoff)
cr.line_to(x + xoff, y + height + yoff)
cr.stroke()
x = x + 2
y = y + 2
cr.set_source_rgba(color.red, color.green, color.blue, 1)
cr.move_to(x + r, y)
cr.line_to(x + width - r , y)
cr.arc(x + width - r, y + r, r, 1.5*math.pi, 2*math.pi)
cr.move_to(x + width, r)
cr.line_to(x + width, y + height)
cr.line_to(x, y + height)
cr.line_to(x, r)
cr.arc(x + r, y + r, r, math.pi, 1.5*math.pi)
cr.fill()
def draw_indicator(self, cr, child):
text = ("%d" % child["indicator_number"])
layout = self.create_pango_layout(text)
layout.set_font_description(self.font)
textw, texth = layout.get_pixel_size()
# draw the round background area
tab_x = child["x"]
tab_y = child["y"]
dest_w = int(32 * self.tab_w_ratio)
dest_h = int(32 * self.tab_h_ratio)
if dest_h < self.tab_height:
dest_w = dest_h
# x position is offset(tab_width*3/4 - icon_width/2) + start_pos(tab_x)
x = tab_x + self.tab_width * 3/4 - dest_w/2
y = tab_y + self.tab_height/2 - dest_h/2
r = min(dest_w, dest_h)/2
if not child == self.current_child:
color = cr.set_source_color(gtk.gdk.color_parse(HobColors.DEEP_RED))
else:
color = cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
# check whether the round background area can contain the text
back_round_can_contain_width = float(2 * r * 0.707)
if float(textw) > back_round_can_contain_width:
xoff = (textw - int(back_round_can_contain_width)) / 2
cr.move_to(x + r - xoff, y + r + r)
cr.arc((x + r - xoff), (y + r), r, 0.5*math.pi, 1.5*math.pi)
cr.fill() # left half round
cr.rectangle((x + r - xoff), y, 2 * xoff, 2 * r)
cr.fill() # center rectangle
cr.arc((x + r + xoff), (y + r), r, 1.5*math.pi, 0.5*math.pi)
cr.fill() # right half round
else:
cr.arc((x + r), (y + r), r, 0, 2*math.pi)
cr.fill()
# draw the number text
x = x + (dest_w/2)-(textw/2)
y = y + (dest_h/2) - (texth/2)
cr.move_to(x, y)
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), layout, gtk.gdk.Color(HobColors.WHITE))
def show_indicator_icon(self, child, number):
child["indicator_show"] = True
child["indicator_number"] = number
self.queue_draw()
def hide_indicator_icon(self, child):
child["indicator_show"] = False
self.queue_draw()
def set_blank_size(self, x, y, w, h):
if not self.blank_rectangle or self.blank_rectangle.x != x or self.blank_rectangle.width != w:
self.emit("blank-area-changed", x, y, w, h)
return gtk.gdk.Rectangle(x, y, w, h)
def query_tooltip_cb(self, widget, x, y, keyboardtip, tooltip):
if keyboardtip or (not tooltip):
return False
# check which tab the pointer is over
for child in self.children:
if (child["x"] < x) and (x < child["x"] + self.tab_width) \
and (child["y"] < y) and (y < child["y"] + self.tab_height):
tooltip.set_markup(child["tooltip_markup"])
return True
return False
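The tab bar and the notebook below communicate only through the GObject signals declared above ("tab-switched", "blank-area-changed"); a minimal standalone sketch of that emit/connect pattern, with hypothetical class and handler names:

    import gobject

    class TabSource(gobject.GObject):
        __gsignals__ = {
            "tab-switched": (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE,
                             (gobject.TYPE_INT,)),
        }
        def __init__(self):
            gobject.GObject.__init__(self)

    def on_tab_switched(widget, page):
        print "switching notebook to page", page

    src = TabSource()
    src.connect("tab-switched", on_tab_switched)
    src.emit("tab-switched", 2)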
class HobNotebook(gtk.VBox):
def __init__(self):
gtk.VBox.__init__(self, False, 0)
self.notebook = gtk.Notebook()
self.notebook.set_property('homogeneous', True)
self.notebook.set_property('show-tabs', False)
self.tabbar = HobTabBar()
self.tabbar.connect("tab-switched", self.tab_switched_cb)
self.notebook.connect("page-added", self.page_added_cb)
self.notebook.connect("page-removed", self.page_removed_cb)
self.search = None
self.search_focus = False
self.page_changed = False
self.search_name = ""
self.connect("switch-page", self.page_changed_cb)
self.tb = gtk.Table(1, 100, False)
self.hbox= gtk.HBox(False, 0)
self.hbox.pack_start(self.tabbar, True, True)
self.tb.attach(self.hbox, 0, 100, 0, 1)
self.pack_start(self.tb, False, False)
self.pack_start(self.notebook)
self.show_all()
def page_changed_cb(self, nb, page, page_num):
for p, lbl in enumerate(self.pages):
if p == page_num:
lbl.set_active()
else:
lbl.set_active(False)
def append_page(self, child, tab_label):
self.notebook.set_current_page(self.notebook.append_page(child, tab_label))
if self.search:
self.page_changed = True
self.reset_entry(self.search, page_num)
def set_entry(self, name="Search:"):
for child in self.tb.get_children():
if child:
self.tb.remove(child)
def append_page(self, child, tab_label, tab_tooltip=None):
label = HobTabLabel(tab_label)
if tab_tooltip:
label.set_tooltip_text(tab_tooltip)
label.set_active(False)
self.pages.append(label)
gtk.Notebook.append_page(self, child, label)
hbox_entry = gtk.HBox(False, 0)
hbox_entry.show()
def set_entry(self, names, tips):
self.search = gtk.Entry()
self.search_names = names
self.search_tips = tips
self.search_name = name
style = self.search.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.GRAY, False, False)
self.search.set_style(style)
self.search.set_text(names[0])
self.search.set_tooltip_text(self.search_tips[0])
self.search.props.has_tooltip = True
self.search.set_text(name)
self.search.set_editable(False)
self.search.set_icon_from_stock(gtk.ENTRY_ICON_SECONDARY, gtk.STOCK_CLEAR)
self.search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
self.search.connect("icon-release", self.set_search_entry_clear_cb)
self.search.set_width_chars(30)
self.search.show()
self.align = gtk.Alignment(xalign=1.0, yalign=0.7)
self.align.add(self.search)
self.align.show()
hbox_entry.pack_end(self.align, False, False)
self.tabbar.resize_blank_rectangle()
self.tb.attach(hbox_entry, 75, 100, 0, 1, xpadding=5)
self.tb.attach(self.hbox, 0, 100, 0, 1)
self.tabbar.connect("blank-area-changed", self.blank_area_resize_cb)
self.search.connect("focus-in-event", self.set_search_entry_editable_cb)
self.search.connect("focus-out-event", self.set_search_entry_reset_cb)
self.set_action_widget(self.search, gtk.PACK_END)
self.tb.show()
def show_indicator_icon(self, title, number):
for child in self.pages:
if child.lbl.get_label() == title:
child.set_count(number)
for child in self.tabbar.children:
if child["toggled_page"] == -1:
continue
if child["title"] == title:
self.tabbar.show_indicator_icon(child, number)
def hide_indicator_icon(self, title):
for child in self.pages:
if child.lbl.get_label() == title:
child.set_count(0)
for child in self.tabbar.children:
if child["toggled_page"] == -1:
continue
if child["title"] == title:
self.tabbar.hide_indicator_icon(child)
def tab_switched_cb(self, widget, page):
self.notebook.set_current_page(page)
def page_added_cb(self, notebook, notebook_child, page):
if not notebook:
return
title = notebook.get_tab_label_text(notebook_child)
label = notebook.get_tab_label(notebook_child)
tooltip_markup = label.get_tooltip_markup()
if not title:
return
for child in self.tabbar.children:
if child["title"] == title:
child["toggled_page"] = page
return
self.tabbar.append_tab_child(title, page, tooltip_markup)
def page_removed_cb(self, notebook, notebook_child, page, title=""):
for child in self.tabbar.children:
if child["title"] == title:
child["toggled_page"] = -1
def blank_area_resize_cb(self, widget, request_x, request_y, request_width, request_height):
self.search.set_size_request(request_width, request_height)
def set_search_entry_editable_cb(self, search, event):
self.search_focus = True
search.set_editable(True)
text = search.get_text()
if text in self.search_names:
search.set_text("")
search.set_text("")
style = self.search.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.BLACK, False, False)
search.set_style(style)
def set_search_entry_reset_cb(self, search, event):
page_num = self.get_current_page()
text = search.get_text()
if not text:
self.reset_entry(search, page_num)
def reset_entry(self, entry, page_num):
def reset_entry(self, entry):
style = entry.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.GRAY, False, False)
entry.set_style(style)
entry.set_text(self.search_names[page_num])
entry.set_tooltip_text(self.search_tips[page_num])
entry.set_text(self.search_name)
entry.set_editable(False)
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
def set_search_entry_reset_cb(self, search, event):
self.reset_entry(search)
def set_search_entry_clear_cb(self, search, icon_pos, event):
if search.get_editable() == True:
search.set_text("")
search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
search.grab_focus()
def set_page(self, title):
for child in self.pages:
if child.lbl.get_label() == title:
child.grab_focus()
self.set_current_page(self.pages.index(child))
return
self.reset_entry(search)
class HobWarpCellRendererText(gtk.CellRendererText):
def __init__(self, col_number):
@@ -846,17 +1084,11 @@ class HobCellRendererToggle(gtk.CellRendererToggle):
gtk.CellRendererToggle.__init__(self)
self.ctrl = HobCellRendererController(is_draw_row=True)
self.ctrl.running_mode = self.ctrl.MODE_ONE_SHORT
self.cell_attr = {"fadeout": False, "number_of_children": 0}
self.cell_attr = {"fadeout": False}
def do_render(self, window, widget, background_area, cell_area, expose_area, flags):
if (not self.ctrl) or (not widget):
return
if flags & gtk.CELL_RENDERER_SELECTED:
state = gtk.STATE_SELECTED
else:
state = gtk.STATE_NORMAL
if self.ctrl.is_active():
path = widget.get_path_at_pos(cell_area.x + cell_area.width/2, cell_area.y + cell_area.height/2)
# sometimes the parameters of cell_area can be negative, e.g. when the scroll bar is dragged up or down
@@ -865,23 +1097,14 @@ class HobCellRendererToggle(gtk.CellRendererToggle):
path = path[0]
if path in self.ctrl.running_cell_areas:
cr = window.cairo_create()
color = widget.get_style().base[state]
color = gtk.gdk.Color(HobColors.WHITE)
row_x, _, row_width, _ = widget.get_visible_rect()
border_y = self.get_property("ypad")
self.ctrl.on_draw_fadeinout_cb(cr, color, row_x, cell_area.y - border_y, row_width, \
cell_area.height + border_y * 2, self.cell_attr["fadeout"])
# draw the number of packages in a group
if self.cell_attr["number_of_children"]:
text = "%d pkg" % self.cell_attr["number_of_children"]
pangolayout = widget.create_pango_layout(text)
textw, texth = pangolayout.get_pixel_size()
x = cell_area.x + (cell_area.width/2) - (textw/2)
y = cell_area.y + (cell_area.height/2) - (texth/2)
widget.style.paint_layout(window, state, True, cell_area, widget, "checkbox", x, y, pangolayout)
else:
return gtk.CellRendererToggle.do_render(self, window, widget, background_area, cell_area, expose_area, flags)
return gtk.CellRendererToggle.do_render(self, window, widget, background_area, cell_area, expose_area, flags)
'''delay: normally the delay time is 1000 ms
cell_list: which cells need to be rendered
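# Illustrative only: a minimal standalone sketch of the idea the docstring above
# describes -- a controller remembers which cells still need to be redrawn and drops
# them once the delay (normally about 1000 ms) has elapsed. The names below are
# hypothetical and are not the HobCellRendererController API.
import time

class FadeSketchController(object):
    def __init__(self, delay_ms=1000):
        self.delay = delay_ms / 1000.0
        self.running_cells = {}  # cell key -> animation start time

    def start(self, cell_key):
        self.running_cells[cell_key] = time.time()

    def cells_to_render(self):
        # return the cells whose animation is still running, pruning finished ones
        now = time.time()
        for key in [k for k, t in self.running_cells.items() if now - t >= self.delay]:
            del self.running_cells[key]
        return list(self.running_cells)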

View File

@@ -22,7 +22,6 @@
import gtk
import glib
import re
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobImageButton, HobInfoButton, HobAltButton, HobButton
@@ -34,9 +33,6 @@ from bb.ui.crumbs.hobpages import HobPage
#
class ImageConfigurationPage (HobPage):
__dummy_machine__ = "--select a machine--"
__dummy_image__ = "--select a base image--"
def __init__(self, builder):
super(ImageConfigurationPage, self).__init__(builder, "Image configuration")
@@ -45,8 +41,6 @@ class ImageConfigurationPage (HobPage):
# or manually. If manual, all of the user's recipe and package selections are
# cleared.
self.machine_combo_changed_by_manual = True
self.stopping = False
self.warning_shift = 0
self.create_visual_elements()
def create_visual_elements(self):
@@ -55,6 +49,12 @@ class ImageConfigurationPage (HobPage):
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)
template_button = self.append_toolbar_button(self.toolbar,
"Templates",
hic.ICON_TEMPLATES_DISPLAY_FILE,
hic.ICON_TEMPLATES_HOVER_FILE,
"Load a previously saved template",
self.template_button_clicked_cb)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
@@ -110,10 +110,9 @@ class ImageConfigurationPage (HobPage):
self.show_all()
def update_progress_bar(self, title, fraction, status=None):
if self.stopping == False:
self.progress_bar.update(fraction)
self.progress_bar.set_text(title)
self.progress_bar.set_rcstyle(status)
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def show_info_populating(self):
self._pack_components(pack_config_build_button = False)
@@ -132,45 +131,8 @@ class ImageConfigurationPage (HobPage):
self._pack_components(pack_config_build_button = True)
self.set_config_machine_layout(show_progress_bar = False)
self.set_config_baseimg_layout()
self.set_rcppkg_layout()
self.show_all()
if self.builder.recipe_model.get_selected_image() == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
def add_warnings_bar(self):
# create the warnings bar shown when recipe parsing generates warnings
color = HobColors.KHAKI
warnings_bar = gtk.EventBox()
warnings_bar.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
warnings_bar.set_flags(gtk.CAN_DEFAULT)
warnings_bar.grab_default()
build_stop_tab = gtk.Table(10, 20, True)
warnings_bar.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ALERT_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 0, 2, 0, 10)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
warnings_nb = len(self.builder.parsing_warnings)
if warnings_nb == 1:
label.set_markup("<span size='x-large'><b>1 recipe parsing warning</b></span>")
else:
label.set_markup("<span size='x-large'><b>%s recipe parsing warnings</b></span>" % warnings_nb)
build_stop_tab.attach(label, 2, 12, 0, 10)
view_warnings_button = HobButton("View warnings")
view_warnings_button.connect('clicked', self.view_warnings_button_clicked_cb)
build_stop_tab.attach(view_warnings_button, 15, 19, 1, 9)
return warnings_bar
def disable_warnings_bar(self):
if self.builder.parsing_warnings:
self.warnings_bar.hide_all()
self.builder.parsing_warnings = []
def create_config_machine(self):
self.machine_title = gtk.Label()
@@ -185,6 +147,7 @@ class ImageConfigurationPage (HobPage):
self.machine_title_desc.set_markup(mark)
self.machine_combo = gtk.combo_box_new_text()
self.machine_combo.set_wrap_width(1)
self.machine_combo.connect("changed", self.machine_combo_changed_cb)
icon_file = hic.ICON_LAYERS_DISPLAY_FILE
@@ -198,13 +161,15 @@ class ImageConfigurationPage (HobPage):
markup += "For more on layers, check the <a href=\""
markup += "http://www.yoctoproject.org/docs/current/dev-manual/"
markup += "dev-manual.html#understanding-and-using-layers\">reference manual</a>."
self.layer_info_icon = HobInfoButton("<b>Layers</b>" + "*" + markup, self.get_parent())
# self.progress_box = gtk.HBox(False, 6)
self.layer_info_icon = HobInfoButton(markup, self.get_parent())
self.progress_box = gtk.HBox(False, 6)
self.progress_bar = HobProgressBar()
# self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
self.stop_button = HobAltButton("Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
# self.progress_box.pack_end(stop_button, expand=False, fill=False)
self.progress_box.pack_end(self.stop_button, expand=False, fill=False)
self.machine_separator = gtk.HSeparator()
def set_config_machine_layout(self, show_progress_bar = False):
@@ -214,15 +179,7 @@ class ImageConfigurationPage (HobPage):
self.gtable.attach(self.layer_button, 14, 36, 7, 12)
self.gtable.attach(self.layer_info_icon, 36, 40, 7, 11)
if show_progress_bar:
#self.gtable.attach(self.progress_box, 0, 40, 15, 18)
self.gtable.attach(self.progress_bar, 0, 37, 15, 18)
self.gtable.attach(self.stop_button, 37, 40, 15, 18, 0, 0)
if self.builder.parsing_warnings:
self.warnings_bar = self.add_warnings_bar()
self.gtable.attach(self.warnings_bar, 0, 40, 14, 18)
self.warning_shift = 4
else:
self.warning_shift = 0
self.gtable.attach(self.progress_box, 0, 40, 15, 19)
self.gtable.attach(self.machine_separator, 0, 40, 13, 14)
def create_config_baseimg(self):
@@ -239,74 +196,72 @@ class ImageConfigurationPage (HobPage):
self.image_title_desc.set_markup(mark)
self.image_combo = gtk.combo_box_new_text()
self.image_combo.set_wrap_width(1)
self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)
self.image_desc = gtk.Label()
self.image_desc.set_alignment(0.0, 0.5)
self.image_desc.set_size_request(256, -1)
self.image_desc.set_justify(gtk.JUSTIFY_LEFT)
self.image_desc.set_line_wrap(True)
# button to view recipes
icon_file = hic.ICON_RCIPE_DISPLAY_FILE
hover_file = hic.ICON_RCIPE_HOVER_FILE
self.view_adv_configuration_button = HobImageButton("Advanced configuration",
"Select image types, package formats, etc",
icon_file, hover_file)
self.view_adv_configuration_button.connect("clicked", self.view_adv_configuration_button_clicked_cb)
self.view_recipes_button = HobImageButton("View recipes",
"Add/remove recipes and tasks",
icon_file, hover_file)
self.view_recipes_button.connect("clicked", self.view_recipes_button_clicked_cb)
# button to view packages
icon_file = hic.ICON_PACKAGES_DISPLAY_FILE
hover_file = hic.ICON_PACKAGES_HOVER_FILE
self.view_packages_button = HobImageButton("View packages",
"Add/remove previously built packages",
icon_file, hover_file)
self.view_packages_button.connect("clicked", self.view_packages_button_clicked_cb)
self.image_separator = gtk.HSeparator()
def set_config_baseimg_layout(self):
self.gtable.attach(self.image_title, 0, 40, 15+self.warning_shift, 17+self.warning_shift)
self.gtable.attach(self.image_title_desc, 0, 40, 18+self.warning_shift, 22+self.warning_shift)
self.gtable.attach(self.image_combo, 0, 12, 23+self.warning_shift, 26+self.warning_shift)
self.gtable.attach(self.image_desc, 0, 12, 27+self.warning_shift, 33+self.warning_shift)
self.gtable.attach(self.view_adv_configuration_button, 14, 36, 23+self.warning_shift, 28+self.warning_shift)
self.gtable.attach(self.image_separator, 0, 40, 35+self.warning_shift, 36+self.warning_shift)
self.gtable.attach(self.image_title, 0, 40, 15, 17)
self.gtable.attach(self.image_title_desc, 0, 40, 18, 22)
self.gtable.attach(self.image_combo, 0, 12, 23, 26)
self.gtable.attach(self.image_desc, 13, 38, 23, 28)
self.gtable.attach(self.image_separator, 0, 40, 35, 36)
def set_rcppkg_layout(self):
self.gtable.attach(self.view_recipes_button, 0, 20, 28, 33)
self.gtable.attach(self.view_packages_button, 20, 40, 28, 33)
def create_config_build_button(self):
# Create the "Build packages" and "Build image" buttons at the bottom
button_box = gtk.HBox(False, 6)
# create button "Build image"
self.just_bake_button = HobButton("Build image")
#self.just_bake_button.set_size_request(205, 49)
self.just_bake_button.set_tooltip_text("Build target image")
self.just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
button_box.pack_end(self.just_bake_button, expand=False, fill=False)
just_bake_button = HobButton("Build image")
just_bake_button.set_size_request(205, 49)
just_bake_button.set_tooltip_text("Build target image")
just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
button_box.pack_end(just_bake_button, expand=False, fill=False)
# create button "Edit Image"
self.edit_image_button = HobAltButton("Edit image")
#self.edit_image_button.set_size_request(205, 49)
self.edit_image_button.set_tooltip_text("Edit target image")
self.edit_image_button.connect("clicked", self.edit_image_button_clicked_cb)
button_box.pack_end(self.edit_image_button, expand=False, fill=False)
label = gtk.Label(" or ")
button_box.pack_end(label, expand=False, fill=False)
# create button "Build Packages"
build_packages_button = HobAltButton("Build packages")
build_packages_button.connect("clicked", self.build_packages_button_clicked_cb)
build_packages_button.set_tooltip_text("Build recipes into packages")
button_box.pack_end(build_packages_button, expand=False, fill=False)
return button_box
def stop_button_clicked_cb(self, button):
self.stopping = True
self.progress_bar.set_text("Stopping recipe parsing")
self.progress_bar.set_rcstyle("stop")
self.builder.cancel_parse_sync()
def view_warnings_button_clicked_cb(self, button):
self.builder.show_warning_dialog()
def machine_combo_changed_cb(self, machine_combo):
self.stopping = False
self.builder.parsing_warnings = []
combo_item = machine_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_machine__:
if not combo_item:
return
# remove __dummy_machine__ item from the store list after first user selection
# because it is no longer valid
combo_store = machine_combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_machine__):
machine_combo.remove_text(0)
self.builder.configuration.curr_mach = combo_item
if self.machine_combo_changed_by_manual:
self.builder.configuration.clear_selection()
@@ -317,17 +272,15 @@ class ImageConfigurationPage (HobPage):
self.builder.populate_recipe_package_info_async()
def update_machine_combo(self):
self.disable_warnings_bar()
all_machines = [self.__dummy_machine__] + self.builder.parameters.all_machines
all_machines = self.builder.parameters.all_machines
model = self.machine_combo.get_model()
model.clear()
for machine in all_machines:
self.machine_combo.append_text(machine)
self.machine_combo.set_active(0)
self.machine_combo.set_active(-1)
def switch_machine_combo(self):
self.disable_warnings_bar()
self.machine_combo_changed_by_manual = False
model = self.machine_combo.get_model()
active = 0
@@ -336,15 +289,10 @@ class ImageConfigurationPage (HobPage):
self.machine_combo.set_active(active)
return
active += 1
self.machine_combo.set_active(-1)
if model[0][0] != self.__dummy_machine__:
self.machine_combo.insert_text(0, self.__dummy_machine__)
self.machine_combo.set_active(0)
def update_image_desc(self):
def update_image_desc(self, selected_image):
desc = ""
selected_image = self.image_combo.get_active_text()
if selected_image and selected_image in self.builder.recipe_model.pn_path.keys():
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
@@ -361,15 +309,9 @@ class ImageConfigurationPage (HobPage):
def image_combo_changed_cb(self, combo):
self.builder.window_sensitive(False)
selected_image = self.image_combo.get_active_text()
if not selected_image or (selected_image == self.__dummy_image__):
if not selected_image:
return
# remove __dummy_image__ item from the store list after first user selection
# because it is no longer valid
combo_store = combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_image__):
combo.remove_text(0)
self.builder.customized = False
selected_recipes = []
@@ -377,16 +319,13 @@ class ImageConfigurationPage (HobPage):
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
selected_packages = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_INSTALL).split()
self.update_image_desc()
self.update_image_desc(selected_image)
self.builder.recipe_model.reset()
self.builder.package_model.reset()
self.show_baseimg_selected()
if selected_image == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
glib.idle_add(self.image_combo_changed_idle_cb, selected_image, selected_recipes, selected_packages)
def _image_combo_connect_signal(self):
@@ -403,66 +342,32 @@ class ImageConfigurationPage (HobPage):
# populate image combo
filter = {RecipeListModel.COL_TYPE : ['image']}
image_model = recipe_model.tree_model(filter)
image_model.set_sort_column_id(recipe_model.COL_NAME, gtk.SORT_ASCENDING)
active = 0
active = -1
cnt = 0
white_pattern = []
if self.builder.parameters.image_white_pattern:
for i in self.builder.parameters.image_white_pattern.split():
white_pattern.append(re.compile(i))
black_pattern = []
if self.builder.parameters.image_black_pattern:
for i in self.builder.parameters.image_black_pattern.split():
black_pattern.append(re.compile(i))
black_pattern.append(re.compile("hob-image"))
it = image_model.get_iter_first()
self._image_combo_disconnect_signal()
model = self.image_combo.get_model()
model.clear()
# Set an indicator text in the combo store when it is first opened
if not selected_image:
self.image_combo.append_text(self.__dummy_image__)
cnt = cnt + 1
# append and set active
while it:
path = image_model.get_path(it)
it = image_model.iter_next(it)
image_name = image_model[path][recipe_model.COL_NAME]
if image_name == self.builder.recipe_model.__custom_image__:
if image_name == self.builder.recipe_model.__dummy_image__:
continue
if black_pattern:
allow = True
for pattern in black_pattern:
if pattern.search(image_name):
allow = False
break
elif white_pattern:
allow = False
for pattern in white_pattern:
if pattern.search(image_name):
allow = True
break
else:
allow = True
if allow:
self.image_combo.append_text(image_name)
if image_name == selected_image:
active = cnt
cnt = cnt + 1
self.image_combo.append_text(self.builder.recipe_model.__custom_image__)
if selected_image == self.builder.recipe_model.__custom_image__:
self.image_combo.append_text(image_name)
if image_name == selected_image:
active = cnt
cnt = cnt + 1
self.image_combo.append_text(self.builder.recipe_model.__dummy_image__)
if selected_image == self.builder.recipe_model.__dummy_image__:
active = cnt
self.image_combo.set_active(-1)
self.image_combo.set_active(active)
if active != 0:
if active != -1:
self.show_baseimg_selected()
self._image_combo_connect_signal()
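# Illustrative only: the white/black pattern handling above reduces to "a non-empty
# blacklist is checked first and wins outright, otherwise a non-empty whitelist must
# match, otherwise every image is allowed". A standalone sketch with hypothetical names,
# taking already-compiled regular expressions:
def image_allowed(image_name, white_patterns, black_patterns):
    if black_patterns:
        return not any(p.search(image_name) for p in black_patterns)
    if white_patterns:
        return any(p.search(image_name) for p in white_patterns)
    return True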
@@ -470,29 +375,32 @@ class ImageConfigurationPage (HobPage):
def layer_button_clicked_cb(self, button):
# Create a layer selection dialog
self.builder.show_layer_selection_dialog()
def view_adv_configuration_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:
self.builder.reparse_post_adv_settings()
def view_recipes_button_clicked_cb(self, button):
self.builder.show_recipes()
def view_packages_button_clicked_cb(self, button):
self.builder.show_packages()
def just_bake_button_clicked_cb(self, button):
self.builder.parsing_warnings = []
self.builder.just_bake()
def edit_image_button_clicked_cb(self, button):
self.builder.configuration.initial_selected_image = self.builder.configuration.selected_image
self.builder.show_recipes()
def build_packages_button_clicked_cb(self, button):
self.builder.build_packages()
def template_button_clicked_cb(self, button):
response, path = self.builder.show_load_template_dialog()
if not response:
return
if path:
self.builder.load_template(path)
def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:

View File

@@ -25,97 +25,34 @@ import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobViewTable, HobAltButton, HobButton
from bb.ui.crumbs.hobpages import HobPage
import subprocess
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
#
# ImageDetailsPage
#
class ImageDetailsPage (HobPage):
__columns__ = [{
'col_name' : 'Image name',
'col_id' : 0,
'col_style': 'text',
'col_min' : 500,
'col_max' : 500
}, {
'col_name' : 'Image size',
'col_id' : 1,
'col_style': 'text',
'col_min' : 100,
'col_max' : 100
}, {
'col_name' : 'Select',
'col_id' : 2,
'col_style': 'radio toggle',
'col_min' : 100,
'col_max' : 100
}]
class DetailBox (gtk.EventBox):
def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, button2=None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
style = self.get_style().copy()
style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(color, False, False)
self.set_style(style)
self.row = gtk.Table(1, 2, False)
self.row.set_border_width(10)
self.add(self.row)
total_rows = 0
if widget:
total_rows = 10
if varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
self.table.set_size_request(100, -1)
self.row.attach(self.table, 0, 1, 0, 1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if widget:
self.table.attach(widget, colid, 20, 0, 10)
rowid = 10
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
# pack the button on the right
if button:
self.bbox = gtk.VBox()
self.bbox.pack_start(button, expand=True, fill=False)
if button2:
self.bbox.pack_start(button2, expand=True, fill=False)
self.bbox.set_size_request(150,-1)
self.row.attach(self.bbox, 1, 2, 0, 1, xoptions=gtk.FILL, yoptions=gtk.EXPAND)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
if not isinstance(self.line_widgets[variable], gtk.Label):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs
wrap_width_chars = 75
outputs = ""
tmps = inputs
less_chars = len(inputs)
while (less_chars - wrap_width_chars) > 0:
less_chars -= wrap_width_chars
outputs += tmps[:wrap_width_chars] + "\n "
tmps = inputs[less_chars:]
outputs += tmps
return outputs
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
return markup
def text2label(self, variable, value):
# append the name:value to the left box
# such as "Name: hob-core-minimal-variant-2011-12-15-beagleboard"
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup(self.format_line(variable, value))
return label
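# Illustrative only: a straightforward version of the 75-character chunking that
# wrap_line() above aims for before the value is dropped into the Pango markup.
# The helper name is hypothetical; the continuation mirrors the "\n " used above.
def wrap_value(value, width=75):
    chunks = [value[i:i + width] for i in range(0, len(value), width)]
    return "\n ".join(chunks)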
class BuildDetailBox (gtk.EventBox):
def __init__(self, varlist = None, vallist = None, icon = None, color = HobColors.LIGHT_GRAY):
def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
@@ -124,30 +61,34 @@ class ImageDetailsPage (HobPage):
self.set_style(style)
self.hbox = gtk.HBox()
self.hbox.set_border_width(10)
self.hbox.set_border_width(15)
self.add(self.hbox)
total_rows = 0
if varlist and vallist:
if widget:
row = 1
elif varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
row = len(varlist)
self.table = gtk.Table(row, 20, True)
self.table.set_size_request(100, -1)
self.hbox.pack_start(self.table, expand=True, fill=True, padding=15)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
if widget:
self.table.attach(widget, colid, 20, 0, 1)
elif varlist and vallist:
for line in range(0, row):
self.line_widgets[varlist[line]] = self.text2label(varlist[line], vallist[line])
self.table.attach(self.line_widgets[varlist[line]], colid, 20, line, line + 1)
# pack the button on the right
if button:
self.hbox.pack_end(button, expand=False, fill=False)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
@@ -155,23 +96,9 @@ class ImageDetailsPage (HobPage):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs
wrap_width_chars = 75
outputs = ""
tmps = inputs
less_chars = len(inputs)
while (less_chars - wrap_width_chars) > 0:
less_chars -= wrap_width_chars
outputs += tmps[:wrap_width_chars] + "\n "
tmps = inputs[less_chars:]
outputs += tmps
return outputs
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % value
return markup
def text2label(self, variable, value):
@@ -185,7 +112,7 @@ class ImageDetailsPage (HobPage):
def __init__(self, builder):
super(ImageDetailsPage, self).__init__(builder, "Image details")
self.image_store = []
self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.button_ids = {}
self.details_bottom_buttons = gtk.HBox(False, 6)
self.create_visual_elements()
@@ -197,6 +124,12 @@ class ImageDetailsPage (HobPage):
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)
template_button = self.append_toolbar_button(self.toolbar,
"Templates",
hic.ICON_TEMPLATES_DISPLAY_FILE,
hic.ICON_TEMPLATES_HOVER_FILE,
"Load a previously saved template",
self.template_button_clicked_cb)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
@@ -224,30 +157,27 @@ class ImageDetailsPage (HobPage):
self.details_bottom_buttons.remove(child)
def show_page(self, step):
self.build_succeeded = (step == self.builder.IMAGE_GENERATED)
build_succeeded = (step == self.builder.IMAGE_GENERATED)
image_addr = self.builder.parameters.image_addr
image_names = self.builder.parameters.image_names
if self.build_succeeded:
if build_succeeded:
machine = self.builder.configuration.curr_mach
base_image = self.builder.recipe_model.get_selected_image()
layers = self.builder.configuration.layers
pkg_num = "%s" % len(self.builder.package_model.get_selected_packages())
log_file = self.builder.current_logfile
else:
pkg_num = "N/A"
log_file = None
# remove
for button_id, button in self.button_ids.items():
button.disconnect(button_id)
self._remove_all_widget()
# repack
self.pack_start(self.details_top_buttons, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.build_result = None
if self.build_succeeded and self.builder.current_step == self.builder.IMAGE_GENERATING:
if build_succeeded:
# building is the previous step
icon = gtk.Image()
pixmap_path = hic.ICON_INDI_CONFIRM_FILE
@@ -256,89 +186,49 @@ class ImageDetailsPage (HobPage):
icon.set_from_pixbuf(pix_buffer)
varlist = [""]
vallist = ["Your image is ready"]
self.build_result = self.BuildDetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.build_result = self.DetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
self.buttonlist = ["Build new image", "Run image", "Deploy image"]
# create the buttons at the bottom first because the buttons are used in apply_button_per_image()
if build_succeeded:
self.buttonlist = ["Build new image", "Save as template", "Run image", "Deploy image"]
else: # get to this page from "My images"
self.buttonlist = ["Build new image", "Run image", "Deploy image"]
# Name
self.image_store = []
self.toggled_image = ""
self.image_store.clear()
default_toggled = False
default_image_size = 0
self.num_toggled = 0
i = 0
for image_name in image_names:
image_size = HobPage._size_to_string(os.stat(os.path.join(image_addr, image_name)).st_size)
image_attr = ("run" if (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) else \
("deploy" if self.test_deployable(image_name) else ""))
is_toggled = (image_attr != "")
if not self.toggled_image:
if not default_toggled:
default_toggled = (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) \
or self.test_deployable(image_name)
if i == (len(image_names) - 1):
is_toggled = True
if is_toggled:
default_toggled = True
self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, default_toggled)
if default_toggled:
default_image_size = image_size
self.toggled_image = image_name
split_stuff = image_name.split('.')
if "rootfs" in split_stuff:
image_type = image_name[(len(split_stuff[0]) + len(".rootfs") + 1):]
self.create_bottom_buttons(self.buttonlist, image_name)
else:
image_type = image_name[(len(split_stuff[0]) + 1):]
self.image_store.append({'name': image_name,
'type': image_type,
'size': image_size,
'is_toggled': is_toggled,
'action_attr': image_attr,})
self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, False)
i = i + 1
self.num_toggled += is_toggled
is_runnable = self.create_bottom_buttons(self.buttonlist, self.toggled_image)
# Generated image files info
varlist = ["Name: ", "Files created: ", "Directory: "]
vallist = []
vallist.append(image_name.split('.')[0])
vallist.append(', '.join(fileitem['type'] for fileitem in self.image_store))
vallist.append(image_addr)
image_table = HobViewTable(self.__columns__)
image_table.set_model(self.image_store)
image_table.connect("toggled", self.toggled_cb)
view_files_button = HobAltButton("View files")
view_files_button.connect("clicked", self.view_files_clicked_cb, image_addr)
view_files_button.set_tooltip_text("Open the directory containing the image files")
open_log_button = None
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.image_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=view_files_button, button2=open_log_button)
self.box_group_area.pack_start(self.image_detail, expand=False, fill=True)
# The default kernel box for the qemu images
self.sel_kernel = ""
self.kernel_detail = None
if 'qemu' in image_name:
self.sel_kernel = self.get_kernel_file_name()
# varlist = ["Kernel: "]
# vallist = []
# vallist.append(self.sel_kernel)
# change_kernel_button = HobAltButton("Change")
# change_kernel_button.connect("clicked", self.change_kernel_cb)
# change_kernel_button.set_tooltip_text("Change qemu kernel file")
# self.kernel_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=change_kernel_button)
# self.box_group_area.pack_start(self.kernel_detail, expand=True, fill=True)
self.image_detail = self.DetailBox(widget=image_table, button=view_files_button)
self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)
# Machine, Base image and Layers
layer_num_limit = 15
varlist = ["Machine: ", "Base image: ", "Layers: "]
vallist = []
self.setting_detail = None
if self.build_succeeded:
if build_succeeded:
vallist.append(machine)
vallist.append(base_image)
i = 0
@@ -362,41 +252,29 @@ class ImageDetailsPage (HobPage):
edit_config_button.set_tooltip_text("Edit machine, base image and recipes")
edit_config_button.connect("clicked", self.edit_config_button_clicked_cb)
self.setting_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_config_button)
self.box_group_area.pack_start(self.setting_detail, expand=True, fill=True)
self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)
# Packages included, and Total image size
varlist = ["Packages included: ", "Total image size: "]
vallist = []
vallist.append(pkg_num)
vallist.append(default_image_size)
if self.build_succeeded:
if build_succeeded:
edit_packages_button = HobAltButton("Edit packages")
edit_packages_button.set_tooltip_text("Edit the packages included in your image")
edit_packages_button.connect("clicked", self.edit_packages_button_clicked_cb)
else: # get to this page from "My images"
edit_packages_button = None
self.package_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_packages_button)
self.box_group_area.pack_start(self.package_detail, expand=True, fill=True)
self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)
# pack the buttons at the bottom; at this point they have already been created.
if self.build_succeeded:
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
else: # for "My images" page
self.details_separator = gtk.HSeparator()
self.box_group_area.pack_start(self.details_separator, expand=False, fill=False)
self.box_group_area.pack_start(self.details_bottom_buttons, expand=False, fill=False)
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
self.show_all()
if self.kernel_detail and (not is_runnable):
self.kernel_detail.hide()
def view_files_clicked_cb(self, button, image_addr):
subprocess.call("xdg-open /%s" % image_addr, shell=True)
def open_log_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
os.system("xdg-open /%s" % image_addr)
def refresh_package_detail_box(self, image_size):
self.package_detail.update_line_widgets("Total image size: ", image_size)
@@ -418,8 +296,6 @@ class ImageDetailsPage (HobPage):
return mach_runnable
def test_deployable(self, image_name):
if self.builder.configuration.curr_mach.startswith("qemu"):
return False
deployable = False
for t in self.builder.parameters.deployable_image_types:
if image_name.endswith(t):
@@ -427,121 +303,49 @@ class ImageDetailsPage (HobPage):
break
return deployable
def get_kernel_file_name(self, kernel_addr=""):
kernel_name = ""
if not kernel_addr:
kernel_addr = self.builder.parameters.image_addr
files = [f for f in os.listdir(kernel_addr) if f[0] <> '.']
for check_file in files:
if check_file.endswith(".bin"):
name_splits = check_file.split(".")[0]
if self.builder.parameters.kernel_image_type in name_splits.split("-"):
kernel_name = check_file
break
return kernel_name
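# Illustrative only: a standalone sketch of the kernel lookup above -- pick a non-hidden
# "*.bin" file whose dash-separated base name contains the configured kernel image type.
# Function and argument names are hypothetical.
import os

def find_kernel_file(directory, kernel_image_type):
    for name in os.listdir(directory):
        if name.startswith('.') or not name.endswith('.bin'):
            continue
        base = name.split('.')[0]
        if kernel_image_type in base.split('-'):
            return name
    return ""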
def show_builded_images_dialog(self, widget, primary_action=""):
title = primary_action if primary_action else "Your builded images"
dialog = CrumbsDialog(title, self.builder,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
dialog.set_border_width(12)
label = gtk.Label()
label.set_use_markup(True)
label.set_alignment(0.0, 0.5)
label.set_padding(12,0)
if primary_action == "Run image":
label.set_markup("<span font_desc='12'>Select the image file you want to run:</span>")
elif primary_action == "Deploy image":
label.set_markup("<span font_desc='12'>Select the image file you want to deploy:</span>")
else:
label.set_markup("<span font_desc='12'>Select the image file you want to %s</span>" % primary_action)
dialog.vbox.pack_start(label, expand=False, fill=False)
# filter created images by their action attribute (deploy or run)
action_attr = ""
action_images = []
for fileitem in self.image_store:
action_attr = fileitem['action_attr']
if (action_attr == 'run' and primary_action == "Run image") \
or (action_attr == 'deploy' and primary_action == "Deploy image"):
action_images.append(fileitem)
# pack the corresponding 'runnable' or 'deploy' radio buttons if there is more than one file.
# assume that, by design, a single build result never contains both 'deploy' and
# 'runnable' files.
curr_row = 0
rows = (len(action_images)) if len(action_images) < 10 else 10
table = gtk.Table(rows, 10, True)
table.set_row_spacings(6)
table.set_col_spacing(0, 12)
table.set_col_spacing(5, 12)
sel_parent_btn = None
for fileitem in action_images:
sel_btn = gtk.RadioButton(sel_parent_btn, fileitem['type'])
sel_parent_btn = sel_btn if not sel_parent_btn else sel_parent_btn
sel_btn.set_active(fileitem['is_toggled'])
sel_btn.connect('toggled', self.table_selected_cb, fileitem)
if curr_row < 10:
table.attach(sel_btn, 0, 4, curr_row, curr_row + 1, xpadding=24)
else:
table.attach(sel_btn, 5, 9, curr_row - 10, curr_row - 9, xpadding=24)
curr_row += 1
dialog.vbox.pack_start(table, expand=False, fill=False, padding=6)
button = dialog.add_button("Cancel", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(button)
if primary_action:
button = dialog.add_button(primary_action, gtk.RESPONSE_YES)
HobButton.style_button(button)
dialog.show_all()
response = dialog.run()
dialog.destroy()
if response != gtk.RESPONSE_YES:
def toggled_cb(self, table, cell, path, columnid, tree):
model = tree.get_model()
if not model:
return
iter = model.get_iter_first()
while iter:
rowpath = model.get_path(iter)
model[rowpath][columnid] = False
iter = model.iter_next(iter)
for fileitem in self.image_store:
if fileitem['is_toggled']:
if fileitem['action_attr'] == 'run':
self.builder.runqemu_image(fileitem['name'], self.sel_kernel)
elif fileitem['action_attr'] == 'deploy':
self.builder.deploy_image(fileitem['name'])
model[path][columnid] = True
self.refresh_package_detail_box(model[path][1])
def table_selected_cb(self, tbutton, image):
image['is_toggled'] = tbutton.get_active()
if image['is_toggled']:
self.toggled_image = image['name']
image_name = model[path][0]
def change_kernel_cb(self, widget):
kernel_path = self.builder.show_load_kernel_dialog()
if kernel_path and self.kernel_detail:
import os.path
self.sel_kernel = os.path.basename(kernel_path)
markup = self.kernel_detail.format_line("Kernel: ", self.sel_kernel)
label = ((self.kernel_detail.get_children()[0]).get_children()[0]).get_children()[0]
label.set_markup(markup)
# remove
for button_id, button in self.button_ids.items():
button.disconnect(button_id)
self._remove_all_widget()
# repack
self.pack_start(self.details_top_buttons, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
if self.build_result:
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)
if self.setting_detail:
self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)
self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)
self.create_bottom_buttons(self.buttonlist, image_name)
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
self.show_all()
def create_bottom_buttons(self, buttonlist, image_name):
# Create the buttons at the bottom
created = False
packed = False
self.button_ids = {}
is_runnable = False
# create button "Deploy image"
name = "Deploy image"
if name in buttonlist and self.test_deployable(image_name):
deploy_button = HobButton('Deploy image')
#deploy_button.set_size_request(205, 49)
deploy_button.set_size_request(205, 49)
deploy_button.set_tooltip_text("Burn a live image to a USB drive or flash memory")
deploy_button.set_flags(gtk.CAN_DEFAULT)
button_id = deploy_button.connect("clicked", self.deploy_button_clicked_cb)
@@ -554,15 +358,15 @@ class ImageDetailsPage (HobPage):
if name in buttonlist and self.test_type_runnable(image_name) and self.test_mach_runnable(image_name):
if created == True:
# separator
#label = gtk.Label(" or ")
#self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
label = gtk.Label(" or ")
self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
# create button "Run image"
run_button = HobAltButton("Run image")
else:
# create button "Run image" as the primary button
run_button = HobButton("Run image")
#run_button.set_size_request(205, 49)
run_button.set_size_request(205, 49)
run_button.set_flags(gtk.CAN_DEFAULT)
packed = True
run_button.set_tooltip_text("Start up an image with qemu emulator")
@@ -570,41 +374,62 @@ class ImageDetailsPage (HobPage):
self.button_ids[button_id] = run_button
self.details_bottom_buttons.pack_end(run_button, expand=False, fill=False)
created = True
is_runnable = True
if not packed:
box = gtk.HBox(False, 6)
box.show()
subbox = gtk.HBox(False, 0)
subbox.set_size_request(205, 49)
subbox.show()
box.add(subbox)
self.details_bottom_buttons.pack_end(box, False, False)
name = "Save as template"
if name in buttonlist:
if created == True:
# separator
label = gtk.Label(" or ")
self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
# create button "Save as template"
save_button = HobAltButton("Save as template")
save_button.set_tooltip_text("Save the image configuration for reuse")
button_id = save_button.connect("clicked", self.save_button_clicked_cb)
self.button_ids[button_id] = save_button
self.details_bottom_buttons.pack_end(save_button, expand=False, fill=False)
create = True
name = "Build new image"
if name in buttonlist:
# create button "Build new image"
if packed:
build_new_button = HobAltButton("Build new image")
else:
build_new_button = HobButton("Build new image")
build_new_button.set_flags(gtk.CAN_DEFAULT)
#build_new_button.set_size_request(205, 49)
self.details_bottom_buttons.pack_end(build_new_button, expand=False, fill=False)
build_new_button = HobAltButton("Build new image")
build_new_button.set_tooltip_text("Create a new image from scratch")
button_id = build_new_button.connect("clicked", self.build_new_button_clicked_cb)
self.button_ids[button_id] = build_new_button
self.details_bottom_buttons.pack_start(build_new_button, expand=False, fill=False)
return is_runnable
def _get_selected_image(self):
image_name = ""
iter = self.image_store.get_iter_first()
while iter:
path = self.image_store.get_path(iter)
if self.image_store[path][2]:
image_name = self.image_store[path][0]
break
iter = self.image_store.iter_next(iter)
return image_name
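# Illustrative only: the image_store above holds one row per generated image file as
# (image name, human-readable size, selected flag), and _get_selected_image() returns
# the name of the first selected row. A plain-Python equivalent, with a list of tuples
# standing in for the gtk.ListStore:
def get_selected_image(rows):
    for name, size, selected in rows:
        if selected:
            return name
    return ""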
def save_button_clicked_cb(self, button):
self.builder.show_save_template_dialog()
def deploy_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Deploy image")
self.set_sensitive(True)
else:
self.builder.deploy_image(self.toggled_image)
image_name = self._get_selected_image()
self.builder.deploy_image(image_name)
def run_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Run image")
self.set_sensitive(True)
else:
self.builder.runqemu_image(self.toggled_image, self.sel_kernel)
image_name = self._get_selected_image()
self.builder.runqemu_image(image_name)
def build_new_button_clicked_cb(self, button):
self.builder.initiate_new_build_async()
@@ -615,12 +440,19 @@ class ImageDetailsPage (HobPage):
def edit_packages_button_clicked_cb(self, button):
self.builder.show_packages(ask=False)
def template_button_clicked_cb(self, button):
response, path = self.builder.show_load_template_dialog()
if not response:
return
if path:
self.builder.load_template(path)
def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:

View File

@@ -34,18 +34,22 @@ class PackageSelectionPage (HobPage):
pages = [
{
'name' : 'Included packages',
'tooltip' : 'The packages currently included for your image',
'filter' : { PackageListModel.COL_INC : [True] },
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'name' : 'Included',
'filter' : { PackageListModel.COL_INC : [True] },
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Brought in by',
'col_id' : PackageListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 350,
'expand' : 'True'
}, {
'col_name' : 'Size',
'col_id' : PackageListModel.COL_SIZE,
@@ -53,20 +57,6 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Brought in by (+others)',
'col_id' : PackageListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 350,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
@@ -75,12 +65,9 @@ class PackageSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'All packages',
'tooltip' : 'All packages that have been built',
'filter' : {},
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'name' : 'All packages',
'filter' : {},
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
@@ -94,13 +81,6 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
@@ -110,151 +90,84 @@ class PackageSelectionPage (HobPage):
}]
}
]
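# Illustrative only: each entry in 'pages' above pairs a tab name with a filter dict and
# the columns to display; the filter maps a model column to the values a row must carry
# to be listed (for example COL_INC: [True] keeps only packages already included in the
# image). A minimal sketch of applying such a filter to plain dict rows, with
# hypothetical names:
def filter_rows(rows, filter_spec):
    def matches(row):
        return all(row[col] in allowed for col, allowed in filter_spec.items())
    return [row for row in rows if matches(row)]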
(INCLUDED,
ALL) = range(2)
def __init__(self, builder):
super(PackageSelectionPage, self).__init__(builder, "Edit packages")
super(PackageSelectionPage, self).__init__(builder, "Packages")
# set invisible members
# set invisiable members
self.recipe_model = self.builder.recipe_model
self.package_model = self.builder.package_model
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.label = gtk.Label("Packages included: 0\nSelected packages size: 0 MB")
self.eventbox = self.add_onto_top_bar(self.label, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
# set visiable members
self.ins = HobNotebook()
self.tables = [] # we need to modify the table when the dialog is shown
search_names = []
search_tips = []
# append the tab
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
tab = HobViewTable(columns)
filter = page['filter']
sort_model = self.package_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
if name == "Included packages":
tab.set_model(self.package_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "All packages":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
self.ins.append_page(tab, page['name'], page['tooltip'])
label = gtk.Label(page['name'])
self.ins.append_page(tab, label)
self.tables.append(tab)
self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
self.ins.set_entry("Search packages:")
# set the search entry for each table
for tab in self.tables:
tab.set_search_entry(0, self.ins.search)
# add all into the dialog
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
self.button_box = gtk.HBox(False, 6)
self.box_group_area.pack_start(self.button_box, expand=False, fill=False)
button_box = gtk.HBox(False, 6)
self.box_group_area.pack_start(button_box, expand=False, fill=False)
self.build_image_button = HobButton('Build image')
#self.build_image_button.set_size_request(205, 49)
self.build_image_button.set_size_request(205, 49)
self.build_image_button.set_tooltip_text("Build target image")
self.build_image_button.set_flags(gtk.CAN_DEFAULT)
self.build_image_button.grab_default()
self.build_image_button.connect("clicked", self.build_image_clicked_cb)
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
button_box.pack_end(self.build_image_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)
def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[PackageListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.package_model.tree_model(filter, search_data=text))
if self.package_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)
button_box.pack_start(self.back_button, expand=False, fill=False)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'binb': '' , 'name': '', 'size':'', 'recipe':'', 'files_list':''}
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_NAME)
properties['size'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_SIZE)
properties['recipe'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_RCP)
properties['files_list'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_FLIST)
self.builder.show_recipe_property_dialog(properties)
def open_log_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def show_page(self, log_file):
children = self.button_box.get_children() or []
for child in children:
self.button_box.remove(child)
# re-pack the buttons as requested; add the 'Open log' button if the build succeeded
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.button_box.pack_end(open_log_button, expand=False, fill=False)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
self.show_all()
if path: # else activation is likely a removal
binb = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
if binb:
self.builder.show_binb_dialog(binb)
def build_image_clicked_cb(self, button):
self.builder.parsing_warnings = []
self.builder.build_image()
def back_button_clicked_cb(self, button):
if self.builder.previous_step == self.builder.IMAGE_GENERATED:
self.builder.restore_initial_selected_packages()
self.refresh_selection()
self.builder.show_image_details()
else:
self.builder.show_configuration()
self.builder.show_configuration()
def _expand_all(self):
for tab in self.tables:
tab.table_tree.expand_all()
def refresh_selection(self):
self._expand_all()
self.builder.configuration.selected_packages = self.package_model.get_selected_packages()
self.builder.configuration.user_selected_packages = self.package_model.get_user_selected_packages()
selected_packages_num = len(self.builder.configuration.selected_packages)
@@ -262,23 +175,23 @@ class PackageSelectionPage (HobPage):
selected_packages_size_str = HobPage._size_to_string(selected_packages_size)
image_overhead_factor = self.builder.configuration.image_overhead_factor
image_rootfs_size = self.builder.configuration.image_rootfs_size / 1024 # image_rootfs_size is KB
image_extra_size = self.builder.configuration.image_extra_size / 1024 # image_extra_size is KB
image_rootfs_size = self.builder.configuration.image_rootfs_size * 1024 # image_rootfs_size is KB
image_extra_size = self.builder.configuration.image_extra_size * 1024 # image_extra_size is KB
base_size = image_overhead_factor * selected_packages_size
image_total_size = max(base_size, image_rootfs_size) + image_extra_size
if "zypper" in self.builder.configuration.selected_packages:
image_total_size += (51200 * 1024)
image_total_size_str = HobPage._size_to_string(image_total_size)
self.label.set_label("Packages included: %s\nSelected packages size: %s\nTotal image size: %s" %
self.label.set_text("Packages included: %s\nSelected packages size: %s\nTotal image size: %s" %
(selected_packages_num, selected_packages_size_str, image_total_size_str))
self.ins.show_indicator_icon("Included packages", selected_packages_num)
self.ins.show_indicator_icon("Included", selected_packages_num)
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.package_model.path_included(path):
self.package_model.include_item(item_path=path, binb="User Selected")
else:
if pagename == "Included packages":
if pagename == "Included":
self.pre_fadeout_checkout_include(view_tree)
self.package_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
@@ -288,8 +201,7 @@ class PackageSelectionPage (HobPage):
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.initial_selected_image = self.builder.configuration.selected_image
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.configuration.selected_image = self.recipe_model.__dummy_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
@@ -335,7 +247,3 @@ class PackageSelectionPage (HobPage):
def after_fadeout_checkin_include(self, table, ctrl, cell, tree):
tree.set_model(self.package_model.tree_model(self.pages[0]['filter']))
tree.expand_all()
def set_packages_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)

View File

@@ -11,9 +11,6 @@ class ProgressBar(gtk.Dialog):
self.vbox.pack_start(self.progress)
self.show_all()
def set_text(self, msg):
self.progress.set_text(msg)
def update(self, x, y):
self.progress.set_fraction(float(x)/float(y))
self.progress.set_text("%2d %%" % (x*100/y))
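# Illustrative only: update() above drives the bar with x steps done out of y total; note
# that "%2d %%" % (x*100/y) relies on Python 2 integer division, so the displayed
# percentage is truncated. A standalone sketch with hypothetical names:
def format_progress(done, total):
    fraction = float(done) / float(total)
    percent = done * 100 // total  # truncating division, matching the text above
    return fraction, "%2d %%" % percent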

View File

@@ -43,11 +43,6 @@ class HobProgressBar (gtk.ProgressBar):
text += " %.0f%%" % self.percentage
self.set_text(text)
def set_stop_title(self, text=None):
if not text:
text = ""
self.set_text(text)
def reset(self):
self.set_fraction(0)
self.set_text("")

View File

@@ -33,19 +33,24 @@ from bb.ui.crumbs.hobpages import HobPage
class RecipeSelectionPage (HobPage):
pages = [
{
'name' : 'Included recipes',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'name' : 'Included',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'task'] },
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Brought in by',
'col_id' : RecipeListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
@@ -53,13 +58,6 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Brought in by (+others)',
'col_id' : RecipeListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -68,25 +66,16 @@ class RecipeSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'All recipes',
'tooltip' : 'All recipes in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'name' : 'All recipes',
'tooltip' : 'All recipes available in the Yocto Project',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'License',
'col_id' : RecipeListModel.COL_LIC,
@@ -94,6 +83,13 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -102,18 +98,23 @@ class RecipeSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'Package Groups',
'tooltip' : 'All package groups in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['packagegroup'] },
'search' : 'Search package groups by name',
'searchtip' : 'Enter a package group name to find it',
'columns' : [{
'col_name' : 'Package group name',
'name' : 'Tasks',
'tooltip' : 'All tasks available in the Yocto Project',
'filter' : { RecipeListModel.COL_TYPE : ['task'] },
'columns' : [{
'col_name' : 'Task name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Description',
'col_id' : RecipeListModel.COL_DESC,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -123,59 +124,48 @@ class RecipeSelectionPage (HobPage):
}]
}
]
(INCLUDED,
ALL,
TASKS) = range(3)
def __init__(self, builder = None):
super(RecipeSelectionPage, self).__init__(builder, "Step 1 of 2: Edit recipes")
super(RecipeSelectionPage, self).__init__(builder, "Recipes")
# set invisible members
# set invisiable members
self.recipe_model = self.builder.recipe_model
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.eventbox = self.add_onto_top_bar(None, 73)
self.label = gtk.Label()
self.eventbox = self.add_onto_top_bar(self.label, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
# set visiable members
self.ins = HobNotebook()
self.tables = [] # we need modify table when the dialog is shown
search_names = []
search_tips = []
# append the tabs in order
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
tab = HobViewTable(columns)
filter = page['filter']
sort_model = self.recipe_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
if name == "Included recipes":
tab.set_model(self.recipe_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "Package Groups":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "All recipes":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.button_click_cb)
self.ins.append_page(tab, page['name'], page['tooltip'])
label = gtk.Label(page['name'])
label.set_selectable(False)
label.set_tooltip_text(page['tooltip'])
self.ins.append_page(tab, label)
self.tables.append(tab)
self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
self.ins.set_entry("Search recipes:")
# set the search entry for each table
for tab in self.tables:
search_tip = "Enter a recipe's or task's name to find it"
self.ins.search.set_tooltip_text(search_tip)
self.ins.search.props.has_tooltip = True
tab.set_search_entry(0, self.ins.search)
# add all into the window
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
@@ -184,83 +174,42 @@ class RecipeSelectionPage (HobPage):
self.box_group_area.pack_end(button_box, expand=False, fill=False)
self.build_packages_button = HobButton('Build packages')
#self.build_packages_button.set_size_request(205, 49)
self.build_packages_button.set_size_request(205, 49)
self.build_packages_button.set_tooltip_text("Build selected recipes into packages")
self.build_packages_button.set_flags(gtk.CAN_DEFAULT)
self.build_packages_button.grab_default()
self.build_packages_button.connect("clicked", self.build_packages_clicked_cb)
button_box.pack_end(self.build_packages_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
button_box.pack_end(self.back_button, expand=False, fill=False)
def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)
def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[RecipeListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.recipe_model.tree_model(filter, search_data=text))
if self.recipe_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)
button_box.pack_start(self.back_button, expand=False, fill=False)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'summary': '', 'name': '', 'version': '', 'revision': '', 'binb': '', 'group': '', 'license': '', 'homepage': '', 'bugtracker': '', 'description': ''}
properties['summary'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_SUMMARY)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_NAME)
properties['version'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_VERSION)
properties['revision'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_REVISION)
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BINB)
properties['group'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_GROUP)
properties['license'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_LIC)
properties['homepage'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_HOMEPAGE)
properties['bugtracker'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BUGTRACKER)
properties['description'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_DESC)
self.builder.show_recipe_property_dialog(properties)
if path: # else activation is likely a removal
binb = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BINB)
if binb:
self.builder.show_binb_dialog(binb)
def build_packages_clicked_cb(self, button):
self.builder.build_packages()
def back_button_clicked_cb(self, button):
self.builder.recipe_model.set_selected_image(self.builder.configuration.initial_selected_image)
self.builder.image_configuration_page.update_image_combo(self.builder.recipe_model, self.builder.configuration.initial_selected_image)
self.builder.image_configuration_page.update_image_desc()
self.builder.show_configuration()
def refresh_selection(self):
self.builder.configuration.selected_image = self.recipe_model.get_selected_image()
_, self.builder.configuration.selected_recipes = self.recipe_model.get_selected_recipes()
self.ins.show_indicator_icon("Included recipes", len(self.builder.configuration.selected_recipes))
self.label.set_text("Recipes included: %s" % len(self.builder.configuration.selected_recipes))
self.ins.show_indicator_icon("Included", len(self.builder.configuration.selected_recipes))
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.recipe_model.path_included(path):
self.recipe_model.include_item(item_path=path, binb="User Selected", image_contents=False)
else:
if pagename == "Included recipes":
if pagename == "Included":
self.pre_fadeout_checkout_include(view_tree)
self.recipe_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
@@ -270,7 +219,7 @@ class RecipeSelectionPage (HobPage):
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.configuration.selected_image = self.recipe_model.__dummy_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
@@ -292,7 +241,7 @@ class RecipeSelectionPage (HobPage):
# Check out a model which base on the column COL_FADE_INC,
# it's save the prev state of column COL_INC before do exclude_item
filter = { RecipeListModel.COL_FADE_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] }
RecipeListModel.COL_TYPE : ['recipe', 'task'] }
new_model = self.recipe_model.tree_model(filter, excluded_items_ahead=True)
tree.set_model(new_model)
@@ -314,6 +263,3 @@ class RecipeSelectionPage (HobPage):
def after_fadeout_checkin_include(self, table, ctrl, cell, tree):
tree.set_model(self.recipe_model.tree_model(self.pages[0]['filter']))
def set_recipe_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)

View File

@@ -46,7 +46,7 @@ class RunningBuildModel (gtk.TreeStore):
color = model.get(it, self.COL_COLOR)[0]
if not color:
return False
if color == HobColors.ERROR or color == HobColors.WARNING:
if color == HobColors.ERROR:
return True
return False
@@ -76,27 +76,15 @@ class RunningBuild (gobject.GObject):
'build-complete' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-aborted' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'task-started' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
'log-error' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'log-warning' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'disk-full' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'no-provider' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
'log' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_PYOBJECT,)),
}
pids_to_task = {}
tasks_to_iter = {}
@@ -105,7 +93,6 @@ class RunningBuild (gobject.GObject):
gobject.GObject.__init__ (self)
self.model = RunningBuildModel()
self.sequential = sequential
self.buildaborted = False
def reset (self):
self.pids_to_task.clear()
@@ -135,8 +122,6 @@ class RunningBuild (gobject.GObject):
parent = self.tasks_to_iter[(package, task)]
if(isinstance(event, logging.LogRecord)):
if event.taskpid == 0 or event.levelno > logging.INFO:
self.emit("log", "handle", event)
# FIXME: this is a hack! More info in Yocto #1433
# http://bugzilla.pokylinux.org/show_bug.cgi?id=1433, temporarily
# mask the error message as it's not informative for the user.
@@ -154,7 +139,6 @@ class RunningBuild (gobject.GObject):
elif event.levelno >= logging.WARNING:
icon = "dialog-warning"
color = HobColors.WARNING
self.emit("log-warning")
else:
icon = None
color = HobColors.OK
@@ -223,7 +207,6 @@ class RunningBuild (gobject.GObject):
self.tasks_to_iter[(package, task)] = i
elif isinstance(event, bb.build.TaskBase):
self.emit("log", "info", event._message)
current = self.tasks_to_iter[(package, task)]
parent = self.tasks_to_iter[(package, None)]
@@ -291,10 +274,7 @@ class RunningBuild (gobject.GObject):
0))
# Emit the appropriate signal depending on the number of failures
if self.buildaborted:
self.emit ("build-aborted")
self.buildaborted = False
elif (failures >= 1):
if (failures >= 1):
self.emit ("build-failed")
else:
self.emit ("build-succeeded")
@@ -306,12 +286,7 @@ class RunningBuild (gobject.GObject):
if pbar:
pbar.set_text(event.msg)
elif isinstance(event, bb.event.DiskFull):
self.buildaborted = True
self.emit("disk-full")
elif isinstance(event, bb.command.CommandFailed):
self.emit("log", "error", "Command execution failed: %s" % (event.error))
if event.error.startswith("Exited with"):
# If the command fails with an exit code we're done, emit the
# generic signal for the UI to notify the user
@@ -339,24 +314,7 @@ class RunningBuild (gobject.GObject):
elif isinstance(event, bb.event.ParseCompleted) and pbar:
pbar.hide()
#using runqueue events as many as possible to update the progress bar
elif isinstance(event, bb.runqueue.runQueueTaskFailed):
self.emit("log", "error", "Task %s (%s) failed with exit code '%s'" % (event.taskid, event.taskstring, event.exitcode))
elif isinstance(event, bb.runqueue.sceneQueueTaskFailed):
self.emit("log", "warn", "Setscene task %s (%s) failed with exit code '%s' - real task will be run instead" \
% (event.taskid, event.taskstring, event.exitcode))
elif isinstance(event, (bb.runqueue.runQueueTaskStarted, bb.runqueue.sceneQueueTaskStarted)):
if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
self.emit("log", "info", "Running setscene task %d of %d (%s)" % \
(event.stats.completed + event.stats.active + event.stats.failed + 1,
event.stats.total, event.taskstring))
else:
if event.noexec:
tasktype = 'noexec task'
else:
tasktype = 'task'
self.emit("log", "info", "Running %s %s of %s (ID: %s, %s)" % \
(tasktype, event.stats.completed + event.stats.active + event.stats.failed + 1,
event.stats.total, event.taskid, event.taskstring))
message = {}
message["eventname"] = bb.event.getName(event)
num_of_completed = event.stats.completed + event.stats.failed
@@ -365,10 +323,6 @@ class RunningBuild (gobject.GObject):
message["title"] = ""
message["task"] = event.taskstring
self.emit("task-started", message)
elif isinstance(event, bb.event.MultipleProviders):
self.emit("log", "info", "multiple providers are available for %s%s (%s)" \
% (event._is_runtime and "runtime " or "", event._item, ", ".join(event._candidates)))
self.emit("log", "info", "consider defining a PREFERRED_PROVIDER entry to match %s" % (event._item))
elif isinstance(event, bb.event.NoProvider):
msg = ""
if event._runtime:
@@ -383,34 +337,6 @@ class RunningBuild (gobject.GObject):
for reason in event._reasons:
msg += ("%s\n" % reason)
self.emit("no-provider", msg)
self.emit("log", "error", msg)
elif isinstance(event, bb.event.LogExecTTY):
icon = "dialog-warning"
color = HobColors.WARNING
if self.sequential or not parent:
tree_add = self.model.append
else:
tree_add = self.model.prepend
tree_add(parent,
(None,
package,
task,
event.msg,
icon,
color,
0))
else:
if not isinstance(event, (bb.event.BuildBase,
bb.event.StampUpdate,
bb.event.ConfigParsed,
bb.event.RecipeParsed,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait,
bb.event.OperationStarted,
bb.event.OperationCompleted,
bb.event.OperationProgress)):
self.emit("log", "error", "Unknown event: %s" % (event.error if hasattr(event, 'error') else 'error'))
return

View File

@@ -1,85 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Bogdan Marinescu <bogdan.a.marinescu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk, gobject
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic
from bb.ui.crumbs.hobpages import HobPage
#
# SanityCheckPage
#
class SanityCheckPage (HobPage):
def __init__(self, builder):
super(SanityCheckPage, self).__init__(builder)
self.running = False
self.create_visual_elements()
self.show_all()
def make_label(self, text, bold=True):
label = gtk.Label()
label.set_alignment(0.0, 0.5)
mark = "<span %s>%s</span>" % (self.span_tag('x-large', 'bold') if bold else self.span_tag('medium'), text)
label.set_markup(mark)
return label
def start(self):
if not self.running:
self.running = True
gobject.timeout_add(100, self.timer_func)
def stop(self):
self.running = False
def is_running(self):
return self.running
def timer_func(self):
self.progress_bar.pulse()
return self.running
def create_visual_elements(self):
# Table'd layout. 'rows' and 'cols' give the table size
rows, cols = 30, 50
self.table = gtk.Table(rows, cols, True)
self.pack_start(self.table, expand=False, fill=False)
sx, sy = 2, 2
# 'info' icon
image = gtk.Image()
image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
self.table.attach(image, sx, sx + 2, sy, sy + 3 )
image.show()
# 'Checking' message
label = self.make_label('Hob is checking for correct build system setup')
self.table.attach(label, sx + 2, cols, sy, sy + 3, xpadding=5 )
label.show()
# 'Shouldn't take long' message.
label = self.make_label("The check shouldn't take long.", False)
self.table.attach(label, sx + 2, cols, sy + 3, sy + 4, xpadding=5)
label.show()
# Progress bar
self.progress_bar = HobProgressBar()
self.table.attach(self.progress_bar, sx + 2, cols - 3, sy + 5, sy + 7, xpadding=5)
self.progress_bar.show()
# All done
self.table.show()

View File

@@ -101,19 +101,7 @@ class HobTemplateFile(ConfigFile):
return self.dictionary[var]
else:
return ""
def getVersion(self):
contents = ConfigFile.readFile(self)
pattern = "^\s*(\S+)\s*=\s*(\".*?\")"
for line in contents:
match = re.search(pattern, line)
if match:
if match.group(1) == "VERSION":
return match.group(2).strip('"')
return None
def load(self):
contents = ConfigFile.readFile(self)
self.dictionary.clear()
@@ -137,6 +125,8 @@ class RecipeFile(ConfigFile):
class TemplateMgr(gobject.GObject):
__gLocalVars__ = ["MACHINE", "PACKAGE_CLASSES", "DISTRO", "DL_DIR", "SSTATE_DIR", "SSTATE_MIRROR", "PARALLEL_MAKE", "BB_NUMBER_THREADS", "CONF_VERSION"]
__gBBLayersVars__ = ["BBLAYERS", "LCONF_VERSION"]
__gRecipeVars__ = ["DEPENDS", "IMAGE_INSTALL"]
def __init__(self):
@@ -150,27 +140,40 @@ class TemplateMgr(gobject.GObject):
def convert_to_template_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "template-", filename, ".hob")
@classmethod
def convert_to_bblayers_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "bblayers-", filename, ".conf")
@classmethod
def convert_to_local_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "local-", filename, ".conf")
@classmethod
def convert_to_image_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "hob-image-", filename, ".bb")
def open(self, filename, path):
self.template_hob = HobTemplateFile(TemplateMgr.convert_to_template_pathfilename(filename, path))
self.bblayers_conf = ConfigFile(TemplateMgr.convert_to_bblayers_pathfilename(filename, path))
self.local_conf = ConfigFile(TemplateMgr.convert_to_local_pathfilename(filename, path))
self.image_bb = RecipeFile(TemplateMgr.convert_to_image_pathfilename(filename, path))
def setVar(self, var, val):
if var in TemplateMgr.__gLocalVars__:
self.local_conf.setVar(var, val)
if var in TemplateMgr.__gBBLayersVars__:
self.bblayers_conf.setVar(var, val)
if var in TemplateMgr.__gRecipeVars__:
self.image_bb.setVar(var, val)
self.template_hob.setVar(var, val)
def save(self):
self.local_conf.save()
self.bblayers_conf.save()
self.image_bb.save()
self.template_hob.save()
def getVersion(self, path):
return HobTemplateFile(path).getVersion()
def load(self, path):
self.template_hob = HobTemplateFile(path)
self.dictionary = self.template_hob.load()
@@ -182,6 +185,12 @@ class TemplateMgr(gobject.GObject):
if self.template_hob:
del self.template_hob
template_hob = None
if self.bblayers_conf:
del self.bblayers_conf
self.bblayers_conf = None
if self.local_conf:
del self.local_conf
self.local_conf = None
if self.image_bb:
del self.image_bb
self.image_bb = None

View File

@@ -22,7 +22,6 @@
# bitbake which will allow more flexibility.
import os
import bb
def which_terminal():
term = bb.utils.which(os.environ["PATH"], "xterm")

View File

@@ -24,7 +24,7 @@ import threading
import xmlrpclib
import bb
import bb.event
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.progress import ProgressBar
# Package Model
(COL_PKG_NAME) = (0)
@@ -226,13 +226,8 @@ def main(server, eventHandler):
gtk.gdk.threads_enter()
dep = DepExplorer()
bardialog = gtk.Dialog(parent=dep,
flags=gtk.DIALOG_MODAL|gtk.DIALOG_DESTROY_WITH_PARENT)
bardialog.set_default_size(400, 50)
pbar = HobProgressBar()
bardialog.vbox.pack_start(pbar)
bardialog.show_all()
bardialog.connect("delete-event", gtk.main_quit)
pbar = ProgressBar(dep)
pbar.connect("delete-event", gtk.main_quit)
gtk.gdk.threads_leave()
progress_total = 0
@@ -251,20 +246,19 @@ def main(server, eventHandler):
if isinstance(event, bb.event.CacheLoadStarted):
progress_total = event.total
gtk.gdk.threads_enter()
bardialog.set_title("Loading Cache")
pbar.update(0)
pbar.set_title("Loading Cache")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.CacheLoadProgress):
x = event.current
gtk.gdk.threads_enter()
pbar.update(x * 1.0 / progress_total)
pbar.set_title('')
pbar.update(x, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.CacheLoadCompleted):
bardialog.hide()
pbar.hide()
continue
if isinstance(event, bb.event.ParseStarted):
@@ -272,21 +266,19 @@ def main(server, eventHandler):
if progress_total == 0:
continue
gtk.gdk.threads_enter()
pbar.update(0)
bardialog.set_title("Processing recipes")
pbar.set_title("Processing recipes")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.ParseProgress):
x = event.current
gtk.gdk.threads_enter()
pbar.update(x * 1.0 / progress_total)
pbar.set_title('')
pbar.update(x, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.ParseCompleted):
bardialog.hide()
pbar.hide()
continue
if isinstance(event, bb.event.DepTreeGenerated):

View File

@@ -81,7 +81,7 @@ def main (server, eventHandler):
try:
cmdline, error = server.runCommand(["getCmdLineAction"])
if error:
if err:
print("Error getting bitbake commandline: %s" % error)
return 1
elif not cmdline:

View File

@@ -22,7 +22,7 @@
import sys
import os
requirements = "FATAL: Hob requires Gtk+ 2.20.0 or higher, PyGtk 2.21.0 or higher"
requirements = "FATAL: Gtk+, PyGtk and PyGobject are required to use Hob"
try:
import gobject
import gtk
@@ -30,7 +30,7 @@ try:
pygtk.require('2.0') # to be certain we don't have gtk+ 1.x !?!
gtkver = gtk.gtk_version
pygtkver = gtk.pygtk_version
if gtkver < (2, 20, 0) or pygtkver < (2, 21, 0):
if gtkver < (2, 18, 0) or pygtkver < (2, 16, 0):
sys.exit("%s,\nYou have Gtk+ %s and PyGtk %s." % (requirements,
".".join(map(str, gtkver)),
".".join(map(str, pygtkver))))

Binary file not shown (image changed: 4.0 KiB before, 4.5 KiB after)

Binary file not shown (image changed: 4.1 KiB before, 4.5 KiB after)

View File

@@ -25,12 +25,7 @@ import sys
import xmlrpclib
import logging
import progressbar
import signal
import bb.msg
import time
import fcntl
import struct
import copy
from bb.ui import uihelper
logger = logging.getLogger("BitBake")
@@ -42,21 +37,8 @@ class BBProgress(progressbar.ProgressBar):
widgets = [progressbar.Percentage(), ' ', progressbar.Bar(), ' ',
progressbar.ETA()]
try:
self._resize_default = signal.getsignal(signal.SIGWINCH)
except:
self._resize_default = None
progressbar.ProgressBar.__init__(self, maxval, [self.msg + ": "] + widgets)
def _handle_resize(self, signum, frame):
progressbar.ProgressBar._handle_resize(self, signum, frame)
if self._resize_default:
self._resize_default(signum, frame)
def finish(self):
progressbar.ProgressBar.finish(self)
if self._resize_default:
signal.signal(signal.SIGWINCH, self._resize_default)
class NonInteractiveProgress(object):
fobj = sys.stdout
@@ -88,133 +70,39 @@ def pluralise(singular, plural, qty):
else:
return plural % qty
class InteractConsoleLogFilter(logging.Filter):
def __init__(self, tf, format):
self.tf = tf
self.format = format
def filter(self, record):
if record.levelno == self.format.NOTE and (record.msg.startswith("Running") or record.msg.startswith("recipe ")):
return False
self.tf.clearFooter()
return True
class TerminalFilter(object):
columns = 80
def sigwinch_handle(self, signum, frame):
self.columns = self.getTerminalColumns()
if self._sigwinch_default:
self._sigwinch_default(signum, frame)
def getTerminalColumns(self):
def ioctl_GWINSZ(fd):
try:
cr = struct.unpack('hh', fcntl.ioctl(fd, self.termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
cr = ioctl_GWINSZ(sys.stdout.fileno())
if not cr:
try:
fd = os.open(os.ctermid(), os.O_RDONLY)
cr = ioctl_GWINSZ(fd)
os.close(fd)
except:
pass
if not cr:
try:
cr = (env['LINES'], env['COLUMNS'])
except:
cr = (25, 80)
return cr[1]
def __init__(self, main, helper, console, format):
self.main = main
self.helper = helper
self.cuu = None
self.stdinbackup = None
self.interactive = sys.stdout.isatty()
self.footer_present = False
self.lastpids = []
if not self.interactive:
return
try:
import curses
except ImportError:
sys.exit("FATAL: The knotty ui could not load the required curses python module.")
import termios
self.curses = curses
self.termios = termios
try:
fd = sys.stdin.fileno()
self.stdinbackup = termios.tcgetattr(fd)
new = copy.deepcopy(self.stdinbackup)
new[3] = new[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSADRAIN, new)
curses.setupterm()
if curses.tigetnum("colors") > 2:
format.enable_color()
self.ed = curses.tigetstr("ed")
if self.ed:
self.cuu = curses.tigetstr("cuu")
try:
self._sigwinch_default = signal.getsignal(signal.SIGWINCH)
signal.signal(signal.SIGWINCH, self.sigwinch_handle)
except:
pass
self.columns = self.getTerminalColumns()
except:
self.cuu = None
console.addFilter(InteractConsoleLogFilter(self, format))
def clearFooter(self):
if self.footer_present:
lines = self.footer_present
sys.stdout.write(self.curses.tparm(self.cuu, lines))
sys.stdout.write(self.curses.tparm(self.ed))
self.footer_present = False
return
def updateFooter(self):
if not self.cuu:
if not main.shutdown or not self.helper.needUpdate:
return
activetasks = self.helper.running_tasks
failedtasks = self.helper.failed_tasks
runningpids = self.helper.running_pids
if self.footer_present and (self.lastcount == self.helper.tasknumber_current) and (self.lastpids == runningpids):
return
if self.footer_present:
self.clearFooter()
if (not self.helper.tasknumber_total or self.helper.tasknumber_current == self.helper.tasknumber_total) and not len(activetasks):
if len(runningpids) == 0:
return
self.helper.getTasks()
tasks = []
for t in runningpids:
tasks.append("%s (pid %s)" % (activetasks[t]["title"], t))
if self.main.shutdown:
content = "Waiting for %s running tasks to finish:" % len(activetasks)
elif not len(activetasks):
content = "No currently running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
if main.shutdown:
print("Waiting for %s running tasks to finish:" % len(activetasks))
else:
content = "Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
print content
lines = 1 + int(len(content) / (self.columns + 1))
print("Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total))
for tasknum, task in enumerate(tasks):
content = "%s: %s" % (tasknum, task)
print content
lines = lines + 1 + int(len(content) / (self.columns + 1))
self.footer_present = lines
self.lastpids = runningpids[:]
self.lastcount = self.helper.tasknumber_current
print("%s: %s" % (tasknum, task))
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
return
def main(server, eventHandler, tf = TerminalFilter):
@@ -232,25 +120,17 @@ def main(server, eventHandler, tf = TerminalFilter):
logger.error("Unable to get the value of BB_CONSOLELOG variable: %s" % error)
return 1
if sys.stdin.isatty() and sys.stdout.isatty():
log_exec_tty = True
else:
log_exec_tty = False
helper = uihelper.BBUIHelper()
console = logging.StreamHandler(sys.stdout)
format_str = "%(levelname)s: %(message)s"
format = bb.msg.BBLogFormatter(format_str)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
logger.addHandler(console)
if consolelogfile:
bb.utils.mkdirhier(os.path.dirname(consolelogfile))
conlogformat = bb.msg.BBLogFormatter(format_str)
consolelog = logging.FileHandler(consolelogfile)
bb.msg.addDefaultlogFilter(consolelog)
consolelog.setFormatter(conlogformat)
consolelog.setFormatter(format)
logger.addHandler(consolelog)
try:
@@ -296,20 +176,6 @@ def main(server, eventHandler, tf = TerminalFilter):
if not main.shutdown:
main.shutdown = 1
if isinstance(event, bb.event.LogExecTTY):
if log_exec_tty:
tries = event.retries
while tries:
print "Trying to run: %s" % event.prog
if os.system(event.prog) == 0:
break
time.sleep(event.sleep_delay)
tries -= 1
if tries:
continue
logger.warn(event.msg)
continue
if isinstance(event, logging.LogRecord):
if event.levelno >= format.ERROR:
errors = errors + 1
@@ -329,7 +195,7 @@ def main(server, eventHandler, tf = TerminalFilter):
logfile = event.logfile
if logfile and os.path.exists(logfile):
termfilter.clearFooter()
bb.error("Logfile of failure stored in: %s" % logfile)
print("ERROR: Logfile of failure stored in: %s" % logfile)
if includelogs and not event.errprinted:
print("Log data follows:")
f = open(logfile, "r")
@@ -453,8 +319,7 @@ def main(server, eventHandler, tf = TerminalFilter):
bb.runqueue.runQueueExitWait,
bb.event.OperationStarted,
bb.event.OperationCompleted,
bb.event.OperationProgress,
bb.event.DiskFull)):
bb.event.OperationProgress)):
continue
logger.error("Unknown event: %s", event)
@@ -465,6 +330,7 @@ def main(server, eventHandler, tf = TerminalFilter):
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
import time
termfilter.clearFooter()
if main.shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")

View File

@@ -0,0 +1,109 @@
#
# BitBake (No)TTY UI Implementation (v2)
#
# Handling output to TTYs or files (no TTY)
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb.ui import knotty
import logging
import sys
logger = logging.getLogger("BitBake")
class InteractConsoleLogFilter(logging.Filter):
def __init__(self, tf, format):
self.tf = tf
self.format = format
def filter(self, record):
if record.levelno == self.format.NOTE and (record.msg.startswith("Running") or record.msg.startswith("package ")):
return False
self.tf.clearFooter()
return True
class TerminalFilter2(object):
def __init__(self, main, helper, console, format):
self.main = main
self.helper = helper
self.cuu = None
self.stdinbackup = None
self.interactive = sys.stdout.isatty()
self.footer_present = False
self.lastpids = []
if not self.interactive:
return
import curses
import termios
import copy
self.curses = curses
self.termios = termios
try:
fd = sys.stdin.fileno()
self.stdinbackup = termios.tcgetattr(fd)
new = copy.deepcopy(self.stdinbackup)
new[3] = new[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSADRAIN, new)
curses.setupterm()
self.ed = curses.tigetstr("ed")
if self.ed:
self.cuu = curses.tigetstr("cuu")
except:
self.cuu = None
console.addFilter(InteractConsoleLogFilter(self, format))
def clearFooter(self):
if self.footer_present:
lines = self.footer_present
sys.stdout.write(self.curses.tparm(self.cuu, lines))
sys.stdout.write(self.curses.tparm(self.ed))
self.footer_present = False
def updateFooter(self):
if not self.cuu:
return
activetasks = self.helper.running_tasks
failedtasks = self.helper.failed_tasks
runningpids = self.helper.running_pids
if self.footer_present and (self.lastpids == runningpids):
return
if self.footer_present:
self.clearFooter()
if not activetasks:
return
lines = 1
tasks = []
for t in runningpids:
tasks.append("%s (pid %s)" % (activetasks[t]["title"], t))
if self.main.shutdown:
print("Waiting for %s running tasks to finish:" % len(activetasks))
else:
print("Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total))
for tasknum, task in enumerate(tasks):
print("%s: %s" % (tasknum, task))
lines = lines + 1
self.footer_present = lines
self.lastpids = runningpids[:]
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
def main(server, eventHandler):
bb.ui.knotty.main(server, eventHandler, TerminalFilter2)

View File

@@ -47,13 +47,7 @@
from __future__ import division
import logging
import os, sys, itertools, time, subprocess
try:
import curses
except ImportError:
sys.exit("FATAL: The ncurses ui could not load the required curses python module.")
import os, sys, curses, itertools, time
import bb
import xmlrpclib
from bb import ui
@@ -295,7 +289,7 @@ class NCursesUI:
# bb.error("log data follows (%s)" % logfile)
# number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
# if number_of_lines:
# subprocess.call('tail -n%s %s' % (number_of_lines, logfile), shell=True)
# os.system('tail -n%s %s' % (number_of_lines, logfile))
# else:
# f = open(logfile, "r")
# while True:
@@ -321,8 +315,6 @@ class NCursesUI:
if isinstance(event, bb.cooker.CookerExit):
exitflag = True
if isinstance(event, bb.event.LogExecTTY):
mw.appendText('WARN: ' + event.msg + '\n')
if helper.needUpdate:
activetasks, failedtasks = helper.getTasks()
taw.erase()

View File

@@ -37,17 +37,6 @@ class BBUIEventQueue:
self.BBServer = BBServer
self.clientinfo = clientinfo
server = UIXMLRPCServer(self.clientinfo)
self.host, self.port = server.socket.getsockname()
server.register_function( self.system_quit, "event.quit" )
server.register_function( self.send_event, "event.sendpickle" )
server.socket.settimeout(1)
self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)
self.server = server
self.t = threading.Thread()
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
@@ -84,9 +73,19 @@ class BBUIEventQueue:
def startCallbackHandler(self):
while not self.server.quit:
self.server.handle_request()
self.server.server_close()
server = UIXMLRPCServer(self.clientinfo)
self.host, self.port = server.socket.getsockname()
server.register_function( self.system_quit, "event.quit" )
server.register_function( self.send_event, "event.sendpickle" )
server.socket.settimeout(1)
self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)
self.server = server
while not server.quit:
server.handle_request()
server.server_close()
def system_quit( self ):
"""

Some files were not shown because too many files have changed in this diff.