Compare commits

346 Commits
1.3 ... denzil

Author SHA1 Message Date
Richard Purdie
6caa7d1d6c bitbake: command: Fix getCmdLineAction bugs
Executing "bitbake" doesn't get a sane message since the None return value
wasn't being handled correctly. Also fix msg -> cmd_action['msg'] as
otherwise an invalid variable is accessed which then crashes the server
due to the previous bug.

(Bitbake rev: c6211291ae07410832031a5274690437cc2b09a6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:14 +00:00
Richard Purdie
2a69358610 bitbake: command: Add missing import traceback
Without this, if an exception occurs the server will silently crash
with no feedback to the user about why (since traceback isn't imported).

(Bitbake rev: e637a635bf7b5a9a2e9dc20afc18aceec98d578f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:06 +00:00
Christopher Larson
f3acb135e7 bitbake: command: add error to return of runCommand
Currently, command.py can return an error message from runCommand, due to
being unable to run the command, yet few of our UIs (just hob) can handle it
today. This can result in seeing a TypeError with traceback in certain rare
circumstances.

To resolve this, we need a clean way to get errors back from runCommand,
without having to isinstance() the return value. This implements such a thing
by making runCommand also return an error (or None if no error occurred).

As runCommand now has a method of returning errors, we can also alter the
getCmdLineAction bits such that the returned value is just the action, not an
additional message. If a sync command wants to return an error, it raises
CommandError(message), and the message will be passed to the caller
appropriately.

Example Usage:

    result, error = server.runCommand(...)
    if error:
        log.error('Unable to run command: %s' % error)
        return 1
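
A minimal, self-contained sketch of the convention described above (the class and
command names here are made up for illustration; only CommandError and the
(result, error) return shape come from this commit):

    class CommandError(Exception):
        """Raised by a sync command; the message becomes runCommand's error."""

    def run_sync_command(func, *params):
        # mirrors the described behaviour: return (result, error)
        try:
            return func(*params), None
        except CommandError as exc:
            return None, str(exc)

    def example_command(value=None):
        if value is None:
            raise CommandError("example_command requires a value")
        return value * 2

    result, error = run_sync_command(example_command)
    # -> (None, "example_command requires a value")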

(Bitbake rev: 717831b8315cb3904d9b590e633000bc897e8fb6)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:58 +00:00
Saul Wold
878ef6a31c libtasn1: Upgrade to 2.13
(From OE-Core rev: 94c375a281378413d24a402ec6a59762d0eb5b85)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Nitin A Kamble
f49e7629df libtasn1: fix build with automake 1.12
(From OE-Core rev: 6fb4913eb237062303bcda50e9910f53dc95d0dd)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Saul Wold
e124ac50dc libtasn1: Update to 2.12
Use the GNU_MIRROR correctly

(From OE-Core rev: d02a682360d803f9b5f033ddc5d0f43020eebd13)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Martin Jansa
498951a247 bison: move remove-gets.patch to BASE_SRC_URI, it's needed for bison-native too if host has (e)glibc-2.16
(From OE-Core rev: cae95a527c1e9faefc0c051254e67dad7fad4197)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Elizabeth Flanagan
3456295898 build-appliance-image: Bump SRCREV
With the pending point release for denzil we need to point
to the release revision and the correct branch.

(From OE-Core rev: 0a9e8bf35afd5990c1b586bba5eb68f643458a4b)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:23:43 +00:00
Elizabeth Flanagan
c917351c9b DISTRO var bump for pending release
With the pending 1.2.2 release we need to bump distro vars.

(From meta-yocto rev: f9b4864a7fb4f25df74f1bf3dc1d55e72bd27fc1)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:21:17 +00:00
Tom Zanussi
f3db1cd8ff yocto-bsp: set branches_base for list_property_values()
yocto_bsp_list_property_values() is missing the context it needs to
properly filter choicelists, so add it to the context object.

Fixes [YOCTO #3233]

(From meta-yocto rev: 064b15f76c5b52899f4c3fdef06412c3063062a5)

(From meta-yocto rev: 601b6227908f3dd7972ad62c53d1041f4429aeb2)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:37:08 +00:00
Scott Rifenbark
b0689417df documentation: Formats. Also, the January 2013 date summary
I went into the chapters and did some formatting in order
to generate a new commit.  The commit summary message for
the previous commit was wrong and I pushed it.  The date
for the 1.2.2 release is January 2013.

(From yocto-docs rev: 457549a44cb7871d5c645f5aab02350cf76b6f1f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
270402dd38 documentation: Updated manual history tables for Feb 2013
The release date for the five manuals was updated to
Feb 2013 for the 1.2.2 release.

(From yocto-docs rev: 2110815be55bddbfd24495aad7b8d5e2b69f3475)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
5d30ffd8f8 documentation: Added February 2013 as release date for 1.3.1
I added this date as the release date for the five manuals
that have a manual history table.

(From yocto-docs rev: 5f107aab8bd2de0be78163eaf356656ddae4bf5f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
c2c8bb40e8 poky.ent: Updated to remove "current" and replace with 1.2.2
(From yocto-docs rev: ad61c64d5b33ca3b7aa02f67934c7c2317d8cbe1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
625044e9a8 documentation: Updated Manual History table for 1.2.2 release
Involved putting in a placeholder date, bumping the copyright
date to 2013, and updating the poky.ent file as appropriate
for 1.2.2 and 7.0.2.

(From yocto-docs rev: 0a76733066b3440809ecafce756c5fdb4eafaae6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
d571e5a68a Documentation: ref-manual - Updated LIC_FILES_CHKSUM example.
One of the examples used "startline" instead of "beginline".
Correction made.

(From yocto-docs rev: 59345ad197619280bef7a469d671feae80f0c4e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
8e078932c3 documentation: poky-ref-manual - Fixed grammar typo.
(From yocto-docs rev: 2660f17b79a36772081e37117be85ee304b78f20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Garman
151d4fbc4e cups: patch for CVE-2011-2896
Patch from: http://cups.org/strfiles/3867/str3867.patch

The LZW decompressor in the LWZReadByte function in giftoppm.c in the
David Koblas GIF decoder in PBMPLUS, as used in the gif_read_lzw
function in filter/image-gif.c in CUPS before 1.4.7, the LZWReadByte
function in plug-ins/common/file-gif-load.c in GIMP 2.6.11 and earlier,
the LZWReadByte function in img/gifread.c in XPCE in SWI-Prolog 5.10.4
and earlier, and other products, does not properly handle code words
that are absent from the decompression table when encountered, which
allows remote attackers to trigger an infinite loop or a heap-based
buffer overflow, and possibly execute arbitrary code, via a crafted
compressed stream, a related issue to CVE-2006-1168 and CVE-2011-2895.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2896

[YOCTO #3582]
[ CQID: WIND00299595 ]

(From OE-Core rev: f4aca76c7933abf2771999c309d49ab91a3d9480)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Merged with denzil branch, partial fix for denzil bug [YOCTO #3652]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Li Wang
caa1d03089 librsvg: CVE-2011-3146
Store node type separately in RsvgNode

commit 34c95743ca692ea0e44778e41a7c0a129363de84 upstream

The node name (formerly RsvgNode:type) cannot be used to infer
the sub-type of RsvgNode that we're dealing with, since for unknown
elements we put type = node-name. This led to a (potentially exploitable)
crash e.g. when the element name started with "fe" which tricked
the old code into considering it as a RsvgFilterPrimitive.
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3146

https://bugzilla.gnome.org/show_bug.cgi?id=658014

[YOCTO #3581]
[ CQID: WIND00376773 ]
Upstream-Status: Backport

(From OE-Core rev: 6d030fcb69221da073ce413049deb8447934bed5)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts with denzil branch.

Fixes denzil bug [YOCTO #3651].

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
yanjun.zhu
a86e32a18b squashfs: fix CVE-2012-4025
CQID:WIND00366813

Reference: http://squashfs.git.sourceforge.net/git/gitweb.cgi?
p=squashfs/squashfs;a=patch;h=8515b3d420f502c5c0236b86e2d6d7e3b23c190e

Integer overflow in the queue_init function in unsquashfs.c in
unsquashfs in Squashfs 4.2 and earlier allows remote attackers
to execute arbitrary code via a crafted block_log field in the
superblock of a .sqsh file, leading to a heap-based buffer overflow.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4025

(From OE-Core rev: e6fddd1961061895e9335fa94b636163efdc9caa)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>

[YOCTO #3564]
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
156c2554b7 freetype: patches for CVE-2012-5668, 5669, and 5670
For details of these security issues, please see:

http://www.openwall.com/lists/oss-security/2012/12/25/1

Thanks to Eren Turkay <eren@hambedded.org> for submitting source
patches that apply cleanly to freetype 2.4.9.

This fixes denzil bug [YOCTO #3649]

(From OE-Core rev: be34916d81b71385a560a6990c7b30eba243b356)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
b6037b6d2f libxml2: patch for CVE-2012-2871
The patch comes from:
http://src.chromium.org/viewvc/chrome/trunk/src/third_party/libxml/ \
src/include/libxml/tree.h?r1=56276&r2=149930

libxml2 2.9.0-rc1 and earlier, as used in Google Chrome before
21.0.1180.89, does not properly support a cast of an unspecified
variable during handling of XSL transforms, which allows remote
attackers to cause a denial of service or possibly have unknown other
impact via a crafted document, related to the _xmlNs data structure in
include/libxml/tree.h.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-2871

[YOCTO #3580]
[ CQID: WIND00376779 ]

(From OE-Core rev: fa3d44594360786b2526d64f0ea5bc26b44a1fa8)

Signed-off-by: Li Wang <li.wang at windriver.com>

This fixes denzil bug [YOCTO #3648]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
cf77faba91 gitignore: add generated doc files to ignore list
(From OE-Core rev: c067fbcb910f888cc6328d725a395ce681862377)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Richard Purdie
88b65c79ac boot-directdisk: Fix kernel location after STAGING_KERNEL_DIR change
This catches up with the STAGING_KERNEL_DIR location change
and uses the correct variable to future proof this issue.

[YOCTO #2783]

(From OE-Core rev: 28715eff6dff3415b1d7b0be8cbb465c417e307f)

(From OE-Core rev: f02a7341e37aec155772e1546d8b21ef2c9f5e9d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
bfae0622b5 build-appliance-image: Allow SRCREV to be overriden
This will allow us to automagically set the SRCREV for builds on the
autobuilder. It will still require manual updating for releases.

(From OE-Core rev: 1b4781e5c6eee234fcf57dd53d5167b31d81a482)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
01c1421270 psplash: new patch to fix segfault
This fixes a segmentation fault when passing -a without
an argument.

Fixes [YOCTO #2903]

(From OE-Core rev: f5b8ba5e51ac41cf375119a88083617f667a85d5)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Mihai Lindner
2ccb03f9b7 sysklogd: removed tabs from syslog.conf
Yocto #2926: syslog.conf should not have tabs within the selector field.
Removed tabs from the selector field of syslog rules. Tabs or spaces
should be used, in syslog.conf, only when separating selectors from
actions.

(From OE-Core rev: 1316be4e597332a629842b3f5a7dde8e45dd057d)

(From OE-Core rev: c806466c8d4a9d0d4a66d34d3565d5879c2f2b0f)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
07f304f4e2 grub,guile,cpio,tar,wget: Fix gnulib for absence of gets in eglibc
eglibc 2.16 does not export gets anymore

(From OE-Core rev: 043d67c6677fa87496c4c441e9d366e2003ab9aa)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported guile
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
db915496d3 bison: Fix for gets being removed from eglibc 2.16
(From OE-Core rev: bc91a267d097c100480ea02ece7fb372167eaf7f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
8fe4344e74 gettext,m4,augeas,gnutls: Account for removal of gets in eglibc 2.16
These recipes use gnulib, which needs this change so that gets is used only
when it is defined. Until that change goes into gnulib and all these packages
upgrade gnulib in their source base, we patch them here.

(From OE-Core rev: b955f1a7bc716055c78ed575eccac6f611dc2395)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported gnutls
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
3c57cb356e diffutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: b6bcd4e26e94364939c8874db90e64fbb242e841)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
38d6972032 coreutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: 965243ab5b5d992977193c444dbbbf09701467c2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Scott Garman
ef745cb34f poky.conf: Add Ubuntu 12.04.1 LTS to SANITY_TESTED_DISTROS
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 16:02:04 +00:00
yanjun.zhu
a6f0dcbe7d squashfs: fix for CVE-2012-4024
Reference:http://squashfs.git.sourceforge.net/git/gitweb.cgi?p=
squashfs/squashfs;a=commit;h=19c38fba0be1ce949ab44310d7f49887576cc123

Fix potential stack overflow in get_component() where an individual
pathname component in an extract file (specified on the command line
or in an extract file) could exceed the 1024 byte sized targname
allocated on the stack.

Fix by dynamically allocating targname rather than storing it as
a fixed size on the stack.

[YOCTO #3513]

Fixes denzil [YOCTO #3520]

(From OE-Core rev: d35560f33f257bd12a07c7c0be770319086d6ad9)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Marcin Juszkiewicz
cf796f8908 libxml: disable lzma
On my system, libxml-native got linked with the host copy of liblzma and as a
result libxslt-native was not linkable:

| x86_64-linux-libtool: link: gcc -isystem/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/include -O2 -pipe -Wall -Wl,-rpath-link -Wl,/home/hrw
/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath-link -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-
linux/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/buil
d/tmp-eglibc/sysroots/x86_64-linux/lib -Wl,-O1 -o .libs/xsltproc xsltproc.o  -L/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -L/home/hrw/
HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/lib ../libxslt/.libs/libxslt.so ../libexslt/.libs/libexslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/
build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/libxslt/.libs/libxslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux
/usr/lib/libxml2.so -ldl /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/liblzma.so -lrt -lz -lm -pthread -Wl,-rpath -Wl,/home/hrw/HDD/deve
l/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_code@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_auto_decoder@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_end@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_properties_decode@XZ_5.0'
| collect2: error: ld returned 1 exit status
| make[2]: *** [xsltproc] Error 1
| make[2]: Leaving directory `/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/xsltproc'

(From OE-Core rev: 42e03215cc494f1508b96c2bb63243a02e5ef812)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Saul Wold
e416fb6920 libxml2: Update to 2.8.0
removed 2 patches that are now fixed upstream
updated hash.c LIC_FILES_CHKSUM due to updating the date to 2012

(From OE-Core rev: c74ed920d3a9a0e379f8fd450e2841628ee0beb2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts in denzil branch.

Addresses CVE-2011-1944.

Fixes denzil [YOCTO #2703]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Richard Purdie
aa66dc91d3 libxml2/libxslt: Don't depend on ansidecl.h header
We don't DEPEND on binutils for ansidecl.h, so ensure we never use the header.
This makes builds deterministic and means something like:

bitbake binutils
bitbake libxml2 -c configure
bitbake binutils -c clean
bitbake libxml2

doesn't fail to build.

(From OE-Core rev: 54d27bbc26d1e45e51ee8ef0f051a2bd8f627cc0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Nitin A Kamble
b45184aef3 libxml2: fix build with automake 1.12
(From OE-Core rev: dd1b77c473ee92608ad0a5bdbea0880d2f613c2c)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Phil Blundell
0e43be806d openssl: Use ${CFLAGS} not ${FULL_OPTIMIZATION}
The latter variable is only applicable for target builds and could
result in passing incompatible options (and/or failing to pass
required options) to ${BUILD_CC} for a virtclass-native build.

(From OE-Core rev: d5a99f3dab07fa676788b434e18174c0798d4460)

Signed-off-by: Phil Blundell <philb@gnu.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Scott Garman
d6935247dd openssl: upgrade to 1.0.0j
Addresses CVE-2012-2333

Fixes [YOCTO #2682]

Fixes denzil [YOCTO #2701]

(From OE-Core rev: cf84ebac391b243099fe0d05223433ecb8e71641)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
yanjun.zhu
b8f6d9cbfd libproxy: Fix for CVE-2012-4504
Reference:https://code.google.com/p/libproxy/source/detail?r=853

Stack-based buffer overflow in the url::get_pac function in url.cpp
in libproxy 0.4.x before 0.4.9 allows remote servers to have an
unspecified impact via a large proxy.pac file.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4504

[YOCTO #3487]

Fixes denzil [YOCTO #3511]

(From OE-Core rev: 543d608ae6251956b84e6423ec66f146f926d4b8)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Martin Jansa
2288ff099c opkg-utils: bump SRCREV to latest
(From OE-Core rev: 119215fee75a64de49d498c3d57446783722a292)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Andrei Gherzan
93cc23571e opkg-utils: Add needed python modules as RDEPENDS
(From OE-Core rev: dadfb4914b25a970c61e7f2354c01086d4823fd6)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
95f2a5b635 rootfs_rpm.bbclass: save rpmlib rather than remove it
The rpmlib was removed for images that add
"remove_packaging_data_files" to ROOTFS_POSTPROCESS_COMMAND, which meant
incremental rpm image generation didn't work in the second build, since
list_installed_packages would return incorrect values. Move the rpmlib
to ${T} rather than removing it, and move it back when
INC_RPM_IMAGE_GEN = 1.

[YOCTO #2690]

(From OE-Core rev: c30e79510c06701f10f659eedaa0fe785538ac17)

(From OE-Core rev: 15e13ea1fc8a0c29b4ca68c31c83ca013c89c36e)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
75997b4565 package_rpm.bbclass: Fix incremental rpm image generation
Fix incremental rpm image generation; it stopped working after the code
was changed.

The btmanifest should have a ".manifest" suffix, so that it can be moved
to ${T} by rootfs_rpm.bbclass:
mv ${IMAGE_ROOTFS}/install/*.manifest ${T}/

Note: The locale pkgs would always be re-installed.

[YOCTO #2690]

(From OE-Core rev: 5149630746626c6d416f26ab9dd1c7213fcd8c50)

(From OE-Core rev: 1f5113ae91ed639cf06fcbb9431b460d7a06bbbc)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Roy.Li
970d279774 bitbake: compile tar-replacement firstly
Whether tar-replacement is compiled is decided by the version of the host tar:
if the host tar version is lower than 1.23, compiling tar-replacement is needed.

While the tar-replacement sysroot is being populated and the tar binary is
being written into the sysroot, other packages may use the partially written
tar to unpack files, which leads to failures reporting the error below:
"bitbake_build/tmp/sysroots/x86_64-linux/usr/bin/tar: Text file busy"

Now we compile tar-replacement first to ensure that a partially written tar
command is never used.

(From OE-Core rev: 3c1c4719fc96f6f1fbb257413d6baf3d91fdf4e8)

(From OE-Core rev: 1a6f61d9493bdbade256dc6c19bbffe75a2684a4)

Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Joe Slater
40e6fc6a65 gettext: install libgettextlib.a before removing it
In a multiple-job build, the Makefile can simultaneously
be installing and removing libgettextlib.a.  We serialize
the operations.

(From OE-Core rev: 2750546b2152eecdbb37e963a2495383f6944184)

(From OE-Core rev: 500c9c1e0047ba9f35e3591f4252fe2dd38bc4f1)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Paul Eggleton
e9c2218231 classes/qmake_base: support linux-gnuspe/linux-uclibcspe TARGET_OS
Fix borrowed from OE-Classic. This should fix build failures during
do_configure of Qt applications with the p1022ds machine from
meta-fsl-ppc, for example.

(From OE-Core rev: a19fc8e19a6cc6885a1e0616b1f42cc49c8f2c9f)

(From OE-Core rev: 0baef81f0ebf854b3e3e78b0d3745cc8ad41491e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Ross Burton
84399e189b gst-plugins-good: disable (uninstalled) examples
The examples pull in a GTK+ build dependency, so remove that too.

(From OE-Core rev: f6975629fd5aa34bf423415bf2328e2146a6e675)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:15 +00:00
Paul Eggleton
846b7c3887 bitbake: lib/bb/siggen.py: log when tainting the signature of a task
Log a note when applying a taint to a task signature (e.g. when using
the -f or -C command line options) so that the user knows this has been
done.

(Bitbake rev: 0fd960fdea83874eedb541cbc2920257e0f3fb81)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:48 +01:00
Paul Eggleton
732007cbb6 bitbake: bitbake: ensure -f causes dependent tasks to be re-run
If -f is specified, force dependent tasks to be re-run next time. This
works by changing the force behaviour so that instead of deleting the
task's stamp, we write a "taint" file into the stamps directory, which
alters the taskhash randomly and thus triggers the task to re-run the
next time we evaluate whether it should be run, as well as influencing
the taskhashes of any dependent tasks so that they are similarly
re-triggered. As a bonus, because we write this file as
<stamp file name>.taskname.taint, the existing code that deletes the
stamp files in OE's do_clean already handles removing it.

This means you can now do the following:

bitbake somepackage
[ change the source code in the package's WORKDIR ]
bitbake -c compile -f somepackage
bitbake somepackage

and the result will be that all of the tasks that depend on do_compile
(do_install, do_package, etc.) will be re-run in the last step.

Note that to operate in the manner described above you need full hashing
enabled (i.e. BB_SIGNATURE_HANDLER must be set to a signature handler
that inherits from BasicHash). If this is not the case, -f will just
delete the stamp for the specified task as it did before.

This fix is required for [YOCTO #2615] and [YOCTO #2256].
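
A rough, self-contained sketch of the taint idea described above (file layout
and hashing are simplified and the helper names are made up; this is not the
actual siggen code):

    import hashlib
    import os
    import uuid

    def write_taint(stampbase, taskname):
        # "-f" writes <stamp file name>.<taskname>.taint with a random value
        with open("%s.%s.taint" % (stampbase, taskname), "w") as f:
            f.write(str(uuid.uuid4()))

    def taskhash(basehash, stampbase, taskname):
        # folding the taint (if present) into the hash changes the taskhash,
        # so the task and anything depending on it re-run next time
        taintfile = "%s.%s.taint" % (stampbase, taskname)
        data = basehash
        if os.path.exists(taintfile):
            with open(taintfile) as f:
                data += f.read()
        return hashlib.md5(data.encode("utf-8")).hexdigest()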

(Bitbake rev: f7b55a94226f9acd985f87946e26d01bd86a35bb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:34 +01:00
Martin Jansa
9945923c8b openssl: add deprecated and unmaintained find.pl from perl-5.14 to fix perlpath.pl
* openembedded-core/meta/recipes-connectivity/openssl/openssl.inc
*
* is using perlpath.pl:
*
*   do_configure () {
*           cd util
*           perl perlpath.pl ${STAGING_BINDIR_NATIVE}
*   ...
*
* and perlpath.pl is using find.pl:
* openssl-1.0.0i/util/perlpath.pl:
*   #!/usr/local/bin/perl
*   #
*   # modify the '#!/usr/local/bin/perl'
*   # line in all scripts that rely on perl.
*   #
*
*   require "find.pl";
*   ...
*
* which was removed in perl-5.16.0 and marked as deprecated and
* unmaintained in 5.14 and older:
* /tmp/usr/lib/perl5/5.14.2/find.pl:
*   warn "Legacy library @{[(caller(0))[6]]} will be removed from the Perl
*   core distribution in the next major release. Please install it from the
*   CPAN distribution Perl4::CoreLibs. It is being used at @{[(caller)[1]]},
*   line @{[(caller)[2]]}.\n";
*
*   # This library is deprecated and unmaintained. It is included for
*   # compatibility with Perl 4 scripts which may use it, but it will be
*   # removed in a future version of Perl. Please use the File::Find module
*   # instead.

(from OE-Core rev c09bf5d177a7ecd2045ef7e13fff4528137a9775)

(From OE-Core rev: c15fae372cf75403facc28cf76f973b1279425dd)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:09 +01:00
Dennis Lan
9bf253f467 openjade-native: fix undefined Getopts error, use std namespace
Using Gentoo Linux as the build host, the build fails without this patch.
Use Getopt::Std in place of getopts.pl.

https://bugs.gentoo.org/show_bug.cgi?id=420083

which fails with the following error:
/usr/bin/perl -w ./../msggen.pl -l jstyleModule InterpreterMessages.msg
/usr/bin/perl -w ./../msggen.pl -l jstyleModule DssslAppMessages.msg
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [InterpreterMessages.h] Error 2
make[2]: *** Waiting for unfinished jobs....
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [DssslAppMessages.h] Error 2

(from OE-Core rev 169a89b10817b742c063fcd76721e4dbbcca6199)

(From OE-Core rev: 7c7dcb05685d840c70474d409f6a58ae459c46f0)

Signed-off-by: Dennis Lan <dennis.yxun@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Richard Purdie
d156f75fea siteconfig: Clear cache before rebuilding
This ensures consistent build results and avoids build failures when, for
example, compiler flags change.

(From OE-Core rev: a5ff8396cad130f809f8f8da49bb38e6f80f923c)

(From OE-Core rev: b21d1daf709ddce14c93a5f16c91ff702e1cb7ff)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Darren Hart
809d97b938 gnutls: Update SRC_URI to use GNU_MIRROR
The current SRC_URI fails. Update it with the GNU_MIRROR SRC_URI from
upstream commit 753b22012f10c393c191d3116b9d38ee4be6d112.

(From OE-Core rev: 8430748e838872b22fe0e83a7dbf3a2a5b1faba2)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: John Howard <john.howard@intel.com>
CC: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Paul Eggleton
e5a85ac2e4 classes/cml1: ensure -c menuconfig forces a rebuild next time
Ensure the following results in the kernel being rebuilt, repackaged and
re-deployed in the final step:

bitbake virtual/kernel
bitbake -c menuconfig virtual/kernel
[ make changes to the kernel configuration and save ]
bitbake virtual/kernel

If there are no changes to the configuration saved, the rebuild will not
be triggered.

Note that this relies on a function recently added to BitBake and
requires full hashing (i.e. BB_SIGNATURE_HANDLER must be set to a
signature handler that inherits from BasicHash) - if this is not the
case or the function is not available in the version of BitBake being
used this change will do nothing.

Fixes [YOCTO #2256].

(From OE-Core rev: 9bf6b60e1599cf5dd87089d42584583cdfd6807a)

(From OE-Core rev: a9600e68e64a111be4cb934e14b914fa553b5654)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Jesse Zhang
b4bb378261 udev: don't mount with -o sync
mount.sh mounts all partitions with -o sync, which is bad for system
performance.

(From OE-Core rev: d49cf73754150b50a911d326aaa666f5da78855c)

(From OE-Core rev: 44c102386c9bca17743d2edd1f94d4071974204d)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:07 +01:00
Bogdan Marinescu
e7d4eba0a9 Save proxy settings
Proxy settings were not properly saved between Hob runs. This
fix is mostly a backport of code in master.

[YOCTO #3024]

Signed-off-by: Bogdan Marinescu <bogdan.a.marinescu@intel.com>
2012-10-01 18:13:15 +01:00
Paul Eggleton
aad5c9f699 bitbake: hob: format error messages properly
Error messages that use arguments need to be formatted properly, or we
don't get the full message. Use a formatter to do this when an error
occurs.
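
A small illustration of the underlying issue (the message and arguments are
hypothetical; the real change goes through Hob's logging machinery):

    # a log-record style message carries a format string plus its arguments;
    # showing only the format string loses information
    msg = "Unable to fetch %s (attempt %d)"
    args = ("http://example.com/src.tar.gz", 3)

    print(msg)          # incomplete: "Unable to fetch %s (attempt %d)"
    print(msg % args)   # full message, as the fix produces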

Partial fix for [YOCTO #2983].

(Bitbake rev: 6783538884adecd914909a9ab4ca73c27575f3ad)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:13:05 +01:00
Matthew McClintock
42d9652fb1 poky.conf: add distros that work correctly
All the builds pass for these distros as well

atom-pc, beagleboard, mpc8513e-rdb, routerstationpro

As well as other powerpc targets:

p1022ds, p4080ds, p5020ds, p5020ds-64b

Signed-off-by: Matthew McClintock <msm@freescale.com>
2012-10-01 18:12:48 +01:00
Paul Eggleton
4e09d164d9 bitbake: hob: ensure error message text is properly escaped
Our lack of markup escaping was causing invalid markup, leading to the
error dialog being blank. Use the glib markup escaping function provided
by PyGTK+ to do this properly and avoid the blank error dialogs.
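
A minimal sketch of the escaping step (I believe the helper is
glib.markup_escape_text in the static PyGTK bindings; a stdlib fallback is
shown in case that assumption does not hold):

    try:
        import glib
        escape = glib.markup_escape_text
    except ImportError:
        from xml.sax.saxutils import escape

    message = 'Fetch failed for <http://example.com> & friends'
    safe_markup = escape(message)   # "&lt;" / "&amp;" etc., so the markup stays valid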

Partial fix for [YOCTO #2983].

(Bitbake rev: 563ea5233a5ab1629c51e802d04280692f96c596)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:12:32 +01:00
Darren Hart
d567e770c3 bootimg: Use STAGING_KERNEL_DIR
bootimg.bbclass was using STAGING_DIR_HOST/kernel instead of
STAGING_KERNEL_DIR, resulting in build failures of live images.

| install: cannot stat `/usr/local/dev/yocto/fishriver-test/build/tmp/sysroots/fishriver/kernel/bzImage': No such file or directory

Replace it with STAGING_KERNEL_DIR.

(From OE-Core rev: 8f16811a8d51982a8b3d70e6087aef4a41926840)

(From OE-Core rev: 032bd9a856f9ca0b43dff272bd4f95481aa46597)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Tested-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
a5f5b1e80a texi2html: Fix perl location on recent distros
This fixes errors like:
| error: Failed dependencies:
|       /bin/perl is needed by texi2html-5.0-r1.i586

(From OE-Core rev: d4c27021ffc813732526ab9ae6969e5ae0bdf7e8)

(From OE-Core rev: f28dcaf565050d5c857c3d09164104410a2e4173)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
d47234d4c9 dbus: Ensure dbus-nativesdk doesn't RPROVIDE dbus-x11
dbus-nativesdk should not RPROVIDE dbus-x11, as this is incorrect and confuses
builds. This fixes the nativesdk case.

(From OE-Core rev: f4cc32585f9ac392460991b46b8cfa7a347a27e6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Radu Moisan
3cdd930eeb dbus: include dbus-launch in the main dbus package
Followed suggestions from Bugz 2261:

2) make the virtual/libx11 DEPENDS conditional based on the x11 distro feature.
This makes the build dependencies reflect the feature list.

3) remove dbus-x11, meaning that dbus-launch with its potential X11 dependency
is now back in dbus where it belongs.

4) make dbus provide dbus-x11, for compatibility.

Fixes [Yocto #2261]

(From OE-Core rev: 9bf6d834312581e8b8741fb9d1621e4c40de5687)

Signed-off-by: Radu Moisan <radu.moisan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Franklin S. Cooper Jr
cfd36177f7 u-boot: Use fw_env.config if available
* Add support for a board-specific fw_env.config file if available

(From OE-Core rev: 4bc2151d6bb500b0489bc00bce7574dc24f41b90)

Signed-off-by: Franklin S. Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Ross Burton
1ecc27c20c pulseaudio: remove ConsoleKit dependency
ConsoleKit is a runtime dependency for the ConsoleKit module, but there isn't a
build-time dependency.

(From OE-Core rev: ebfc81f57bbc60e958472d9a1257e6a19f60adbb)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Martin Jansa
82114edb4a pulseaudio: fix pulseaudio-server RDEPENDS
* module-cork-music-on-phone was renamed to module-role-cork
  http://cgit.freedesktop.org/pulseaudio/pulseaudio/commit/?id=3c5cc345472302b9511c19244b3eceb4a3674d8c

(From OE-Core rev: b102849a145ca6602ac8e499b1420672d290c11b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Khem Raj
a62752bb4f pulseaudio: Always enable NLS
When NLS is disabled, e.g. on uclibc, the build fails.
The actual problem is that the pulseaudio build system
should cater for this, but it does not.

(From OE-Core rev: b7d10637059b2352bcca45bc15b26d0dd056e78f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Constantin Musca
7ffad91e79 pulseaudio: upgrade to 2.1
(From OE-Core rev: 540093fd9f52c86e6803554e2796668227bb89b5)

Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Xin Ouyang
1e2d6bffc5 libatomics-ops: update to the latest version 7.2
All old patches are dropped because:

Merged into 7.2 by upstream:
* fedora/libatomic_ops-1.2-ppclwzfix.patch
* gentoo/libatomic_ops-1.2-mips.patch
* gentoo/sh4-atomic-ops.patch
* libatomics-ops_fix_for_x32.patch

Obsolete:
* doublefix.patch

(From OE-Core rev: 59afdbbddbacf5d9c668bb8f011c8f150421d498)

Signed-off-by: Xin Ouyang <Xin.Ouyang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
cb1c31939d libcanberra: upgrade to 0.29
(From OE-Core rev: 074c34617a361a589665774ac8d9060a4ef4ef82)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
bec76a3813 pulseaudio: upgrade to 2.0
(From OE-Core rev: 961872787ac2c2b18d4589967e68a60ab3a4cc86)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	meta/recipes-multimedia/pulseaudio/pulseaudio_2.0.bb
2012-09-28 16:53:16 +01:00
Denys Dmytriyenko
ead623b2a0 pulseaudio: fix typo in the patch name, pulseaudo -> pulseaudio
No PR bump is needed.

(From OE-Core rev: ac3edffa2586064ff480b89e80b608f14e566fa7)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Khem Raj
b973813796 libatomics-ops: Make it build for SH4
(From OE-Core rev: fc47820982aea41f2b0fdd4d87fb0242bf7346dd)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
8941d5aa3b pulseaudio: disable tcpwrap by default
This ensures that tcpwrapper usage is always disabled. It was previously
inconsistent because the build would test for libwrap and sometimes enable
it and sometimes not.

This ensures consistent build reproducibility.

(From OE-Core rev: 5b3d18d12fff156d4d360b779eb4ae78a480ce10)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
f6876adb6a bluez4: fix packaging issue after update
WARNING: QA Issue: bluez4: Files/directories were installed but not shipped
  /usr/share/dbus-1
  /usr/share/dbus-1/system-services
  /usr/share/dbus-1/system-services/org.bluez.service

(From OE-Core rev: 5c52472af73ed91970096eed3d216fdbc7ff42d2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
3c3614c562 bluez4: update to ver. 4.101
(From OE-Core rev: 6e6407e9a8c59cb51685c4b767b62eacb8dbf852)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
1c71304f81 gst-plugin-bluetooth: update to ver. 4.101
(From OE-Core rev: 6d1bbc26d27506e5dd5c32013ea5074c5c5a342d)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Dongxiao Xu
5be28549ae bluez-hcidump: upgrade to version 2.4
(From OE-Core rev: 4934e54821ed19fb98ea7691af6a2d3bcb1922f7)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Jonas Danielsson
bbe83b55cd bluez4: make alsa support conditional upon DISTRO_FEATURES
Do not enable alsa in bluez4 unless it's included in DISTRO_FEATURES.

(From OE-Core rev: f6297d648b1464719d1e1e42e99d473b69a13e56)

Signed-off-by: Jonas Danielsson <jonas.danielsson@lundinova.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
3c363a70aa dhcp: remove dependency of dev/staticdev packages on main package
The main package is empty and is not produced, which leaves the dev
and staticdev packages broken. Remove the dependencies (added in
bitbake.conf by default) to fix this.

(From OE-Core rev: 5380c65e819d82f783cb75aa21db7c73bb445189)

(From OE-Core rev: 02dc5c9b7b1f21c9f8d9a9299933fa88dc16c542)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
2fab410f2e classes/mirrors: remove bogus gnutls mirror
This mirror entry, which maps to itself plus a slash, if matched, puts the
fetcher into a circular loop until the stack space is exhausted. A patch
has been sent to fix this issue in BitBake, but we should remove the
bogus entry as well.

(Note that this entry does not actually trigger the issue with current
master because the gnutls recipe now uses GNU_MIRROR instead of
ftp.gnutls.org, thus the bogus mirror entry is not matched.)

(from OE-Core rev 0de1827a9601143b090f751ea702fdb65a936b77)

(From OE-Core rev: 31ec9690c37c3a57e557684cbf5e5a4069bd57b7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
4b8d430c1f sysvinit-inittab_2.88dsf.bb: only run serial checks at boot if we have items to check
Right now, we delay running the serial console checks until we boot up. This
causes issues for read-only file systems. So, if we have not configured any
serial ports to check via SERIAL_CONSOLES_CHECK, we can skip the check at boot.
This fixes any issues with read-only file systems and ipk packaging.

(From OE-Core rev: 2136030c1d240d9b8f123e3c8af5dacf66e86ab4)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Saul Wold
b0f05958fc kernel: Fix packaging issue
Remove /etc since it is empty: when building for a machine that does not
deliver any module config files, /etc is empty and is then warned about
as not being shipped, so we remove it.

This occurs in the routerstationpro with the following warning:
WARNING: For recipe linux-yocto, the following files/directories were installed but not shipped in any package:
WARNING:   /etc

(From OE-Core rev: 961498e3b4c4a93070bf278e67fc48c02333cd63)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
57cdce3108 valgrind_3.7.0.bb: fix missing leading space on _append
(From OE-Core rev: a175e09d1b0be85d8cbc58672485ec5ee5475ae2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Richard Purdie
241653a01b autotools.bbclass: Add functionality to force a clean of ${B} when reconfiguring (and ${S} != ${B})
Unfortunately, whilst rerunning configure and make against a project will mostly
work, there are situations where it does not do the right thing.

In particular, eglibc and gcc will fail out with errors where settings
do not match a previously built configuration. It could be argued they are
broken but the situation is what it is. There is the possibility of more subtle
errors too.

This patch adds removal of the build directory (${B}) when configure is being
rerun, the sstate checksum for do_configure has changed, and ${S} != ${B}.
We could simply use a stamp, but saving out the previous configuration checksum
adds some useful data at no real overhead.

If we find cases where we want to disable this behaviour, it can be turned off
with CONFIGURESTAMPFILE = "" in the recipe, or users could disable it globally.

[YOCTO #2774]
[YOCTO #2848]

This is particularly helpful for eglibc and gcc which use split builds by default and
are a particular source of reconfigure type problems.
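
A minimal sketch of the idea described above, with made-up function and
argument names (only CONFIGURESTAMPFILE and the ${S} != ${B} condition come
from this commit; this is not the class implementation):

    import os
    import shutil

    def maybe_wipe_builddir(s_dir, b_dir, stamp_file, current_checksum):
        # wipe ${B} before reconfiguring if the saved do_configure checksum changed
        if s_dir == b_dir or not stamp_file:
            return
        previous = None
        if os.path.exists(stamp_file):
            with open(stamp_file) as f:
                previous = f.read().strip()
        if previous is not None and previous != current_checksum:
            shutil.rmtree(b_dir, ignore_errors=True)
            os.makedirs(b_dir)
        with open(stamp_file, "w") as f:    # save the checksum for the next run
            f.write(current_checksum)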

(From OE-Core rev: f15f61af77cc4e52a037f509f8e49e1ea530cf35)

(From OE-Core rev: 14fc04e480aaf1cb5cd9d3a04a5b38d2fda115b1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Matthew McClintock
f6fb4890df eglibc_2.15: make patch only for Freescale machines
It's only Freescale machines that don't implement fsqrt; we don't want
this to affect others.

This patch was only added after the last release of denzil, so it's not
present in the release yet. Also, 2.15 is removed from master, so the
patch should only apply to the denzil branch.

(From OE-Core rev: c541f746253fdb6d11cd961c7dff1aca8c2d2703)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Zhenhua Luo
b4de44f425 valgrind: fix debug info reading error when do memcheck on ppc targets
following is the error message:
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/ld-2.13.so:
        --2263-- Can't make sense of .got section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /home/root/lzh:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_core-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_memcheck-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/libc-2.13.so:
        --2263-- Can't make sense of .data section mapping

(From OE-Core rev: 14626cc76210ed6fe40316a311f24147ed8de8be)

(From OE-Core rev: a76be502fbb9517c38cd716fa1f21a238b314162)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b32c50e016 python-pygtk: Upgrade to 2.24
This is needed for the build appliance and Hob also

(From OE-Core rev: a0abfd60e8cb78b40278eec85a8d0c722f8ef1e4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b5013e9e9e build-appliance: add zip-native, which is needed to build the final zip bundle
(From OE-Core rev: 8aeceab5d03fa3c88f0128ce1ac6bfde0d88e1b6)

(From OE-Core rev: 9ab2613327fcd4cf811afc52eff92903644e9b11)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
0cd0a3b475 kernel.bbclass: put perf .debug dir in -dbg package
This is needed to avoid the following QA error:

ERROR: QA Issue: non debug package contains .debug directory:
kernel-dev path

Patch proposed by Matthew McClintock <msm@freescale.com>

(From OE-Core rev: df879f191d1e86596444cb30a0a77a785958520c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
825c647f65 relocatable.bbclass: Account for case when ORIGIN is in RPATH
This patch was backported from OE-Core rev:
43600df0d4efc976a9451163dd334b4763937932

This fixes a case where the RPATH embedded in a program has one of
its paths already relative to $ORIGIN. We were losing that path
if such a path existed. This patch appends it to the new edited
rpath being created when we see it.

so RPATH like below

(RPATH) Library rpath:
[$ORIGIN/../lib/amd64/jli:$ORIGIN/../jre/lib/amd64/jli]

would end up being empty

but after this patch it is kept intact.
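
A simplified illustration of the rule this patch implements (the helper name
is made up):

    def merge_rpath(existing_rpath, new_entries):
        # keep any entries already relative to $ORIGIN instead of dropping them
        kept = [p for p in existing_rpath.split(":") if p.startswith("$ORIGIN")]
        return ":".join(kept + [e for e in new_entries if e not in kept])

    # merge_rpath("$ORIGIN/../lib/amd64/jli:/usr/lib", ["$ORIGIN/../lib"])
    # -> "$ORIGIN/../lib/amd64/jli:$ORIGIN/../lib"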

(From OE-Core rev: 9ebb327ae17d1a765fd1499546ccf9076bb93234)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Khem Raj
16b6bc4288 kernel.bbclass: Dont package kxgettext.o
kxgettext.o is generated when building ppc kernels
so we end up with packaging errors like

> ERROR: QA Issue: Architecture did not match (20 to 62) on
> /work/virtex5-poky-linux/linux-xilinx-2.6.38-r00/packages-split/kernel-dev/usr/src/kernel/scripts/kconfig/kxgettext.o

(From OE-Core rev: 21952b62e3fca6c9fe750db62ca2b0587912be8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
4ef357f033 kernel.bbclass: add non-santized kernel provides
If the kernel version string uses characters or symbols that
need to be sanitized for the package name, we can end up with a
mismatch between module requirements and what the kernel
provides.

The kernel version is pulled from utsrelease.h, which contains
the exact string that was passed to the kernel build, not
one that is sanitized. This can result in:

 echo "CONFIG_LOCALVERSION="\"MYVER+snapshot_standard\" >> ${B}/.config

 <build>

 % rpm -qp kernel-module-uvesafb-3.4-r0.qemux86.rpm --requires
update-modules
kernel-3.4.3-MYVER+snapshot_standard
 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-myver+snapshot-standard = 3.4-r0

At rootfs assembly time, we'll have a dependency issue with the kernel
providing the sanitized string and the modules requiring the utsrelease.h
string.

To not break existing use cases, we can add a second provides to the
kernel packaging with the unsanitized version string, allowing the
kernel module packaging to be unchanged.

   RPROVIDES_kernel-base += "kernel-${KERNEL_VERSION}"

 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-MYVER+snapshot_standard
kernel-3.4.3-myver+snapshot-standard = 3.4-r0

(From OE-Core rev: 0af1d1412add1baf3f6c1a5cfb2e4f92fb6a85dc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Darren Hart
91714ea8ae kernel: Add kernel headers to kernel-dev package
[YOCTO #1614]

Add the kernel headers to the kernel-dev package. This packages what was
already built and kept in sysroots for building modules with bitbake.
Making this available on the target requires removing some additional
host binaries.

Move the location to /usr/src/kernel

Before use on the target, the user will need to:

    # cd /usr/src/kernel
    # make scripts

This renders the kernel-misc recipe empty, so remove it.

As we use /usr/src/kernel in several places (and I missed one in the
previous version), add a KERNEL_SRC_DIR variable and use that throughout
the class to avoid update errors in the future.

Now that we package the kernel headers, drop the
kernel_package_preprocess function which removed them from PKGD.

All *-sdk image recipes include dev-pkgs, so the kernel-dev package will
be installed by default on all such images.

(From OE-Core rev: 0e3e88f9f87d1083ddd7dcaa526b3cd7a1cd53ff)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Tom Zanussi <tom.zanussi@intel.com>
CC: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
fbb459aab9 kernel: save $kerndir/tools and $kerndir/lib from pruning
The kernel source tree in the sysroot has all unnecessary source
code removed. The existing use case is to support module building
out of the sysroot, but as more tools are moved into the kernel
tree itself there are new use cases for the kernel sysroot source.

To avoid putting dependencies on the kernel, and to be able to
individually build and package these tools out of the source tree,
we can save $kerndir/tools and $kerndir/lib from being removed.
This enables tools like perf to be built out of the kernel source
in the sysroot, without significantly increasing the amount of
source in the sysroot.

(From OE-Core rev: 456f97c25488c2f6f6810b1a32781513cc719d8e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Martin Jansa
7b7f3cff44 kernel.bbclass: pass KERNEL_VERSION to depmod calls in postinst
* Without this, kernel upgrades where KERNEL_VERSION changes
  (e.g. 3.4.2 -> 3.4.3) generate the .dep files for the running 3.4.2 kernel,
  and after a reboot the user ends up without any modules loaded. To make it
  worse, after the reboot nothing is upgraded to trigger another kernel(-module)
  postinst that would generate the .dep files for the now-running 3.4.3.

(From OE-Core rev: 2a7cdf088e484bb123a72826a02c3169a418ed0a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
a512e68202 man: make man actually work by installing custom man.config
The default man.conf is named wrong and doesn't work. It references gtbl, while groff installs tbl and other things. This man.conf is imported from OE classic and runtime tested on angstrom.

Before:

root@beaglebone:~# man man
sh: /usr/bin/gtbl: No such file or directory
sh: line 0: echo: write error: Broken pipe
gunzip: write: Broken pipe
gunzip: error inflating
sh: line 0: echo: write error: Broken pipe
sh: line 0: echo: write error: Broken pipe

After:

root@beaglebone:~# man man
MAN(1)                        Manual pager utils                        MAN(1)

NAME
       man - an interface to the on-line reference manuals

SYNOPSIS
       man  [-C  file]  [-d]  [-D]  [--warnings[=warnings]]  [-R encoding] [-L
       locale] [-m system[,...]] [-M path] [-S list]  [-e  extension]  [-i|-I]
       [--regex|--wildcard]   [--names-only]  [-a]  [-u]  [--no-subpages]  [-P
       pager] [-r prompt] [-7] [-E encoding] [--no-hyphenation] [--no-justifi-
       cation]  [-p  string]  [-t]  [-T[device]]  [-H[browser]] [-X[dpi]] [-Z]
       [[section] page ...] ...
       man -k [apropos options] regexp ...
       man -K [-w|-W] [-S list] [-i|-I] [--regex] [section] term ...
       man -f [whatis options] page ...
       man -l [-C file] [-d] [-D] [--warnings[=warnings]]  [-R  encoding]  [-L
       locale]  [-P  pager]  [-r  prompt]  [-7] [-E encoding] [-p string] [-t]
       [-T[device]] [-H[browser]] [-X[dpi]] [-Z] file ...
       man -w|-W [-C file] [-d] [-D] page ...
       man -c [-C file] [-d] [-D] page ...
       man [-hV]

Check for config name:

root@beaglebone:~# rm /etc/man.config
root@beaglebone:~# man man
Warning: cannot open configuration file /etc/man.config
No manual entry for man

As a bonus a bunch of references to the buildhost get removed from the config file.
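
For illustration, the relevant entries end up along these lines (exact paths
and options are assumptions, not quotes from the shipped file):

    TBL      /usr/bin/tbl
    NROFF    /usr/bin/nroff -Tlatin1 -mandoc
    PAGER    /usr/bin/less -is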

(From OE-Core rev: 13d82ecd6b25ff4c34b3639e10113d7ebb33dc88)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
ef789c1327 man: fix RDEPENDS and reformat recipe
(From OE-Core rev: f9aba0793123dafffc305c028f10e8f595c5a4ee)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Richard Purdie
2011408939 opkg-utils: Update to version with python 2.6 fix
(From OE-Core rev: ba2058aa74eb6cd263bd19a8338eeeced734f55c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
9520635a00 opkg-utils: bump SRCREV
* there are 2 small fixes
  python-2.6 compatibility
  missing C option for opkg-build

(From OE-Core rev: 825a992af39d4eb75f105241e4cd94624b1dea43)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
a259fdb296 opkg-utils: bump SRCREV for Packages cache fix and other fixes
(From OE-Core rev: ce5b46980f35097bd5fcc8195c5d5be1b980c870)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Valentin Popa
55997cbf3d xz: updated to version 5.1.1alpha
The licenses are the same, only some white spaces added/removed.

(From OE-Core rev: dbfc3d05e49b46ec033623d66e64cf781df4f632)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
988318927a package.bbclass: fix TypeError in runstrip
* some packages have .ko files which are not ELF; without this change the strip
  step fails with a TypeError, while with this change only runstrip fails and
  reports where:
  ERROR: runstrip: ''arm-oe-linux-gnueabi-strip'  '/OE/shr-core/tmp-eglibc/work/armv4t-oe-linux-gnueabi/emacs-23.4-r0/package/usr/share/emacs/23.4/etc/tutorials/TUTORIAL.ko'' strip command failed

(From OE-Core rev: 12e40ca7317289fec126d9f30b28a717fe72d274)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Richard Purdie
bf9a5226fc distutils/setuptools: Fix files layout and unbreak builds
The last two distutils changes progressively broke the builds. Firstly they
moved things from the site_packages directory to being higher up the tree
which introduced package QA warnings as a side effect. Secondly, it interacts
badly with setuptools which passes in --root=${D} itself.

This patch restores the original directory layout, hence fixing the QA
warnings and also passes extra options to setuptools to deal with the
--root option it passes.

(From OE-Core rev: bed18d5df7915e4127a538be9c7550e185c8c850)

(From OE-Core rev: f60a04ccbf9ed614b5b5b9b02c8a24980bf17d3d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
d72eea5a5d distutils.bbclass: change order of args to install step
This lets the user override the install-lib argument again if it needs
to be something else; otherwise things like python-setuptools
won't be able to modify the install-lib dir
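
As an illustration, a recipe could then override the location with something
like the following (the variable name is an assumption about how the install
options are carried, not a quote from the class):

    DISTUTILS_INSTALL_ARGS = "--root=${D} --prefix=${prefix} \
        --install-lib=${PYTHON_SITEPACKAGES_DIR}"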

This fixes a new issue exposed by my previous distutils patch
that fixed the default install location for python modules

(From OE-Core rev: 0fd7b7dd361e59c687366480cd9f89c81c2e5bce)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
a99b03e3ed distutils.bbclass: fix libdir for 64-bit python modules built with distutils
Without this, some modules will be installed in /usr/lib/python2.6/
instead of ${libdir}/python2.6

(From OE-Core rev: bc6bd774aa8a3e085e9cabcefb11c3fc537139d5)

(From OE-Core rev: fc513eda1cfccc583f49847c3c04b5c781585e15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Saul Wold
1773d1da62 build-appliance-image: Add vmx* files and build zip file
This commit adds the vmx* files needed to set up a VMware image;
it also packages the vmdk along with the vmx files.

(From OE-Core rev: ed0ffc12ed6f98471dced1ce2020af4e5c99f2c7)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Saul Wold
d8ccc44114 build-appliance-image: Update SRCREV to Denzil 1.2.1
(From OE-Core rev: 242fb49ac416824e53c58a8a0cb9bb9d19a72ec4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Valentin Popa
6f3e6c75de build-appliance-image: rename from self-hosted-image
(-) renamed self-hosted-image to build-appliance-image
(-) replaced build-appliance-image description

[YOCTO #2636]

(From OE-Core rev: 04096f31778886479dac479132bded57e717653e)

(From OE-Core rev: bf133c331029ac588b27173145db5be5f6ee1ef5)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Franklin S Cooper Jr
89fa2c1fc6 psplash: LIC_CHKSUM Tweak
* Change the license checksum to use the lines in psplash.h that contain
  license information instead of doing a checksum on the entire file.

(From OE-Core rev: 2c80eb5b9c103087774f032be01f5cf6309464d7)

Signed-off-by: Franklin S Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Kang Kai
e22b4411ae ltp_20120104: add rdepends
[Yocto #2973]

Add an rdepends on libaio to fix this defect.

(From OE-Core rev: 79d12314729649add741509a46b7770e22dd23ad)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
33921847a2 kernel-yocto: set master branch to a defined SRCREV
To support custom repositories that set a SRCREV and that only have
a single master branch, do_validate_branches needs a special case
for 'master'. We can't delete and recreate the branch, since you
cannot delete the current branch; instead we must reset the branch
to the proper SRCREV.

(From OE-Core rev: 3a8dc0a01d2756bb8f51afccad772fca1dc48af3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
5427f5d70f linux-yocto: allow do_validate_branches to handle all branches
Branch validation will not restrict a branch that doesn't exist
in the tree at the time of validation (since you can't reset a
SRCREV on a non-existent branch). This restriction can be removed
by looking for all branches that contain the specified SRCREV
and forcing them to that value.

(From OE-Core rev: 790f6441851fd4b2b84129340c438092f058135b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Jeff Polk
4624b5eefb recipes-core/eglibc-2.13: Patch for locale-base-tt-ru packaging
The eglibc-2.13 build can fail because locale-base-tt-ru is in
PACKAGES twice. This is because the SUPPORTED list and the i18n
directories are out of sync with each other; the SUPPORTED list
expects a directory named "tt_RU.UTF8", but the directory is
actually named "tt_RU", and likewise for the @iqtelif variants.

(From OE-Core rev: 280886bb865efde6bda327a1c821220d64c893ba)

Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Matthew McClintock
e7376bb459 eglibc/gcc: add patches to fix eglibc 2.15 build
This drops one patch against eglibc 2.15 and adds two new ones;
it also adds a gcc patch. We use all of these internally and they
are tested quite well.

(From OE-Core rev: a7014c446b0d2f3b40c4b058c64bb61c8720d799)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Marcin Juszkiewicz
c2bbe5f5d3 libpam: disable NIS to not link with libtirpc when it is available
I was checking ways to make incremental builds faster so I started using
sstate-cache and SSTATE_MIRRORS. But this gave me a nasty bug:

| Collected errors:
|  * satisfy_dependencies_for: Cannot satisfy the following dependencies
for php-cgi:
|  *    libtirpc1 (>= 0.2.2) *
|  * opkg_install_cmd: Cannot install package php-cgi.

I checked details:

In my previous build libtirpc got built before libpam, so libpam found it
and linked against it. As a result packages depend on libtirpc1, but as there
is no such build dependency the sstate handling code did not use the libtirpc copy...
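
A sketch of the kind of change implied, assuming Linux-PAM's standard configure
switch is what gets used:

    EXTRA_OECONF += "--disable-nis"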

(From OE-Core rev: e629bdcd1bcb51f2d2101fb53daeac0bd29ab637)

(From OE-Core rev: 8ff92269cd63e153892d129e6e2255812a454a99)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
758677e212 glib.inc: disable selinux for native builds
In addition to dbus, we also need to disable selinux for glib,
otherwise we will get the same link error

(Note: Upstream master has disabled selinux AFAICT)
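
A sketch of how this is typically expressed for the native variant (override
and option names assumed, not quoted from the recipe):

    EXTRA_OECONF_append_virtclass-native = " --disable-selinux"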

(From OE-Core rev: 318bc896b1bd5399807a417865b8e088d9d9eb15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
b96bd8465a dbus.inc: disable selinux for native builds
(Note: Upstream master has disabled selinux for this AFAICT)

Fixes issues such as:

| /usr/bin/ld: cannot find -lselinux
| collect2: ld returned 1 exit status
| make[3]: *** [libdbus-glib-1.la] Error 1

(From OE-Core rev: 7ee10f25430421dc6e9552ffe15a6a5acbd4cb51)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:02 +01:00
Tom Zanussi
65ffa73950 yocto-bsp: use base branches for qemu 'newbranch' case
The branch updating for the [YOCTO #2587] fix inadvertently changed
some of the qemu branch names; fix them.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
a76fc366ce yocto-bsp: generate default properties even if json specified
Users seem to want to specify incomplete property sets when using json
input.  Allow this by generating default properties before the
user-specified properties are applied; the user will then get the
defaults for any unspecified values, and avoid cryptic backtraces.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
759237f721 yocto-bsp: use emgd 1.10 for i386 template
Make i386 template use emgd 1.10 for denzil, along with associated
changes.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
3e632506c2 yocto-bsp: add i586 option for i386
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
1b9306705c yocto-bsp: add some standard policy
Add some useful default options to the i386 and x86_64 templates.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
65706548f6 yocto-bsp: remove 'branch' statements in .scc if reusing branch
If reusing a branch (need_new_branch == 'n') we don't need to branch
in the .scc, so make it conditional on need_new_branch.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
bc6b04fe22 yocto-bsp: use rstrip() for assignment lines
strip() isn't necessary and causes unintended formatting changes in
the output; rstrip() removes the trailing newlines as intended while
leaving indenting whitespace intact.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
8a51a8afe8 yocto-bsp: use standard branch mapping in bsp templates
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
2517376ee8 yocto-bsp: add standard branch mapping
Add a mechanism to distinguish common-pc variants of standard
branches.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
9078e985ac yocto-bsp: use branches_base
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
b123553c1a yocto-bsp: allow branch display filtering
Add a "branches_base" property that can be used to allow only matching
branches to be returned from all_branches().

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
717704d12c yocto-bsp: update default branch names
Make sure the default branch names match branch names found in the
kernel branch listing.

Fixes [YOCTO #2587].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
e5c5e20a5e yocto-bsp: strip '/base' from kernel branches in templates
For new branches, users can specify /base branches, but we don't want
the '/base' in the resultant branch name, so remove it.

Fixes [YOCTO #2693].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Tom Zanussi
96631f080b yocto-bsp: add new strip_base() function
Add a strip_base() function to remove '/base' from the branch names
presented to the user.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Paul Eggleton
78d15a8015 cooker: fix UnboundLocalError when exception occurs during parsing
Fix a recent regression where we see the following additional error
after an error occurs during parsing:

ERROR: Command execution failed: Traceback (most recent call last):
  File "/home/paul/poky/poky/bitbake/lib/bb/command.py", line 84, in runAsyncCommand
    self.cooker.updateCache()
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1202, in updateCache
    if not self.parser.parse_next():
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1672, in parse_next
    self.virtuals += len(result)
UnboundLocalError: local variable 'result' referenced before assignment

(Bitbake rev: 1ae0181ba49ccfcb2d889de5dd1d8912b9e49157)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:20 +01:00
Matthew McClintock
ffff5fa2d7 bitbake/fetch2: remove references to ChecksumError class
From: Matthew McClintock <msm@freescale.com>

When merging fetch2 improvements from master into denzil, there
were too many dependencies to pull in the entire ChecksumError
class, so this patch removes references to ChecksumError for
compatibility.

Fixes this issue:

NameError: global name 'ChecksumError' is not defined

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2012-08-21 11:35:08 +01:00
Richard Purdie
119b7ff164 bitbake: fetch2: Handle errors occurring when building mirror urls
When we build the mirror urls, it's possible an error will occur. If it
does, it should just mean we don't attempt this mirror url. The current
code actually aborts *all* the mirrors, not just the failed url.

This patch catches and logs the exception allowing things to continue.

(Bitbake rev: c35cbd1a1403865cf4f59ec88e1881669868103c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
1162729c79 bitbake: fetch2: Improve mirror looping to consider more cases
Currently we only consider one pass through the mirror list. This doesn't
catch cases where, for example, you might want to set up a mirror of a mirror
and allow multiple redirections. There is no reason we can't support this,
and the patch now loops through the list recursively.

As a safeguard, it will stop if any duplicate urls are found, hence
avoiding circular dependency looping.
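
As an illustration of the multi-hop case this enables (the URLs are
placeholders, not real mirrors):

    PREMIRRORS_append = "\
        git://.*/.*                       http://mirror-one.example.com/sources/ \n \
        http://mirror-one.example.com/.*  http://mirror-two.example.com/sources/ \n"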

(From Poky rev: 0ec0a4412865e54495c07beea1ced8355da58073)

(Bitbake rev: e585730e931e6abdb15ba8a3849c5fd22845b891)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
4591963649 bitbake: fetch2: Explicitly check for mirror tarballs in mirror handling code
With support for things like git:// -> git:// urls, we need to be
more explicit about the mirrortarball check since we need to fall
through to the following code in other cases.

(From Poky rev: 28e858cd6f7509468ef3e527a86820b9e06044db)

(Bitbake rev: a2459f5ca2f517964287f9a7c666a6856434e631)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
3f30bf4eab bitbake: fetch2: Split try_mirrors into two parts
There are no functionality changes in this change

(From Poky rev: d222ebb7c75d74fde4fd04ea6feb27e10a862bae)

(Bitbake rev: db62e109cc36380ff8b8918628c9dea14ac9afbc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	bitbake/lib/bb/fetch2/__init__.py

Signed-off-by: Khem Raj <kraj@juniper.net>
2012-08-21 11:35:07 +01:00
Richard Purdie
ea032bb2c3 bitbake: fetch2: Ensure when downloading we are consistently in the same directory
This assists with build reproducibility. It also avoids errors if cwd
happens not to exist when we call into the fetcher. That situation
would be unusual but I hit it with the unit tests.

(From Poky rev: 86517af9e066c2da1d580fa66b7c7f0340f3403e)

(Bitbake rev: b886c6c15a58643e06ca5ad7a3ff1f7766e4f48c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
4bfb54e0ba bitbake: fetch2: Only cache data if fn is set, it's pointless caching it against a None value
(From Poky rev: c2df30bf6d1f8c263a38c45866936c1bf496ece5)

(Bitbake rev: f4b59cc6e1c3ddc168a1678ce39ff402ea1ff4cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
729b396b21 bitbake: fetch2: Fix error handling in uri_replace()
(From Poky rev: 1bfba28a583cb167f60e05ecdf34d0786dc1eec5)

(Bitbake rev: aa7467a764ddcbc7d65af99e88cf093b6ec6d24e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
bdb918f808 bitbake: fetch2/__init__: Make it clearer when uri_replace doesn't return a match
(From Poky rev: dc9976331c5cbb0983adb54f6deb97b9203bacbc)

(Bitbake rev: eb96609864dec95a516e6e687dd6a2f31d523acf)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Zhenhua Luo
e4f8c7f693 valgrind: fix default.supp missing issue
When running valgrind, the following error appears:
    ==2254== FATAL: can't open suppressions file "/usr/lib/valgrind/default.supp"

(From OE-Core rev: 0b3261d513cdad80174a9b9e804981c50bcb7ca2)

(From OE-Core rev: 95756cfbb7a9348b23cb46a49a5509e57e973faf)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:33:31 +01:00
Paul Eggleton
4539cf1936 classes/license: fix manifest to work with deb
Prepend the license manifest creation call to ROOTFS_POSTPROCESS_COMMAND
instead of appending to ROOTFS_POSTINSTALL_COMMAND. The latter is not
implemented for the deb backend (and probably ought to just be removed
completely), and by using _prepend we can still ensure it occurs before
package info is removed (and before buildhistory in case it is needed
there in future).

(From OE-Core rev: 6ffd958ff2f7f1d07ab9da5ca8db1727dd074980)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
6631752c25 scripts/buildhistory-diff: add GitPython version check
Display an error if the user does not have at least version 0.3.1 of
GitPython installed.

(From OE-Core rev: 07b9c3bc67439d47627fe256796465520b533753)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
75b7901f22 buildhistory_analysis: fix error when version specifier missing
Passing None to split_versions() will raise an exception, so check that
the version is specified before passing it in.

(From OE-Core rev: a530aee6d9b2b63ab5fa780b1761eac759e8c833)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
c520132cb8 classes/rootfs_*: fix splitting package dependency strings
If a + character appears in a version specification within the list of
package dependencies, the version will not be removed from the list in
list_package_depends/recommends, leading to garbage appearing in the
dependency graphs generated by buildhistory. To avoid any future
problems due to unusual characters appearing in versions, change the
regex to match almost any character.
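
A rough sketch of the kind of filtering involved (a guess at the form, not the
actual patch):

    # strip "(>= 1.2.3+gitAUTOINC+abcdef)" style version constraints,
    # whatever characters the version happens to contain
    echo "$depends" | sed -e 's/([^(]*)//g'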

Fixes [YOCTO #2451].

(From OE-Core rev: d592c3a26c630d5f3bfba4804a93766447bf72c9)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Saul Wold
a8d4eb449f foomatic: fix perl path for target
This problem appears on F17 when configure finds /bin/perl. Since the beh
script is a target-side script, we need to set PERL in do_configure_prepend
in order for the correct perl to be used.

(From OE-Core rev: f189ee78bed0920cfd33689ebb9aad45fded2c4d)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
4253c34000 shadow: use 'users' group by default
The rootfs has the 'users' group at gid 100. Without this fix, new users
would be assigned to a non-existent group, and if a group with gid 1000
were created later it would own all files of the users created.

(From OE-Core rev: c2bd2936907ea8b776d58e8cc58a8359a6e7e9b9)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
1a70ddc4e8 shadow-native: use 'users' group by default
The rootfs has the 'users' group at gid 100. Without this fix, new users
would be assigned to a non-existent group, and if a group with gid 1000
were created later it would own all files of the users created.

(From OE-Core rev: 42e9f988bc691ca763d5eda3537d6281b7902794)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Matthew McClintock
6352d0a9a1 u-boot.inc: update linker arguments to pass --sysroot arg
If we are building from sstate-cache it's possible to be building
from another folder on another machine, therefore the linker requires
that a proper --sysroot is passed to it so it can find things like
libgcc.a and avoid errors such as:

| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o hello_world.o hello_world.c -c
| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o stubs.o stubs.c -c
| arm-poky-linux-gnueabi-ld  -r -o libstubs.o  stubs.o
| arm-poky-linux-gnueabi-ld -g -Ttext 0x80300000 \
| 			-o hello_world -e hello_world hello_world.o libstubs.o \
| 			-L. -lgcc
| arm-poky-linux-gnueabi-ld: cannot find -lgcc
| make[1]: *** [hello_world] Error 1
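
One possible shape of such a change, assuming TOOLCHAIN_OPTIONS is what carries
the --sysroot flag (as it does in the OE toolchain setup); this is a sketch,
not the actual patch:

    EXTRA_OEMAKE = 'CROSS_COMPILE=${TARGET_PREFIX} \
        CC="${TARGET_PREFIX}gcc ${TOOLCHAIN_OPTIONS}" \
        LD="${TARGET_PREFIX}ld ${TOOLCHAIN_OPTIONS}"'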

(From OE-Core rev: ad78441045183277a7e77341f4af6d9d65a4a3c8)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Nitin A Kamble
0a6f9a5f5a tcl: fix target recipe build issue on older distros
The builddir is put in front of LD_LIBRARY_PATH, causing the target
library to be dynamically linked with the native tclsh.

Fix this behavior to cross-build tcl correctly.

This issue got exposed when eglibc-2.15 was configured for the target.

(From OE-Core rev: 8e25fe0ecc3d6fe2d5456b525c5014554bc70cfe)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Richard Purdie
7c5318e1a0 utils.bbclass: add helper function to add all multilib variants of a specific package
This is useful for the scenario where we want to add 'gcc' to
the root file system for all multilib variants
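
A hypothetical usage sketch (the helper name is an assumption, illustrating the
intent rather than quoting the class):

    IMAGE_INSTALL_append = " ${@multilib_pkg_extend(d, 'gcc')}"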

(From OE-Core rev: e82c2f0b91611f3e755985bb8d1608ca5792e825)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Enrico Scholz
88927f4f1f libtool: fixed parallel build related race
While building libtool, the libtool script itself will be regenerated
because OE modifies a dependency[1]. With -jX, this operation (removal,
creation of a non-executable file, 'chmod a+x') can happen at a time when
the script is going to be executed.  This can cause errors like:

| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| ...
| /bin/sh ./config.status libtool
| ...
| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| /bin/sh: ./arm-linux-gnueabi-libtool: Permission denied
| make[2]: *** [libltdl/libltdl_libltdl_la-lt__alloc.lo] Error 126

I am not sure whether the custom do_compile_prepend() is still needed.
For now only the issue above will be fixed by executing ./config.status
yet again.

[1] see 648290d5bf

(From OE-Core rev: 15204a6cbcdbbb84e02da05b1fb15644fe7df332)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Ting Liu
68888987ce image_types.bbclass: redefine EXTRA_IMAGECMD_jffs2 to leverage siteinfo
(From OE-Core rev: 3bff2398cd2d730111faa182d16356e189a36353)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Matthew McClintock
3129f4ff0e task-core-tools-testapps.bb: kexec-tools does not work on e5500-64b parts
This prevents kexec from building for this part since it does not work

(From OE-Core rev: d9bf008b36e8b2211624705d8ee4e90d94463dd5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Matthew McClintock
09ae715bb1 gcc-package-runtime.inc: Fix QA warning
> ERROR: QA Issue: gcc-runtime: Files/directories were installed but not shipped
>   /usr/lib/libgomp.so.1.0.0
>   /usr/lib/libgomp.so.1

(From OE-Core rev: 4ec107f822453bd9468009d7a2124a3d592610b5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Leon Woestenberg
c26cfe7738 kernel.bbclass: Copy bounds.h only if it exists, needed for 2.6.x.
Linux 2.6.x kernels did not (all) have the bounds.h file, so copy it
only if it exists.
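
A minimal sketch of the conditional copy (the path is an assumption and varies
between kernel versions):

    if [ -e include/generated/bounds.h ]; then
        cp include/generated/bounds.h $kerneldir/include/generated/bounds.h
    fi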

(See OE-Core 02ac0d1b65389e1779d5f95047f761d7a82ef7a4)

(From OE-Core rev: 6e9cfa4ba34d8899dfb271818ef30730de8353fa)

Signed-off-by: Leon Woestenberg <leon@sidebranch.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Khem Raj
81aac104fa xserver-xorg: Fix build on powerpc
(From OE-Core rev: 5e141b2a7331f7ee8d9eedf02c4fc2ae5ed8d5ec)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
bc754beb3a curl: Use gnutls for target and openssl for native
Since gnutls is available on the target, use it; we do not build gnutls for
the native side as it adds too many dependencies, so use openssl there.
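
A sketch of how such a split is commonly expressed (the configure switch names
come from curl; the override form is an assumption, not the actual recipe):

    EXTRA_OECONF += "--with-gnutls"
    EXTRA_OECONF_virtclass-native = "--with-ssl --without-gnutls"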

(From OE-Core rev: 0dc6543a2d898d381c287d6b7becfc8fb8f279c0)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
e20af93811 curl: enable ssl support
This patch enables ssl support for curl to allow git to clone from
https / ssl sites. We do not want to enable gnutls for native or
nativesdk, as it adds additional dependencies and increases build time.

[YOCTO #2532]

(From OE-Core rev: 9f7e9fb6cd08b3048e97dd1011f0510416beb103)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Martin Donnelly
235667966c augeas: Add libxml2 dependency
This patch fixes the following Augeas configure error.

| checking for LIBXML... no
| configure: error: Package requirements (libxml-2.0) were not met:
|
| No package 'libxml-2.0' found
|
| Consider adjusting the PKG_CONFIG_PATH environment variable if you
| installed software in a non-standard prefix.
|
| Alternatively, you may set the environment variables LIBXML_CFLAGS
| and LIBXML_LIBS to avoid the need to call pkg-config.
| See the pkg-config man page for more details.
| ERROR: oe_runconf failed

(From OE-Core rev: 1d55679821003ac4d652b08f2eebab1636505042)

Signed-off-by: Martin Donnelly <martin.donnelly@ge.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Zhenhua Luo
258bbaa1d2 task-core-sdk.bb: add libgomp and libgomp-dev by RECOMMENDS
(From OE-Core rev: e072dc29c6f4dd3c429e2f0e07da3c29bda36023)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Ting Liu
f44faa8757 lsof: define linux C library type when using eglibc
lsof tries to compile a temp c source file and execute the binary to
determine linux C library type (file Configure, line 2689-2717).
It is impractical for cross-compilation and may cause build issues on
some distros since it depends on host settings.

Fix below error when building for 64bit target on 64bit host:
[...]
| dsock.c:481:44: error: 'TCP_LISTEN' undeclared (first use in this function)
| dsock.c:482:45: error: 'TCP_CLOSING' undeclared (first use in this function)
[...]
| make: *** [dsock.o] Error 1

The actual issue exists in do_configure:
[...]
Testing C library type with cc ... done
Cannot determine C library type; assuming it is not glibc.

This is in turn caused by a missing 'gnu/stubs-32.h' when compiling
the temp c source file on the host:
[...]
fatal error: gnu/stubs-32.h: No such file or directory compilation terminated.

The file gnu/stubs-32.h is provided by 32-bit glibc.
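
A sketch of the kind of preset this implies; treat the variable and value as an
assumption about lsof's Configure rather than a quote from the recipe:

    # tell Configure we are on (e)glibc 2.x instead of letting it probe the host
    export LINUX_CLIB="-DGLIBCV=2"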

(From OE-Core rev: 8c38bc022de209187f31952ae02313dd3104f4c6)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
3cf5fc2272 sysvinit-inittab_2.88dsf.bb: Allow multiple serial port consoles to be defined
Set SERIAL_CONSOLES if you want to define multiple serial consoles. If you
need to check for the presence of the serial consoles, you can also define
SERIAL_CONSOLES_CHECK to determine whether they are present when you boot. This
will prevent the error messages that pop up when a serial port is not present.

SERIAL_CONSOLES = "115200;ttyS0 115200;ttyS1 115200;ttyEHV0"
SERIAL_CONSOLES_CHECK = "${SERIAL_CONSOLES}"

The above lines in machine.conf or elsewhere will have the effect of enabling
these serial consoles and removing any that are not present at boot.

(From OE-Core rev: 2e7dddfce4a40a56f671116a2001b13c57667c70)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
ffd554d2ff libgomp: add libgomp (openmp) library, and build for powerpc targets by default
(From OE-Core rev: d58668c6770f519199192c7e3817fbc7d6576af3)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
4899d07aa7 gcc: gcc-cross-canadian: use correct location for libraries for powerpc64
This fixes the issue where gcc invokes the linker with an incorrect -L
library location and gives up because it can't find libraries. It was
looking in a /lib folder instead of /lib64

(From OE-Core rev: aa010039a38188f1b1b38a978287d1597138b8b9)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
0b71ac7a99 gcc-configure-common.inc: use --with-long-double-128 on powerpc to comply with ABI
(From OE-Core rev: a2e00d2cae8e4b58fc3b9fc7853da519a615aa31)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
bdebedb6cf openjade-native_1.3.2.bb: fix typo and change the deps exclusion to correct var
(From OE-Core rev: 6ef3b77ba8baddb5748f2ee27d39a5a0d32e3bfb)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
b2b365e8f0 dtc.inc: fix for libdir == /usr/lib64
On 64bit systems dtc will still install libraries in /usr/lib
unless we have this override

(From OE-Core rev: 679e04a33b6e4569e7a95758ccb10d50931f5d67)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
738f163797 packagedata.py: Fix get_subpkgedata_fn for multilib
This happens when trying to add libgcc-dev as a multilib package
(e.g. IMAGE_INSTALL_append = " lib32-libgcc-dev")

| Processing task-core-boot...
| Processing fman-ucode...
| Processing dosfstools...
| Processing lib32-libgcc-dev...
| Unable to find package lib32-libgcc-dev (libgcc-dev)!
NOTE: package fsl-image-full-1.0-r1.1.3.6: task do_rootfs: Failed

RPM (or bitbake?) looks in tmp/pkgdata; however, some of these file
paths are munged for the multilib scenario:

$ find tmp/pkgdata/ | grep libgcc-dev$
tmp/pkgdata/ppce5500-fsl-linux/runtime/lib32-libgcc-dev
tmp/pkgdata/ppc64e5500-fsl-linux/runtime/libgcc-dev

This patch fixes where we look for these files so they can be found and
properly installed for the multilib root file system

(From OE-Core rev: 24e8399aeccf4b0742acd986bb506ff6f388b4a2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
1ddd5378ec qemu-0.15.1: add patch to fix compilation problems on powerpc
ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
ERROR: Logfile of failure stored in: /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447
Log data follows:
| DEBUG: SITE files ['endian-big', 'bit-64', 'powerpc-common', 'common-linux', 'common-glibc', 'powerpc-linux', 'powerpc64-linux', 'common']
| ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
| NOTE: make -j 24
|   LINK  ppc-linux-user/qemu-ppc
| /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/sysroots/x86_64-linux/usr/libexec/ppc64e5500-fsl-linux/gcc/powerpc64-fsl-linux/4.6.4/ld:/opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/qemu-0.15.1/ppc64.ld:84: syntax error
| collect2: ld returned 1 exit status
| make[1]: *** [qemu-ppc] Error 1
| make: *** [subdir-ppc-linux-user] Error 2
| make: *** Waiting for unfinished jobs....
| ERROR: oe_runmake failed

(From OE-Core rev: 2a1f7a8be5170cdb85f9faae81d94ac2ca8b6566)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
ebbd09e21d libxml-parser-perl_2.41.bb: fix MakeMaker issues with using wrong CC/LD/etc
MakeMaker has a bug where it does not propagate CC/LD/etc. information
down to the subprojects it generates Makefiles for... this recipe has an
Expat subproject which has issues building if we are using sstate-cache:
it will reference the old sysroots and be unable to build properly.
There is an upstream MakeMaker bug for this issue, but we can work around
it by fixing up the Makefiles for now

See:
https://rt.cpan.org/Public/Bug/Display.html?id=28632

(From OE-Core rev: e1609123a6ca6aef18e48afe0ce61325da910fc1)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Zhenhua Luo
09c83947da linux-dtb: add multi-dtb build support
including the following enhancements:
    * support multi-dtb build
    * skip dtb build and install when KERNEL_DEVICETREE is empty
    * print a warning message when the specified dts file is not available

(From OE-Core rev: 59b149e466c9fc81ec94a740e805339db97fc3ac)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Saul Wold
8c6455b6db xinetd: Update to 2.3.15
(From OE-Core rev: 48f93e0ade1c534b9af2b84874f9b17e3107c724)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Scott Rifenbark
73cdebf60d documentation/dev-manual/dev-manual-kernel-appendix.xml: Add note about conflict
Added a note to the part of the example where you bitbake the kernel
after turning off CONFIG_SMP.  The warnings you get can cause confusion.
The note explains they are normal.

(From yocto-docs rev: 08ed090f0b8b6970832242a52827ae2957918cf3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Scott Rifenbark
6b06a4fa1b documentation/dev-manual/dev-manual-kernel-appendix.xml: Added branch step
The example did not specify to switch to the "denzil" branch after
establishing the local repo of poky-extras.  The example will not
work without this step.

(From yocto-docs rev: 69b99a77f1f8247c217e77af89ecec3982adc264)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Ross Burton
2dcbb48df9 gconf.bbclass: don't register schemas in the install stage
Previously this was installing schemas in the sysroot, which is wrong for native
packages as nothing should touch the sysroot directly, and even more wrong for
non-native packages as the sysroot is irrelevant.

So, export the environment variable that stops the registration happening at
install time. The postinst script will handle the non-native case, and for the
sysroot I've opened #2648.  This isn't a massive problem as nothing to my
knowledge actually installs schemas to the sysroot.
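
The variable in question is GConf's standard install-time switch; exporting it
for the install step is enough (a sketch of the idea, not the exact class
change):

    export GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL = "1"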

[YOCTO #2245]

(From OE-Core rev: 741146fa90f28f7ce8d82ee7f7e254872d519724)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:49:34 +01:00
Elizabeth Flanagan
fb2335fa2b distro.conf: Flipping for pending point release
7.0.1/1.2.1 release. Flipping distro.conf values

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-06-21 14:22:32 -07:00
Richard Purdie
e972d78009 poky-tiny: eliminate mtrace rdepends
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:52 +01:00
Scott Rifenbark
91d6344765 documentation: Updated Manual revision history table
Using July 2012 for the release date.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:39 +01:00
Anders Darander
a707b3269c qt4-embedded: fix QT_ARCH usage in QT_CONFIG_FLAGS
After the change to shell style functions (from python style), the
ability to use oe_filter_out on QT_CONFIG_FLAGS got broken.

This patch solves that by referring to QT_ARCH in a more correct way.

(From OE-Core rev: 8394dda5f12157c88005a788cd35421f498c9b82)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:35 +01:00
Bruce Ashfield
0d3748ca5d linux-yocto/3.0: update to v3.0.32
Updating the 3.0 kernel SRCREVs to integrate the v3.0.32 -stable
release.

(From OE-Core rev: 6d97c94d25713b47417e184308ab43947c7f243d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5f2b526109 linux-yocto/3.2: update to v3.2.18
Updating the 3.2 kernel SRCREVs to pickup the -stable update
to v3.2.18.

(From OE-Core rev: 0308f91b17b052902a01c98afdd5619cd0c617e5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
6bb0fdda40 linux-yocto: policy cleanups
Updating the meta SRCREVs to pickup configuration policy cleanups:

  49f931b meta/fishriver: remove redundant features and options
  51a6d3f meta/emenlow: remove redundant features and options
  101dd7f meta/crownbay: remove redundant features and options
  4110ecd meta/sugarbay: remove redundant features and options
  0f1304a meta/jasperforest: remove redundant features and options
  0a56a3b meta/common-pc-64: factor out SCSI CDROM option
  b71938a meta/common-pc-64: use usb-mass-storage feature
  0724f40 meta: add scsi cdrom feature
  438bca8 meta/common-pc: use usb-mass-storage feature
  c970881 meta: factor out SCSI options from the usb-mass-storage feature
  4c8135e meta: add scsi disk feature
  6872a81 meta: add scsi feature
  e706ec5 meta/sugarbay: factor out policy-related options
  8b7fbc2 meta/jasperforest: factor out policy-related options
  fea1b0e meta/fishriver: factor out policy-related options
  13bf9ab meta/emenlow: factor out policy-related options
  4748d50 meta/crownbay: factor out policy-related options
  44f592f meta/common-pc-64: factor out policy-related options
  5a3f5c7 meta/common-pc: factor out policy-related options
  1f5a10b meta/common-pc-64: use usb features
  4b87723 meta/common-pc: use usb features
  594ba05 meta: add ROOT_HUB_TT config option to the usb/ehci-hcd feature

(From OE-Core rev: db35cd40c7abe13a9701eb74099d69d461cadb0a)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5ad28e97e6 linux-yocto: intel BSP config changes
Updating the meta SRCREV for the following fixes:

   1dfd60f meta/fishriver: move smp options from recipe-space
   012780a meta/emenlow: move smp options from recipe-space
   b59b1a5 meta/crownbay: move smp options from recipe-space
   74dc6ac meta/sugarbay: remove boot-live options
   a4bedcb meta/jasperforest: remove boot-live options
   4ae7b81 meta/sugarbay: use usb features
   30e7e8c meta/jasperforest: use usb features
   22d0c5d meta/fishriver: use usb features
   e262965 meta/emenlow: use usb features

(From OE-Core rev: bde50853658bab563a888b82278a6acfdce6305b)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
495ea21c8b linux-yocto/3.2: configuration and pch merge
Updating the 3.2 SRCREVs to import the following meta/config
changes:

   6b3d4e0 meta: add mei feature
   519abac meta: add usb/uhci-hcd feature
   a67c5a3 meta/crownbay: use usb features
   0855066 meta: add usb/ohci-hcd feature
   15f1a99 meta: add usb/ehci-hcd feature
   8fa6408 meta: add usb/xhci-hcd feature
   c724a55 meta: add usb/base feature
   b55b3a1 sys940x: Cleanup sys940x.scc
   93f2e97 sys940x: Use PHYSICAL_START of 0x200000 to boot
   aaa034b sys940x: Add common standard and preempt-rt features
   e2b1286 sys940x: Add efi-ext to standard and preempt-rt configs
   d188c21 sys940x: Move emgd-1.10 data to the standard scc file
   72d9369 fri2: Cleanup fri2-$KTYPE.scc files re efi-ext.scc
   dbcb120 fri2: Use emgd-1.10 feature and branch

And the following driver fix:

   f39a0a9 pch_gbe: Do not abort probe on bad MAC

(From OE-Core rev: 0609299880ad0aca121e7192d84f85d913c40c62)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
94e3e894d0 eglibc: added ac_cv_path_ to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds it
on the host machine there; this ensures that it is set correctly for the target.

[YOCTO #2363]

(From OE-Core rev: 0d957dd0604230bef1d01ee9992c56d2aca62ec1)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
8389decfe6 quilt: added ac_cv_path_BASH to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds it
on the host machine there; this ensures that it is set correctly for the target.
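
Concretely, the change amounts to seeding the autoconf cache with the target
path (the exact value below is an assumption):

    CACHED_CONFIGUREVARS += "ac_cv_path_BASH=/bin/bash"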

[YOCTO #2363]

(From OE-Core rev: d54ff1f79f05ba5bd0e1006545e7f1e699998668)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Nitin A Kamble
7412611252 eglibc: package mtrace separately
Add libc-mtrace as a dependency for task-core-tools-debug.

Now eglibc-mtrace gets included in an SDK image and not in a non-SDK image.

This does not affect builds with uclibc.

This fixes bug [YOCTO #2374].

(From OE-Core rev: 6f78625dbab5c81ef20b197aee5206f63611b673)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Gary Thomas
026d502b2a webkit-gtk: Apply work around for all PowerPC targets
The current patch for bug #1570 only applies to qemuppc but should be
applicable for all PowerPC targets.  Also update the patch so that
only one language backend, either ICU or PANGO, is built.

Also remove some old customizations (dependencies on darwin) as these
should now be handled in a layer specific .bbappend file.

(From OE-Core rev: 87eae0851e5334734df40a833596c6cbc6715f7f)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Richard Purdie
e81f7c6152 openjade-native: Ensure we reautoconf the package
Currently, since configure.in is in a subdirectory, we don't reautoconf the
recipe. We really need to do this, to update things like the libtool script used
and fix various issues such as those that could creep in if a reautoconf is
triggered for some reason. Since this source only calls AM_INIT_AUTOMAKE to gain the
PACKAGE and VERSION definitions and that macro now errors if Makefile.am doesn't
exist, we need to add these definitions manually.

These changes avoid failures like:

----
| ...
| DssslApp.cxx:117:36: error: 'PACKAGE' was not declared in this scope
| DssslApp.cxx:118:36: error: 'VERSION' was not declared in this scope
| make[2]: *** [DssslApp.lo] Error 1
----

(From OE-Core rev: 87753615435c8aec7df5964045e24f13877cd7cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Paul Eggleton
119e1b7dc9 poky.conf: use correct version string for Ubuntu 12.04
Since it is an LTS release, the final version string was not
"Ubuntu 12.04" but "Ubuntu 12.04 LTS", so use this when doing the tested
host distribution check.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:19 +01:00
Paul Eggleton
5b3a0eac61 hob: handle sanity check failures as a separate event
In order to show a friendlier error message that does not bury the
actual sanity error in our typical preamble about disabling sanity
checks, use a separate event to indicate that sanity checks failed.

This change is intended to work together with the related change to
sanity.bbclass in OE-Core.

Fixes [YOCTO #2336].

(Bitbake rev: 24b631acdaa143a4de39c6e1328849660c66f219)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:11 +01:00
Kang Kai
4c4924ad1b cooker.py: terminate the Parser processes
[Yocto 2142]

When HOB is forced to exit while it is parsing recipes, bitbake doesn't stop.
It hangs in function BitBakeServerConnection::terminate in file
server/process.py:
    else:
        self.procserver.join()
It is waiting for the child processes to quit.

In the recipe parsing stage, BBCooker spawns as many Parser processes as there
are cpus. When the Parser processes quit, they make their internal
Queue call cancel_join_thread() to avoid blocking, but that doesn't work at
this point.
So forcibly terminate the Parser processes.
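
A sketch of the shape of the fix (the multiprocessing calls are standard; the
surrounding CookerParser code and attribute names are assumptions):

    # force the Parser worker processes to exit, then reap them
    for process in self.processes:
        process.terminate()
    for process in self.processes:
        process.join()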

(Bitbake rev: bebef58b21bdff7a3ee1fa2449b7df19144f26fd)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:05 +01:00
Shane Wang
9e6d1101b4 Hob: Adjust the progress bar and set 100% only when all is done.
After parsing recipes, Hob will populate recipes and packages, which can take
quite some time. So, this patch adjusts the progress bar and ensures 100% is
set if and only if all populations are done.

The patch also fixes the "weird 18 second delay when parsing recipes" on build appliance:
Hob is still doing something, but the progress bar shows 100% and waits there.

[Yocto #2341]

(Bitbake rev: 2c4a21dc8a588c8cf05549ddd9734731a46bea10)

Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:55 +01:00
Scott Rifenbark
2bddf70a84 documentation/poky.ent: Updated variables for correct 1.2.1 build.
Key variables are DISTRO at "1.2.1", YOCTO_DOC_VERSION at "current",
and POKYVERSION at "7.0.1".  Note that I have to change "current"
to "1.2.1" before publishing any manuals prior to the official release
of 1.2.1.

(From yocto-docs rev: e62e0baec71c9d39473a9c67caf17f26346539d5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:23 +01:00
Scott Rifenbark
6698060d8e documentation/bsp-guide/bsp.xml: Review comments to recommendations
I added a small review comment to the section based on reviewer
feedback.

(From yocto-docs rev: 206d43c23efa114b57a1e75e469a6f5bdaf94715)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
bd3cd64da3 documentation/bsp-guide/bsp.xml: Updates to requirements section
Implemented review feedback from Dave Stewart and Tom Zanussi.

(From yocto-docs rev: 774e00d34d2abd466a6d64b4b91f60d87203add4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
c88f25ddb4 documentation/bsp-guide/bsp.xml: BSP recommendations section added
Added the "Requirements and Recommendations for Released BSPs"
section.  This section was requested by Dave Stewart based on
community input for direction on how to create a BSP that was
compliant with the Yocto Project.  The input for the section came
from Tom Zanussi.

A spell-check was performed also prior to this commit that addressed
a few spelling issues across the file.

(From yocto-docs rev: 6357eb7a26abb3dca14daf5d9b9a4e245dd0827b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Laurentiu Palcu
ef215694de freetype: upgrade to 2.4.9
(From OE-Core rev: 7c767d3723e0b55d3bcd3864a9cdbce6d11d5b35)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:50 +01:00
Paul Gortmaker
71a6fb605a gitignore: add wildcard to match toplevel patch files
To support the basic workflow of trivial patches:

 git format-patch HEAD~.. ; git send-email --to foo@bar.com 0001-foo.patch

We don't want git status reporting on patches lying in the top
level dir in this case.

Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
(From OE-Core rev: 7e32cbf30352e12c55c3c378631f4e238cf682c5)

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Joe Slater
bfc8589048 shared-mime-info: fix build race condition
The definition of install-data-hook in Makefile.am leads
to multiple, overlapping executions of the install-binPROGRAMS
target.  We modify the definition to avoid that.

(From OE-Core rev: d8a09cb17f2f3b43718ba354da7368a2ed793766)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Song.Li
6514e193ac groff: Fix build on Fedora 17
Distros generally keep perl at /usr/bin/perl,
but Fedora 17 also has /bin/perl.
This causes the groff_1.20.1 build to record the perl
interpreter path as /bin/perl,
while we set the perl location for the target as /usr/bin/perl.

This perl path mismatch causes rootfs image creation to fail
like this:

| error: Failed dependencies:
|       bin/perl is needed by groff-1.20.1-r1.ppc603e

(From OE-Core rev: 75824ff13f43b330b11cf9a130f061baee785e1a)

Signed-off-by: Song.Li <song.li@windriver.com>

Sync up with the do_install_append_virtclass-native chunk.

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Jesse Zhang
44fb9daa81 beecrypt: disable java
If java is installed on the host, beecrypt will attempt to use it.

(From OE-Core rev: 4d2ff0a69692f54313ffa9dc83d0e4a2ddba47c3)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Scott Garman
309b2c090e runqemu-ifup: enable arp proxying
This allows core-image-sato to access the WAN.

Thanks to Dexuan Cui for proposing this fix.

Fixes [YOCTO #2329]

(From OE-Core rev: 680a94c378f20c00e8bee0575b8922bccc008fec)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
67c7bc5e6c gnupg: disable CCID driver
The CCID driver is apparently unnecessary, so disable it.

Also remove the associated libusb dependency, since that won't be
needed either.

According to Scott Garman <scott.a.garman@intel.com>:

I'd just note that the CCID smartcard reader is a specific piece of
hardware that is unlikely to be used in a majority of our use cases.

(From OE-Core rev: 2fcd564b5395950f480a288d434c64c8fee65ece)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
e06e502bbb gnupg: add libusb to DEPENDS
gnupg apparently depends on libusb:

| error: Failed dependencies:
| 	 libusb-0.1-4 >= 0.1.3 is needed by gnupg-2.0.18-r1.core2

So add libusb to gnupg DEPENDS.

(From OE-Core rev: 1a76f50c1f159477a86dc7a6cb95873cee05d9e6)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Zhai Edwin
89c0e81273 webkit-gtk: Use glib as unicode backend to avoid browser crash
webkit-gtk depends on ICU for unicode, but ICU is not safe when the build and
target systems have different endianness. ICU's community has not been responsive
about producing a patch for this, so glib is used as a workaround here.

This fixes [YOCTO #1570].

(From OE-Core rev: df83a9480ba7b2fd2bcc0a92932d51434d7795a0)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Paul Eggleton
78a5471a29 classes/sanity: send sanity check failure as a separate event for Hob
In order to show a friendlier error message within Hob that does not
bury the actual sanity error in our typical preamble about disabling
sanity checks, use a separate event to indicate that sanity checks
failed.

This change is intended to work together with the related change to
BitBake; however, it has a check to ensure that it does not fail with
older versions that do not include that change.

Fixes [YOCTO #2336].
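
A minimal sketch of the pattern (illustrative only, not the actual
sanity.bbclass code; the SanityCheckFailed event name and the fallback are
assumed here for the example):

    import bb
    import bb.event

    def report_sanity_failure(message, d):
        # Fire the dedicated event if this BitBake provides it; otherwise
        # fall back to a plain fatal error for older BitBake versions.
        if hasattr(bb.event, 'SanityCheckFailed'):
            bb.event.fire(bb.event.SanityCheckFailed(message), d)
        else:
            bb.fatal(message)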

(From OE-Core rev: 3788f9bcb36cca90ca8cf650c9d33f5485e3087b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Joshua Lock
b30a243f3f sanity.bbclass: copy the data store and finalise before running checks
At the ConfigParsed event the datastore has yet to be finalised and thus
appends and overrides have not been set.
To ensure the sanity checks are run against the configuration values
the user has set, call finalize() on a copy of the datastore and pass that
to all sanity checks.
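
A minimal sketch of that approach (illustrative only; the check function is
passed in rather than naming the real sanity.bbclass entry point):

    import bb.data

    def run_checks_on_finalised_copy(d, check_sanity):
        # Work on a copy so the live datastore is untouched, and finalise it
        # so appends and overrides are applied before any checks run.
        data_copy = bb.data.createCopy(d)
        data_copy.finalize()
        check_sanity(data_copy)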

(From OE-Core rev: 527e26ea1e44f114fc9fcec1bc7d83156dba1a70)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Richard Purdie
20657c1fa0 image.bbclass: Ensure ${S} is cleaned at the start of rootfs generation
Some image classes such as bootimg save files into ${S} as part of rootfs
generation. For correctness we should therefore clean this at the start of
image generation to ensure reproducibility.

I found this issue when some files I thought should disappear from my rootfs
would not disappear.

(From OE-Core rev: 23b7d7dab475caca4558e3b20db534122bee1525)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Mihai Lindner
3029a08744 sudo: fixed wrong chmod path
Placed $D between braces (${D}) so it is correctly expanded to the
workdir path instead of a path relative to the host rootfs.
Without this, bitbake sudo fails on host systems where sudo is not
installed.

(From OE-Core rev: 83c5acfe4731990c296be1bf67059452a72f9584)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
d376a4e8f1 bitbake.conf: Improve wget timeouts
The wget default is a 900 second timeout and 20 retries. This is way too long
for most of our use cases, so this patch changes it to a 30 second timeout and
reduces retries from 5 to 2. We have good mirror infrastructure; this will
let us fall back to it more easily.

(From OE-Core rev: dbb88617576ea9bbeec08f5e5e15c26c4c18347f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Xiaofeng Yan
0d0846e06f ncurses: Avoid occasional build failure when tasks run in parallel
ncurses fails the non-gplv3 build (a race issue) with error output like the
following:

| tic: error while loading shared libraries: /srv/home/pokybuild \
/yocto-autobuilder/yocto-slave/nightly-non-gpl3/build/build/tmp/\
work/x86_64-linux/ncurses-native-5.9-r8.1/ncurses-5.9/narrowc/lib\
/libtinfo.so.5: file too short
| ? tic could not build /srv/home/pokybuild/yocto-autobuilder/\
yocto-slave/nightly-non-gpl3/build/build/tmp/work/x86_64-linux/\
ncurses-native-5.9-r8.1/image/srv/home/pokybuild/yocto-autobuilder\
/yocto-slave/nightly-non-gpl3/build/build/tmp/sysroots/x86_64-linux\
/usr/share/terminfo
| make[1]: *** [install.data] Error 1

This is a race between
install.libs and install.data:

1) install.data needs to run tic
2) tic needs libtinfo.so
3) install.libs regenerates libtinfo.so
4) but install.data doesn't depend on install.libs, so they can run
   in parallel

So errors occur under one critical condition: tic starts running at the same
time as install.libs is generating libtinfo.so, that libtinfo.so is not yet
complete, and the above error results.

Make task install.libs run before install.data to fix this bug.

[YOCTO #2298]

(From OE-Core rev: 6993570787a97fbca5ea81513b0120c6d7563484)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Robert Yang
59ac33c77f rpm 5.4.0: respect the arch order when choosing alternatives
There is a bug if we:
1) bitbake diffutils with MACHINE=crownbay
2) bitbake diffutils with MACHINE=qemux86
3) bitbake core-image-sato with MACHINE=crownbay

Then diffutils.i586 would be installed into the crownbay image. This
is because diffutils.i586 is newer than diffutils.core2, and rpm doesn't
respect the arch priorities:

We have put the archs in order in _solve_dbpath:

crownbay/solvedb:core2/solvedb:i586/solvedb:all/solvedb

Fix rpm to respect that order (see the sketch below): for example, if it finds
a pkg in both core2/ and i586/, and core2/ comes first, it should not use the
one in i586/ even if its build time is newer.

Note: Don't worry about the _free(*ptr); it checks whether ptr is
NULL or not.

This is for the denzil branch, and the master branch also needs it.

[YOCTO #2360]
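
The selection rule sketched in Python (this only illustrates the rule; the
actual change is in rpm's C code):

    def pick_solvedb_match(candidates, arch_order):
        # candidates maps arch -> package found in that arch's solvedb.
        # Return the match from the earliest arch in the configured order,
        # ignoring build times entirely.
        for arch in arch_order:
            if arch in candidates:
                return candidates[arch]
        return None

So with arch_order = ['crownbay', 'core2', 'i586', 'all'], a core2 match always
wins over a newer i586 build.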

(From OE-Core rev: 2199e6b9c82bb2b6738e87903f30329586db20e2)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
3cb36a5ed9 Update version to 1.15.2 (corresponding to the Yocto 1.2 release)
(Bitbake rev: 270a05b0b4ba0959fe0624d2a4885d7b70426da5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:10:07 +01:00
Liming An
75e32007ef Hob: show the original url in the hyperlink tooltip for the user
In cases where no browser is available, such as when running in 'Build Appliance',
the user can't open the hyperlink, so add this workaround. (Checking whether a browser
is available is hard across different systems and browser types.)

[YOCTO #2340]

(Bitbake rev: 02cc701869bceb2d0e11fe3cf51fb0582cda01b0)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:59 +01:00
Liming An
d52e74cee9 Hob: slow down the refresh icon so it displays clearly
The arrow icon refreshes so fast that it gives the illusion of going backwards, so slow it down.

[YOCTO #2335]

(Bitbake rev: ac4a8885fafdc0d1e79831334ead9a8ddb6e2472)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:52 +01:00
Dongxiao Xu
f1630d3cd4 Hob: Clear the building status if command failed
We may hit a command failure during the build, for example running
out of memory. In this case, we need to clear the "building" status.

This fixes [YOCTO #2371]

(Bitbake rev: 283dbbbf5d34adb4c9e3aa87e3925fdebe21ff42)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:42 +01:00
Tom Zanussi
b7f1a8f870 yocto-bsp: clarify help with reference to meta-intel
The current yocto-bsp help assumes knowledge that the meta-intel layer
needs to be cloned before it's put into the BBLAYERS.  Avoid the
guesswork and state the details explicitly in the help.

Also, the shorter 'usage' string doesn't mention it at all; it would
help to at minimum mention it and refer the user to the detailed help.

Fixes [YOCTO #2330].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:31 +01:00
Tom Zanussi
015f117d85 yocto-kernel: use BUILDDIR to find bblayers.conf
The current code assumes that builddir == srcdir/build, which is
not always the case.  Use BUILDDIR to get the actual builddir
being used.

Fixes [YOCTO #2219].
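
A minimal sketch of the intended lookup (illustrative, not the actual
yocto-kernel code):

    import os

    def find_bblayers_conf():
        # BUILDDIR is exported by the oe-init-build-env setup script and
        # points at the real build directory, wherever it lives.
        builddir = os.environ.get('BUILDDIR')
        if not builddir:
            return None
        return os.path.join(builddir, 'conf', 'bblayers.conf')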

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:24 +01:00
Richard Purdie
b8338046ba netbase: Correctly set FILESEXTRAPATHS to include the version
[YOCTO #2366]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:10 +01:00
Scott Rifenbark
7552ccd06c documentation/yocto-project-qs/yocto-project-qs.xml: pre-built example fix
The example showing how to use pre-built images, the toolchain, and
filesystem was off a bit.  I changed some wording to indicate using the
.ext3 filetype of the filesystem.  Previously it talked about expanding
the tarball version but the example has been changed to use .ext3.
Also, the environment setup file has been mis-named forever.  It should
have i586 in it and not i686.  And, finally, the image name does not
have a release number as part of the name.

(From yocto-docs rev: 97ed79993dd3e2eede4807482e15633b66b99f49)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:29 +01:00
Scott Rifenbark
e24d5cc2cd documentation/dev-manual/dev-manual-bsp-appendix.xml: .bbappend example
The linux-yocto_3.2.bbappend example was out of date.  There is no
longer a kernel features statement in the last part of the section.
Only COMPATIBLE_MACHINE, KMACHINE, and KBRANCH remain.  I removed
the fourth one from the text description and the example code.

(From yocto-docs rev: 89a11ce3c2a43e2d7c26599976d906011130131f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
1f2fc974df documentation/dev-manual/dev-manual-bsp-appendix.xml: Added note
Added a note telling the user that the commit ID strings in the
example might not match the actual commit ID strings found in
the .bbappend file.

(From yocto-docs rev: 0477122c42eaf6d5e18e28a2356fe58c1070c608)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a473ba170d documentation/yocto-project-qs/yocto-project-qs.xml: Minor wording changes
(From yocto-docs rev: 528e34b1694739396295b769cc6f83d58dd3bf59)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a5fe09c6aa documentation/dev-manual/dev-manual-kernel-appendix.xml: Kernel example fixed
Due to a bug (2256) the example that changes the kernel configuration
through menuconfig did not work.  I have re-written the section
to now start with the default behavior of CONFIG_SMP=y and then
have the user change the configuration to where it is not set.

The changes include the reversing of the flow and the work-around
needed due to bug 2256.

(From yocto-docs rev: 2eaaafab0390d1108b212b9cfb7ca8365e0f39a9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
2072256b05 documentation: Added 1.2.1 manual entry information.
Added 1.2.1 manual history entry to five manuals.  The date
is to be determined.

(From yocto-docs rev: bb920814d5adaa24d37fbcefd85de2ba93ddf604)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Paul Eggleton
b623203ac9 documentation/yocto-project-qs/yocto-project-qs.xml: Setup changes
Remove mercurial as this is no longer needed.
linuxdoc-tools was mentioned twice in the CentOS list.
We no longer support Fedora versions older than 15 so remove this note.

This commit applies to 1.2, 1.2.1, and 1.3.

(From yocto-docs rev: 1347f92c49e61a42aa51e5c1ffccde88a449a4fb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Scott Rifenbark
c7e4a6ae2c documentation/Makefile: Fixed figures publishing bug
I discovered a bug when publishing documents.  There are two scp
commands that copy a document's files and figures to the appropriate
directory in the srifenbark@yocto-www:~/www.yoctoproject.or-docs
server where the manuals are published.  The second scp command
had a "/figures" at the end.  This was causing a new "figures"
directory to be created within the "figures" directory.  This
redundancy shows up as missing figures in the manuals if a new figure
or changed figure is ever added to the book after initial
publishing.  I removed the extra "/figures" at the end of the scp
command.

(From yocto-docs rev: 5ab530f998427405a0486b94ca76cff58a4cf463)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Andrei Gherzan
1628159028 fotowall: Add #include ui_wizard.h to ExportWizard.cpp
App/ExportWizard.cpp depends on wizard.h, which depends on ui_wizard. The latter
must already be generated before ExportWizard.cpp is compiled.

[YOCTO #2297]

(From OE-Core rev: d7bf94647f17c0382caad8af0bdda837b14b22dc)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
b477f676e3 classes/mirrors.bbclass: Point snapshot.debian.org mirror to working location
If you point to snapshot.debian.net/archive/pool then it will fetch
an html page, which ends up as a corrupt download. The archive
locations have changed, so here we point the mirror to the right
location.

(From OE-Core rev: d5574749b2272357f6bdad04c37ec0657b391cca)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
9d2534ab24 uclibc.inc: uclibc rtld does support GNU_HASH
(From OE-Core rev: 090d8a687517c2d4deb33295a3cceb5175aa28f3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Khem Raj
df815f20c8 openssl: Fix build for mips64(el)
(From OE-Core rev: 8c74ddf5fd5502fd759f310096e9013fad0ca4db)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Zhai Edwin
b64eefe2bb pango: Fix modules load failure in multilib environment
Multilib builds of Pango need different modules, and thus different config files
and utils. This patch separates the config file and utils with different MLPREFIX
values to avoid conflicts.

This fixes [YOCTO #2356].

(From OE-Core rev: 535e298b98182d95c3280d2d46aa6388e27aac40)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Richard Purdie
4abd299bf0 sed: Explicitly disable acl for deterministic builds
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Dongxiao Xu
30c3c8420e qt4: move functions from python to shell style
qt4's do_configure refers to some variables derived from 'd'; however,
these values may not be correct in the multilib case, since the
extraction of these variables happens before the multilib handler
runs.

The fix is to move these python style functions back to shell style.

This fixes [YOCTO #2355]

[RP: Fix whitepace]
(From OE-Core rev: 98cb2efe4e9f3092d531c9fc809406c3ef559725)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
[SG: Resolve merge conflicts for 1.2.1]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Darren Hart
a74fb01b6b initrdscripts: Update install.sh to work with mmc devices
Fixes [YOCTO #2385]

The installer only searches for hd[ab] sd[ab]. Some newer BSPs have mmcblk
devices that should be used as the install target. These devices also have a
partition prefix (mmcblk0p1 instead of mmcblk01). As they are detected
asynchronously, it is necessary to add the rootwait kernel parameter to avoid
a race condition trying to mount the root device.

As BSPs like the FRI2 and the sys940x have mmc devices and will have a 1.2
release, we should push this to 1.2.1. The changes are perfectly contained and
easily verified.

Test for an mmcblk device and add the p partition prefix if necessary. Add the
rootwait kernel parameter when an mmcblk device is detected.  Replace the series
of explicit umount commands with a single umount using a wildcard. This will
find all the partitions and will not try to unmount non-existent devices. Avoid
copy and paste errors by replacing /dev/${device}${pX} references with the
previously assigned rootfs, bootfs, and swap variables.

These changes have been tested on the FRI2 Sato image which installed to
/dev/mmcblk0 as well as the N450 Sato image which installed to /dev/sda. Both
were successful.
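
The partition-naming rule described above, sketched in Python (the installer
itself is a shell script; this snippet is only an illustration, not the actual
install.sh code):

    def partition_node(device, number):
        # mmcblk devices need a "p" between the device name and the
        # partition number, e.g. mmcblk0p1 rather than mmcblk01.
        prefix = "p" if device.startswith("mmcblk") else ""
        return "/dev/%s%s%d" % (device, prefix, number)

    # partition_node("mmcblk0", 1) -> "/dev/mmcblk0p1"
    # partition_node("sda", 1)     -> "/dev/sda1"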

(From OE-Core rev: 36634e16c0a0c80674bacf20f9841e3b042bd5fd)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Paul Eggleton
c003c04590 buildhistory: fix multiple commit of images and packages at the same time
The echo line here was merging multiple lines into one, and the result
was that if both image and package changes had to be comitted then only
the image changes were being committed and the package changes could
potentially be merged into the next package change. Quoting the variable
reference fixes this.

Fixes [YOCTO #2411]

(From OE-Core rev: 540cd9d42a4db562e5eca431cec89ac5a6a05cab)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Saul Wold
35cc0b023f builder: Add Please Wait Dialog Box
Add a dialog box while bitbake starts Hob to inform the user
to please wait for the Hob screen to become visible.

(From OE-Core rev: e9239e4250ef140920847f78625cc7206763c32c)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Nitin A Kamble
c2826b50ce quilt: fix perl path in target perl scripts
While building on distros like Fedora 17, which have /bin/perl,
the target perl scripts also get /bin/perl as the perl path.
That is not the correct path to perl on the target.

This commit avoids this error:

| error: Failed dependencies:
|       /bin/perl is needed by quilt-0.51-r2.i586
NOTE: package core-image-sato-sdk-1.0-r0: task do_rootfs: Failed
ERROR: Task 8
(/home/nitin/prj/poky.git/meta/recipes-sato/images/core-image-sato-sdk.bb,
do_rootfs) failed with exit code '1'

(From OE-Core rev: c8c394bd806978c867f2fe82e4bde65c98764880)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Saul Wold
75225bcc84 boost: Ensure we use our user-config.jam file
This change ensures we use the user-config.jam configuration
that we created and do not pick up anything from the user's home
directory.

[YOCTO #2302]

(From OE-Core rev: f246e467b8513f1c1c33b5e7462ae6478754d531)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Mark Norman
d3204ddc12 uclibc SDK not including libpthread_nonshared.a
Modified the uclibc PACKAGES list order to ensure the uclibc-dev package is
processed before uclibc-staticdev to allow *_nonshared.a libraries to be
packaged in the uclibc-dev package.  The *_nonshared.a libraries are required
by the SDK.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
7c7ac8548d runqemu-ifup: enable ip masquerading for QEMU NAT addresses
Fix the IP masquerading settings so that networked QEMU sessions can
reach external networks.

This is a partial fix for [YOCTO #2329].

(From OE-Core rev: 78c7a82a2e3214eaec3c559269e3cc6c219759c0)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
bbf95cae4c openssl: upgrade to 1.0.0i
Addresses CVE-2012-2110

Fixes bug [YOCTO #2368]

(From OE-Core rev: 51a122a5593c62d7ffd07f860e54a2fb0327959c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Scott Garman
3df821277d libpng: upgrade to 1.2.49
License hasn't changed, just updated the md5 checksums due to trivial
date changes within the text (and the position of the license text
within png.h).

Addresses CVE-2011-3045

Fixes [YOCTO #2352]

(From OE-Core rev: 6e2235a4d769b16ebf68d6bbed56d8bcc0e0c83f)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
4f4685469a python: Add patch to search for db.h in inc_dirs and remove warning
python should search for db.h in inc_dirs and not in a hardcoded path.
If db.h is found but HASHVERSION is not 2, we avoid a warning by not
adding this module to the missing variable.

[YOCTO #1937]
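
The search can be pictured as follows (illustrative only, not the actual
setup.py patch):

    import os

    def find_db_header(inc_dirs):
        # Look for db.h in the configured include directories rather than
        # in a hardcoded /usr/include path.
        for directory in inc_dirs:
            candidate = os.path.join(directory, 'db.h')
            if os.path.exists(candidate):
                return candidate
        return None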

(From OE-Core rev: 8eb3e52d39147f8cb98ec95857be17db0444098e)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
d1bc1191d6 python: Add patch for 64bit platform
This patch was added for 64-bit host machines. During the compile process,
python checks whether the platform is 64-bit using sys.maxint, which is the
host's value. The patch fixes this so that python checks whether the TARGET
machine is 64-bit, not the HOST machine. This way the "dl" and "imageop"
modules are built when the HOST machine is 64-bit but the target machine is 32-bit.

[YOCTO #1937]
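
The core of the problem in one line of Python 2 (evaluated by the interpreter
doing the build, i.e. on the host):

    import sys

    # sys.maxint describes the HOST interpreter, so this says nothing about
    # the TARGET word size when cross-compiling.
    host_is_64bit = sys.maxint > 2**32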

(From OE-Core rev: 22ae3959f40845ebcc00413ccf733539472a1a81)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andreas Oberritter
5c507a2fd7 {kernel, module}.bbclass: don't run depmod for module packages during do_rootfs
* depmod already gets executed by pkg_postinst_kernel-image.

* If you build a module using module.bbclass, pkg_postinst returns 1 in
  do_rootfs, causing pkg_postinst to run again on first boot. To improve
  this situation, I copied pkg_postinst from kernel.bbclass to module.bbclass.
  This was rejected by Koen, because he doesn't like the code from
kernel.bbclass, which uses ${STAGING_DIR_KERNEL}. Richard then suggested
  that calling depmod during do_rootfs wasn't necessary at all, because
  it already gets done by kernel-image.

(From OE-Core rev: 663b4be025283a30adb823760ce9d9a056106bcf)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Richard Purdie
bf4740cf66 utils.bbclass: Testing via env in create_wrapper is a nice idea but breaks things
For example, pseudo-native wants to set LD_LIBRARY_PATH, but setting this
into the environment here causes the existing pseudo (running during do_install)
to poke into paths in /opt, and this breaks builds.

The simplest fix is simply not to do this. Comment tweaks to match the code.

(From OE-Core rev: 915769c405e24751eae613e9ef55f05490a726de)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Nitin A Kamble
24ffb5c0b1 libproxy: fix compilation with gcc 4.7
(From OE-Core rev: 6689c52eb13430593d6afe48dba3973467cd2404)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Paul Eggleton
a92fed4fe5 dpkg-native: fix deb-based rootfs construction failure on Fedora 16
Backport a fix from 1.16.x upstream to use fd instead of stream-based
I/O in dpkg-deb, which avoids the use of fflush() on an input stream
(the behaviour of which is undefined by POSIX, and appears to have
changed in the version of glibc introduced in Fedora 16 and presumably
other systems).

Fixes [YOCTO #1858].

(From OE-Core rev: b1c28667592e736115ab5e603a12c2723b939cf2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Saul Wold
a518e1e3b1 quilt: move empty quiltrc to native sysconfdir
patch.bbclass originally pointed at /usr/bin/quiltrc for an empty
version to ensure that no user settings were picked up; change this
to /etc/quiltrc in the native sysroot since we now have a native
sysconfdir.

Make sure that the quiltrc is actually installed in the native
sysconfdir, not the target, so fix this after the recipe split.

(From OE-Core rev: aec4cdc6efda430a0965d6b3b4f84c7943390273)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
fc9716930a gcc: Add plugins package for ARM, fix /usr/include packaging
WARNING: For recipe gcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/include
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm/bpabi-lib.h
(From OE-Core rev: cf49cf3958b24fdb89d57abbf1f1b30c07a06030)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
f99ced96cf xserver-kdrive: Add xkb to existing docs list
WARNING: For recipe xserver-kdrive, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/share/man
WARNING:   /usr/share/man/man5
WARNING:   /usr/share/man/man1
WARNING:   /usr/share/man/man1/Xephyr.1
WARNING:   /usr/share/man/man1/Xserver.1
(From OE-Core rev: 515cbe565b684359ac9d8bd0fb523aa3d2f810e2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
d0f0d1b41d libgcc: Package additional *crt*.o files for PPC
WARNING: For recipe libgcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrtn.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrtn.o
(From OE-Core rev: 580d734ddc928aaaac9acaa248427b01731074f2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
77203b75f5 binutils: add embedspu for ppc builds
WARNING: For recipe binutils, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/bin/embedspu
(From OE-Core rev: 15c8ea4d35edbcaf03c94aba06ded85851679157)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Ken Werner
a0f1aca7a0 bdwgc: Set ARM_INSTRUCTION_SET to "arm"
The bdwgc recipe uses a version of libatomic that fails when building in Thumb
mode. This has been fixed upstream already. The
pulseaudio/libatomics-ops_1.2.bb has the same issue and sets the
ARM_INSTRUCTION_SET to "arm" (probably until a new version gets pulled in).
This patch applies the same workaround to the bdwgc/bdwgc_20110107.bb recipe.

(From OE-Core rev: 544fe63b6a861129ea15f4cd37952e513ab0013e)

Signed-off-by: Ken Werner <ken.werner@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Darren Hart
b3de1f1140 gthumb: Disable parallel make for gthumb install
With PARALLEL_MAKE set to 14, I frequently see the gthumb do_install
task hang. Make is spinning at 100% CPU and the build makes no
more progress.

The following work-around proposed by Richard Purdie allows progress
to be made.

(From OE-Core rev: e933129ddb8ae55d618b5875fca26bc46fcb100b)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Joshua Lock <josh@linux.intel.com>
CC: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Andreas Oberritter
6e93ac2581 python: use PKGSUFFIX for libpython2
* python-nativesdk shouldn't provide libpython2, but
  libpython2-nativesdk.

(From OE-Core rev: 260dfd9ccbf7d1e0ed60256aaf80fed5bf0c24e2)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>

[PR Bump - sgw]

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Otavio Salvador
c1bfbf7168 connman: backport test script fixes
Those fixes are required to get the test scripts to work with current
0.79 DBus API.

(From OE-Core rev: aadeb3199d1b34369b63810696b9d61a86afb31d)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Khem Raj
5a7d852a94 connman: Fix linking with gold linker
Fixes errors like below

/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__start___debug' is not defined locally
/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__stop___debug' is not defined locally
collect2: ld returned 1 exit status
make[1]: *** [plugins/loopback.la] Error 1

(From OE-Core rev: 3e6e97b40f8cb9568993c5cc65d73189ec6b7b8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Scott Rifenbark
3bf8069100 documentation/yocto-project-qs/yocto-project-qs.xml: added quotes
Need quotes around the INHERIT statement.

Reported-by: Frans Meulenbroeks <fransmeulenbroeks@gmail.com>
(From yocto-docs rev: b040ab0cf8e776c04fc787ba09722327b60913f2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
cbd192a6c5 documentation/dev-manual/dev-manual-kernel-appendix.xml: grammar fix
(From yocto-docs rev: 8f62155b56f82c705f05585d2ab68d4a4af5a501)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
6d22ae627b documentation/dev-manual/dev-manual-kernel-appendix.xml: Removed KMACHINE
The example that compiles the altered code no longer runs when the
KMACHINE statement is in the linux-yocto_3.2.bbappend file.  I have
commented it out of the book.

(From yocto-docs rev: 112a10a32ba3d7b24f22e25e39202b717571cbf0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
49a58c65b6 documentation/dev-manual/dev-manual-kernel-appendix.xml: updated KSRC
The KSRC example needs "_3_2" at the end of the variable.

(From yocto-docs rev: 99bf77dd648b28c2d425d23215383b7c733b054d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
06cde35657 documentation/dev-manual/dev-manual-kernel-appendix.xml: added quotes
Turns out the KSRC_linux_yocto ?= /home/scottrif/linux-yocto-3.2.git
statement in the linux-yocto_3.2.bbappend file in poky-extras needs
quote characters around the pathname.  I updated the example
statement.

(From yocto-docs rev: bcdb8d230f20bf69567380d562c991ff6eeb41cd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
196a62b50c documentation/dev-manual/dev-manual-kernel-appendix.xml: kernel machine update
Found two instances of "yocto/standard/common-pc/base".  This should
now be "standard/default/common-pc/base".

(From yocto-docs rev: d710bc868409ad21bdf9e63c042ec40b0d305ad0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
8ddaa3ede8 documentation/dev-manual/dev-manual-kernel-appendix.xml: 3.0 to 3.2
The kernel used for the example is now linux-yocto_3.2.  I changed
all occurrences of yocto_3.0 over to yocto_3.2.

(From yocto-docs rev: c3585a0fec0381a88071004660ab96016f9674e2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
52ccf5a9eb documentation/dev-manual/dev-manual-kernel-appendix.xml: formatting fix
(From yocto-docs rev: 1d1a5059163749b5adecf9432ffc5e2f2207acf4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
b2c9b25f97 documentation/dev-manual/dev-manual-kernel-appendix.xml: altered example
The example code with the printk statements needed to be altered,
and the wording supporting the example was modified to be more
generic.

(From yocto-docs rev: 4d03fe2e08dbdcab438aae551e9696e11a3e4477)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
5a1fb95a8d documentation/dev-manual/dev-manual-kernel-appendix.xml: updated cpuinit
Looks like calibrate_delay(void) changed in the example.  Updated
to the most recent code.

(From yocto-docs rev: 402af7d379b0df5e97b1863aa627aad98ceb5e6f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
22b9983cc7 documentation/dev-manual/dev-manual-kernel-appendix.xml: output updated
Updated the example that sets up the bare clone.  The shell output changed
because the upstream repo changed to
"origin/standard/default/common-pc/base".

(From yocto-docs rev: 72077ca9e7db747cbccc4d9d8deabfa424c6147c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
7f5e6a1959 documentation/Makefile: Added denzil specific .PNG file logic
A new figure was needed in the Kernel appendix.  I added that
figure to the block of code that creates the tarball for the
denzil branch.

(From yocto-docs rev: cb3997cad15f7bf6f812f939a3fe4dcac955376d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
01025ad2c4 documentation/dev-manual: New figure just for denzil
New image needed for Denzil.  I created a new file named
"kernel-example-repos-denzil.png" and copied it to the
Figures folder.  I also deleted the
"kernel-example-repos.png" image.

(From yocto-docs rev: 150b4c01cce283ae8de29f51a2e4e7dcb60281ca)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
71580376c9 documentation/dev-manual/dev-manual-bsp-appendix.xml: updated hddimg example
In the "Building and Booting the Image" section there is an example
.hddimg file.  I updated the file to be the actual file used during
the BSP example build.

(From yocto-docs rev: ce759fb3d4e5e22f0928cdd03c17c0b5d9f4167b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
0774a11505 documentation/dev-manual/dev-manual-bsp-appendix.xml: BBLAYER update
For 1.2 you evidently need to add $HOME/poky/meta-intel to your
bblayers.conf file.

(From yocto-docs rev: 05bb85dd133d8da0697cd4414b05dde2a636b737)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
4d2d5abd8b documentation/dev-manual/dev-manual-bsp-appendix.xml: wording updated
Wording for the linux-yocto_3.2.bbappend file updated to support the
addition of the KBRANCH variable.

(From yocto-docs rev: 6bd32650f1004055ac67157f96ab62abf5883047)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
884034b256 documentation/dev-manual/dev-manual-bsp-appendix.xml: mymachine.conf
Edited it to now include the PREFERRED_VERSION variable.

(From yocto-docs rev: 6ddd56cbcec752e27b2bbf0fc687af79b2249377)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
ee98021efe documentation/dev-manual/dev-manual-bsp-appendix.xml: recipes-kernel update
The section on changing recipes-kernel was way out of date.
I updated all the relevant changes.

(From yocto-docs rev: b9f954983447e45766a0bf785285c0591fe9d340)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
f52747d7a2 documentation/dev-manual/dev-manual-bsp-appendix.xml: 3.2 for 3.0
The kernel used is now 3.2 and not 3.0.

(From yocto-docs rev: 8ee757e0d4f97f7652de2c9ee1556c142920596a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
1bf998fe41 documentation/dev-manual/dev-manual-bsp-appendix.xml: added layerdepends
The layer.conf file now uses a LAYERDEPENDS variable.  I added that
to the example.

(From yocto-docs rev: 09f4d9e74ceccb3053a36d2a3deed5cc3d3be157)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
1b3c00a34f documentation/dev-manual/dev-manual-bsp-appendix.xml: changed kernel
The kernel in mymachine.conf had to be changed from 3.0 to 3.2

(From yocto-docs rev: 8a385bfa11298251fd80445d6fd2da6034d6b9dc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
b3f870297e documentation/dev-manual/dev-manual-bsp-appendix.xml: Output updated
The output for creating and switching to the denzil branch
for meta-intel needed to be updated.

(From yocto-docs rev: 54602beb1aa56521c7f5812803724ff53bf11bf1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
93deb57c91 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
The variable substitution had to be changed from
"&DISTRO_NAME;-6.0.0.tar.bz2" to "&DISTRO_NAME;-&POKYVERSION;.tar.bz2".

(From yocto-docs rev: 8ed6cb5e2b56dee3fa8d127b449183ae141a9153)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
2883b754a1 documentation/dev-manual/dev-manual-bsp-appendix.xml: Added link
Created a link to the Yocto Project Files term.

(From yocto-docs rev: 32d7d7008ebcb0b25f77b855025c7059526b9694)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
86325bbc5d documentation/dev-manual/dev-manual-bsp-appendix.xml: typo corrected.
(From yocto-docs rev: 73eba4180162fcd6570ae90c6cac1b16088d4a01)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
746d718f53 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
Had to remove "poky-" from the front of this variable that resolves
to a YP Files top-level name from the tarball.

(From yocto-docs rev: d01d5bd6c4d1fd754d4fccc087d557058d6a5733)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
c498338197 documentation/dev-manual/dev-manual-model.xml: Fixed a bad link title.
(From yocto-docs rev: 0b59afe539b2adc3459c1e22404136d81250d292)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
ba554bd865 documentation/dev-manual/dev-manual-model.xml: Better wording.
(From yocto-docs rev: bb3fa5eeed2784b415d009ae07c39149adc1a147)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
0dda5d88a5 documentation/dev-manual/dev-manual-model.xml: Added BitBake
Throughout the documentation set I have referred to the YP build
system generically in order to avoid use of the "Poky" term.  Richard
has suggested that we refer to the actual thing that does the building.
So I have added BitBake to this particular sentence to refer to the
tool.

(From yocto-docs rev: 4d52fc9c8d1e1cbfca99590fcaa09392f5d235bf)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
06f44161f1 documentation/dev-manual/dev-manual-model.xml: Fixed poor references.
(From yocto-docs rev: 91885c11cc33a10b3d65006304bf5a6ca748f13f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
4b4a018466 documentation/dev-manual/figures/kernel-overview-3.png: Removed file
This file was replaced by a release-specific file named
"kernel-overview-3-denzil.png".

(From yocto-docs rev: e9604111299d3699105225302c43a25e7b2730b1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
caf6532b51 documentation: release-specific figure needed for denzil in dev-manual
dev-manual/dev-manual-model.xml:  The Bare Clone and Copy of the Bare
Clone figures are out of date for denzil.  These needed to be
re-done so they use "linux-yocto-3.2.git" and "my-linux-yocto-3.0-work"
as the root names.  This presents a Makefile issue when making the
denzil and pre-denzil versions of the manuals.  Whenever you use a
different figure for a different release, you need to involve the
BRANCH variable in the Makefile.  This is necessary because you are
using different figures in the generated tarballs.  The set of figures
could be unique to the release.  The outdated figure is
"kernel-overview-3.png" and will eventually be removed (later commit).
I created a new figure named "kernel-overview-3-denzil.png" and used
that in the dev-manual-model.xml file.

documentation/Makefile:  I updated the Makefile to test for a
"denzil" release build and if so include the new file in the
generated tarball.  This commit adds the new .PNG file as well.
Fixed the Makefile so that if you don't supply a BRANCH value, it
uses the latest figures (denzil).

(From yocto-docs rev: 49552b12a967f97eb4d75477895bf32f61d69aa6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
a81cb954bb documentation/dev-manual/dev-manual-model.xml: Wording change
Changed the Note wording to work with the list and not be specific
to a number of supported kernels.

(From yocto-docs rev: a6ffe0834c0ed76ec09315f34c65888c20eed958)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
0089bb9ad0 documentation/dev-manual/dev-manual-model.xml: Updated wording for list
The list specifically named four kernels supported.  I changed it so
it would say "several kernels".

(From yocto-docs rev: b6c34f86c1f3724c1416b8fb7770e1c33587e065)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
ce8d4157db documentation/dev-manual/dev-manual-model.xml: BSP Layer step updated
Several things were out of date for step five of the BSP Creation overview.

(From yocto-docs rev: ec06bd4f7bb1764e4a37328a51923d7b707d19e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
66b18cb5cd documentation/dev-manual/dev-manual-model.xml: Fixed link
The link and wording for the YP Downloads page on the website were
wrong.  Fixed them up.

(From yocto-docs rev: 5baf847c9b5b8af07c8945921352d3aba2a9cfa8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bbf33914ea documentation/dev-manual/dev-manual-common-tasks.xml: Font corrected.
(From yocto-docs rev: 0fab3eecf7f67ae890ff4fc2f6c12fed4aa4d897)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
e1e12bfd0c documentation/dev-manual/dev-manual-common-tasks.xml: Section name fixed.
(From yocto-docs rev: 6c5724d8c0e75efc22dd2f4477a797afeaed5347)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bc7f18c61d documentation/dev-manual/dev-manual-common-tasks.xml: fixed path
Added more detail to the pathname for the example
formfactor_0.0.bbappend file.

(From yocto-docs rev: 32e60999494bb5b69d683008ad804613e4b99d07)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
89e2958475 documentation/dev-manual/dev-manual-common-tasks.xml: link and output fixed
Fixed a reference to Yocto Project Files and provided a link.

Put in an updated version of the meta/recipes-bsp/formfactor/
formfactor_0.0.bb file in the example.

(From yocto-docs rev: 05001174d2337a91e839e991a3e9ecd6657a56f4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
943c6917e6 documentation/dev-manual/dev-manual-start.xml: shell output examples updated
Updated various shell output examples created from cloning various
Git repositories, etc.

(From yocto-docs rev: ed167b1643a60ab30c09c2f42baebf781564ca20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
2b6e86beae documentation/dev-manual/dev-manual-start.xml: updated output for bare clone
Updated the shell output example when user creates a bare clone
of kernel.  We use linux-yocto-3.2 here.

(From yocto-docs rev: e24beac8c8b6c65f94b71f36bf9f5d918ee4375e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
42a9a50771 documentation/dev-manual/dev-manual-newbie.xml: Tag example fixed
The example that creates a local branch based on a release tag
in the "Repositories, Tags, and Branches" section was not optimal.
Darren Hart informed me that naming a local branch the same name
as a tag confuses Git.  Plus, the "-b" option was misplaced.
Renamed the local branch to have "my-" in front of it and moved
the "-b" option earlier in the command.

(From yocto-docs rev: 24ab16d18fb317efb86d2c4ddb2ac1a1449df519)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
74c34c9d3c documentation/dev-manual/dev-manual-newbie.xml: Fixed branch example
The example in the "Repositories, Tags, and Branches" section that
creates a local branch that tracks the upstream branch is incorrect.
The syntax should be "git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;".

Fixed it.

(From yocto-docs rev: 7b47dd460f240a0d7f07edf2767bcad1ddc9d4c3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
e9b8cf485c documentation/dev-manual/dev-manual-newbie.xml: Link added for TOPDIR
(From yocto-docs rev: e02c1762fadd22f6ffc06e91ac82ebb59a7a7f68)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
2863d953bd documentation/dev-manual/dev-manual-intro.xml: Hob and BA added
Added Hob and Build Appliance to the list of other stuff the user
might want to reference.

(From yocto-docs rev: 74ca0a95f0ea1b2045a42f0895ba874bdfa2d46c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
716bdd4bf5 documentation/dev-manual/dev-manual-start.xml: Misc fixes
Better wording for the role of BitBake.  Updated shell output for
the clone of poky.

(From yocto-docs rev: 0f7d9557413827f82388d3fe677109074f04e30c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Paul Eggleton
dfecd3e3d7 documentation/yocto-project-qs/yocto-project-qs.xml: Package requirements
The following packages no longer need to be installed on the host
system:

* python-psyco
* help2man
* cvs
* hg

Additionally, linuxdoc-tools was mentioned twice in the Fedora list.

(From yocto-docs rev: bf7f37e040e5d5e19738f4c3a313acfd406351e3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
142de43be2 documentation/dev-manual/dev-manual-common-tasks.xml: Fix customizing example
As suggested by Paul Eggleton and Richard Purdie, the example that
describes another method for creating a custom image was modified
so that it is based on an existing recipe instead of requiring a
new image.

Reported-by: Paul Eggleton <paul.eggleton@linux.intel.com>
(From yocto-docs rev: b5b32be9087c3d1c8e8d97751ce2cce09829f23b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
752c707df3 documentation/poky.ent: Changed "latest" to "current"
Needed to change this so that the manuals will build correctly and
manual links will not point to the "latest" version of the manuals on the
YP website.  This change should have been made prior to the final
1.2 build so that it would have been in the 1.2 tarball.

(From yocto-docs rev: a8615e05aef205629c832041f30c76567d8359bd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 20:57:24 +01:00
Richard Purdie
d20a24310e self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: fd989e1bceef6df36619ba8944c8141abefd282e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:21:45 +01:00
Richard Purdie
8e04664ffd self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 117ca04008415ed0e6e10dcd373ab5f685b3225a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:17:25 +01:00
Dongxiao Xu
3ab5d73f0c sanity.bbclass: Add a new case to issue sanity_check()
Judge if "SanityCheck" event is received, it will issue the
sanity_check() and send "SanityCheckPassed" back if succeeded.

(From OE-Core rev: 19704f9e69ecf09531687385b478b47f49fe372d)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:47 +01:00
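
For readers unfamiliar with BitBake event handlers, a minimal sketch of the kind of
handler this commit describes, in BitBake class syntax. It assumes the
SanityCheck/SanityCheckPassed event classes from the related event.py commit; the
helper name check_sanity is illustrative, not the exact OE-Core change:

    addhandler check_sanity_eventhandler
    python check_sanity_eventhandler() {
        # When a UI fires a SanityCheck event, run the checks and report
        # success back to the UI as a SanityCheckPassed event.
        if bb.event.getName(e) == "SanityCheck":
            check_sanity(e)
            bb.event.fire(bb.event.SanityCheckPassed(), e.data)
    }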
Dongxiao Xu
33f048240d Hob: Issue sanity check after parse is completed
In the original scheme, the sanity check is part of the parsing process.
If the sanity check fails, parsing fails and the values shown in the Hob
GUI may not be correct.

With this commit, Hob will actively issue sanity_check() after the
parsing is completed.

This fixes [YOCTO #2361]

(Bitbake rev: 36968815dcc91759eeacb308bf4b294af416eee5)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:42 +01:00
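
A rough sketch of how a UI can request this from the server, assuming BitBake's
generic "triggerEvent" command; the actual Hob change wires this into its event
handler rather than a standalone function:

    def request_sanity_check(server):
        # 'server' is the UI's connection to the BitBake server.  Ask it to
        # fire a SanityCheck event; the handler in sanity.bbclass runs the
        # checks and fires SanityCheckPassed back to the UI on success.
        server.runCommand(["triggerEvent", "bb.event.SanityCheck()"])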
Dongxiao Xu
0bf04aa4ad Hob: Add proxy setting into setting's md5
If the user changes the proxy setting, reparse the configuration because
it may need a sanity check.

(Bitbake rev: 0be54917cd88ea8f110027a7840ac69a411fd589)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:36 +01:00
Dongxiao Xu
612555e6fe event.py: Add SanityCheck and SanityCheckPassed events
(Bitbake rev: 4d7bf9d813229b78b1cd87d06f7042e7923b7db4)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:29 +01:00
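
The two events carry no payload, so the additions to event.py amount to little
more than the following sketch (docstrings are illustrative):

    class SanityCheck(Event):
        """Request that the metadata's sanity checks be run"""

    class SanityCheckPassed(Event):
        """The sanity checks completed without error"""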
Richard Purdie
35196ff703 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 85bebd85c4f6603ac8fc1290121c34b92cc434f9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:13:00 +01:00
Lianhao Lu
0a48c697d7 pseudo: PR bump.
Bump PR value due to the commit
c6c701f424aeb502d20ff02d02712e56f4e259a5.

(From OE-Core rev: b6ee2880fccf04923ede31256ea418451cbf2e46)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:10:39 +01:00
Scott Rifenbark
7e56770a60 documentation: Updated Manual Revision Tables again.
After some discussion with Song and Richard, the dates in the
manual revision tables have been updated to "April 2012" for the
1.2 release.

(From yocto-docs rev: b3fc2ec7c5aedb8ea0a2d502bdcd7e8f4092ed96)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:16 +01:00
Scott Rifenbark
9a548f0ee4 documentation: Replacements for "1.1" and "edison", etc.
I did a quick and dirty scrub over the manuals for the strings
"1.1" and "edison".  I found some instances that were not properly
variablized.  I also discovered some references to
linux-yocto-3.0-1.1.x.  All but one instance of these needed to be
changed to linux-yocto-3.2.

(From yocto-docs rev: 620fb4b7626defcefc8a039de09ae4599ee7f454)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:09 +01:00
Scott Rifenbark
946c650a47 documentation: Manual Revision Tables updated
Five tables updated for the five manuals that have the tables.
Used "May 2012" as the date.

(From yocto-docs rev: 0d4d46ba300c07ff9c73186506be5b409bef9d1b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:02 +01:00
Scott Rifenbark
f99c947c32 documentation/yocto-project-qs/yocto-project-qs.xml: Added Build Appliance
Added a blurb about the Build Appliance to the start of the QS.

(From yocto-docs rev: b2766121c05740300fd5a6cea2f3b8a2f62db6e5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:55 +01:00
Tom Zanussi
9ffbd2ef22 documentation/dev-manual/dev-manual-common-tasks.xml: removed kernel26
kernel26 is now obsolete, so remove mention of it from the docs.

(From yocto-docs rev: 7b9da106d746192f802095584b04e3ee8347eabd)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:48 +01:00
Scott Rifenbark
66625417b4 documentation/poky-ref-manual/ref-images.xml: added link
Added the link for the Build Appliance page to the description of the
self-hosted image.

(From yocto-docs rev: 719ba4308489b29eefa7f08ddffb65bd5e41fc2c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:40 +01:00
Joshua Lock
e95ce40abd scripts/hob: disable sanity checks when launching
This enables us to use the GUI to change any settings which might cause
sanity checks to fail, such as the proxy configuration.

(From OE-Core rev: fe98d1c7159636f123b27292bbd4cc224b532bf0)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:33 +01:00
Joshua Lock
4a83ebbee0 sanity.bbclass: add variable to disable the sanity checks
It's useful for Hob to be able to disable the sanity checks completely
without marking them as passed so that the user can get into the GUI to
configure their settings, etc.

Add a variable, DISABLE_SANITY_CHECKS, to do so.

(From OE-Core rev: b022641f939bcfcdaddddc4db3af4d2dc70de832)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:26 +01:00
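
A minimal sketch of the resulting guard, assuming the existing check_sanity()
entry point in sanity.bbclass; only the variable name is taken from this commit:

    def check_sanity(e):
        d = e.data
        # Hob's launcher sets DISABLE_SANITY_CHECKS so that a broken setting
        # (e.g. a bad proxy) cannot prevent the GUI from starting.
        if d.getVar("DISABLE_SANITY_CHECKS", True) == "1":
            return
        # ... the normal sanity checks follow here ...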
Richard Purdie
90705b36ad python: Fix various contamination issues leading to broken/missing c modules
The move of libcrypto to /lib instead of /usr/lib has broken the _hashlib module
compilation. There were also a number of other failing modules which should
have been building correctly. This turned out partly to be the /lib issue
but also due to a number of native paths creeping into compiler command lines.

These changes add /lib to the search directories and remove
a number of host contamination issues within setup.py. Post release we
should really go through this file further and just delete large sections
of it, as it's hard to be sure what strange paths python is injecting as
search paths.

This patch also fixes issues where re-execution of the compile task
would corrupt the Makefile in various ways, again leading to puzzling
paths within the configuration.

(From OE-Core rev: 20e2761e1da1cb5dcd267e161f2a6b6a429e9f39)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:19 +01:00
Richard Purdie
be5a5c7e7b bitbake.conf: Add a STAGING_BASELIBDIR variable that recipes can use to find base_libdir
(From OE-Core rev: 4697911991caa2f2a21dd43f54e0c4404d722873)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:12 +01:00
Joshua Lock
64471e9340 hob: enable sanity checks after launch
To ensure the user's configuration is sanity tested, enable the sanity
checks after the GUI has started but before any parsing is done.

(Bitbake rev: 244ce2b900ae6cecbeeccfe2056e61c132476261)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:05 +01:00
Richard Purdie
4becd60e65 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: b19af63)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:08:34 +01:00
Richard Purdie
9fcfda78b9 poky-tiny: Drop now unneeded DISTRO_FEATURES_LIBC_TOOLCHAIN (after gettext fix)
After the recent gettext dependency fix (commit 6e5cb40dfa
"gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values"),
it's no longer necessary to have these options to build meta-toolchain.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:05:58 +01:00
Richard Purdie
8558c3e1f4 initramfs-live-boot: Disable unionfs until its issues with the system rootdir are resolved
There are issues with the current unionfs when making a union mount over "/".
Until these are resolved, we can't use unionfs for live booting, so disable it
temporarily as a workaround.

unionfs is usable in other circumstances.

[YOCTO #2331 workaround]

(From OE-Core rev: 60ee26ae23132b916019d58e20b8c2e1ddd2b471)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
0bfb42dbb6 pseudo: Drop nativesdk wrapper and link against old memcpy symbol
The -nativesdk pseudo wrapper setting LD_LIBRARY_PATH turned out to be a
bad idea since it can mix up different libc and libdl versions, which
may or may not work depending on the phase of the moon.

As an alternative to solving the original problem, this patch drops the
symbol version requirement on memcpy, which allows pseudo to work with
libc versions back to 2.7, which should be sufficient for our supported targets
using nativesdk.

[YOCTO #2299]
[YOCTO #2351]

(From OE-Core rev: c6c701f424aeb502d20ff02d02712e56f4e259a5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
37b069ea5d pseudo: Fix bashisms
(From OE-Core rev: 90e22bbb316088fa951d51e75de4e5424bd51ed6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
6d7260e8f6 package.bbclass: Ensure kernel modules get stripped
Kernel modules are not marked as executable, but we do expect to strip them.
This patch adds the missing code to ensure we do this. Without it, images
get significantly bloated in size.

(From OE-Core rev: 00b0a5f2f51bb3f88bbb9ae558c2859e3c1c406c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
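
A self-contained sketch of the idea; this is not OE-Core's actual strip helper,
just an illustration of keying on the .ko suffix instead of the executable bit:

    import os
    import subprocess

    def strip_kernel_modules(pkgdest, strip_cmd="strip"):
        # Kernel modules are not marked executable, so a strip pass that only
        # checks the executable bit misses them; match the .ko suffix as well.
        # --strip-debug keeps the symbols the module loader still needs.
        for root, _, files in os.walk(pkgdest):
            for name in files:
                path = os.path.join(root, name)
                if name.endswith(".ko") and not os.path.islink(path):
                    subprocess.check_call([strip_cmd, "--strip-debug", path])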
Richard Purdie
236bda9ed6 gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values
In particular, this overwrites the value from cross-canadian.bbclass in
some cases which isn't the desired behaviour and unnecessarily
complicates/breaks the dependency chain.

(From OE-Core rev: 751ead4fa7d4120de906a1d9cb1d5a29357bebad)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Richard Purdie
375835092c qemu: Backport a patch to solve SSE2 instruction emulation issues
This fix addresses various issues seen in qemux86-64 images:
 * scroll bars in matchbox-terminal not working
 * files not appearing in pcmanfm
 * warnings on the console from glib/gobject about invalid gdouble values

It's due to an emulation issue in qemu which the backported patch fixes.

I managed to debug it to a specific function, Khem found the qemu patch
to backport, thanks Khem!

[YOCTO #1906]

(From OE-Core rev: 69d083f8b8d8f7d095ed5682d305870c4d93fe62)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Scott Rifenbark
2c3d4f5bee documentation/bsp-guide/bsp.xml: spelling corrected.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: c0ee8ce391114f7a5b4f1c59fdf997ba4f3bcf75)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:15 +01:00
Scott Rifenbark
45114a9df0 documentation/poky-ref-manual/ref-images.xml: Added self-hosted image
I added the self-hosted-image to the list of images.

(From yocto-docs rev: a8265cb523705a374d23bf60aab5b7969ad937fc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:05 +01:00
Richard Purdie
08290c6003 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 00256125873ff6f1630743a712e882e5f473a9d2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 15:59:56 +01:00
Elizabeth Flanagan
729e7f774c distro.conf: Flipping for denzil
Flipping values in distro.conf for the upcoming release

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-04-18 15:55:32 +01:00
2226 changed files with 158289 additions and 73232 deletions

.gitignore
View File

@@ -1,17 +1,41 @@
*.pyc
*.pyo
/*.patch
build*/
build*/conf/local.conf
build*/conf/bblayers.conf
build*/downloads
build*/tmp/
build*/sstate-cache
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*
!meta-skeleton
!meta-hob
!meta-demoapps
*.swp
*.orig
*.rej
*~
!meta-yocto
!meta-yocto-bsp
documentation/adt-manual/adt-manual.html
documentation/adt-manual/adt-manual.pdf
documentation/adt-manual/adt-manual.tgz
documentation/dev-manual/dev-manual.html
documentation/dev-manual/dev-manual.pdf
documentation/dev-manual/dev-manual.tgz
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf
documentation/mega-manual/mega-manual.html
documentation/mega-manual/mega-manual.pdf
documentation/mega-manual/mega-manual.tgz

README
View File

@@ -18,7 +18,7 @@ e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/documentation
http://yoctoproject.org/community/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
@@ -27,23 +27,3 @@ DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
Where to Send Patches
=====================
As Poky is an integration repository, patches against the various components
should be sent to their respective upstreams.
bitbake:
bitbake-devel@lists.openembedded.org
meta-yocto:
poky@yoctoproject.org
Most everything else should be sent to the OpenEmbedded Core mailing list. If
in doubt, check the oe-core git repository for the content you intend to modify.
Before sending, be sure the patches apply cleanly to the current oe-core git
repository.
openembedded-core@lists.openembedded.org
Note: The scripts directory should be treated with extra care as it is a mix
of oe-core and poky-specific files.

View File

@@ -40,17 +40,9 @@ from bb import cooker
from bb import ui
from bb import server
__version__ = "1.16.0"
__version__ = "1.15.2"
logger = logging.getLogger("BitBake")
# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
class BBConfiguration(object):
"""
@@ -64,11 +56,10 @@ class BBConfiguration(object):
def get_ui(config):
if not config.ui:
# modify 'ui' attribute because it is also read by cooker
config.ui = os.environ.get('BITBAKE_UI', 'knotty')
interface = config.ui
if config.ui:
interface = config.ui
else:
interface = 'knotty'
try:
# Dynamically load the UI based on the ui name. Although we
@@ -78,7 +69,7 @@ def get_ui(config):
return getattr(module, interface).main
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default]." % interface)
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default], knotty2." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
@@ -126,9 +117,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified cmd such as 'compile' and run the default task for the specified target(s)",
action = "store", dest = "invalidate_stamp")
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "prefile", default = [])
@@ -150,13 +138,13 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all recipes",
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax, and the pn-buildlist to show the build list",
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
@@ -182,8 +170,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to",
action = "store", dest = "bind", default = False)
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks, forces builds",
action = "store_true", dest = "nosetscene", default = False)
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)

View File

@@ -1,102 +1,12 @@
#!/usr/bin/env python
# bitbake-diffsigs
# BitBake task signature data comparison utility
#
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import warnings
import fnmatch
import optparse
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
logger = logging.getLogger('BitBake')
def find_compare_task(bbhandler, pn, taskname):
""" Find the most recent signature files for the specified PN/task and compare them """
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
else:
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
return recout
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print '\n'.join(output)
sys.exit(0)
parser = optparse.OptionParser(
usage = """
%prog -t recipename taskname
%prog sigdatafile1 sigdatafile2
%prog sigdatafile1""")
parser.add_option("-t", "--task",
help = "find the signature data files for last two runs of the specified task and compare them",
action="store_true", dest="taskmode")
options, args = parser.parse_args(sys.argv)
if len(args) == 1:
parser.print_help()
if len(sys.argv) > 2:
bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
tinfoil = bb.tinfoil.Tinfoil()
if options.taskmode:
if len(args) < 3:
logger.error("Please specify a recipe and task name")
sys.exit(1)
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, args[1], args[2])
else:
if len(args) == 2:
output = bb.siggen.dump_sigfile(sys.argv[1])
else:
output = bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
if output:
print '\n'.join(output)
bb.siggen.dump_sigfile(sys.argv[1])

View File

@@ -6,6 +6,4 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), '
import bb.siggen
output = bb.siggen.dump_sigfile(sys.argv[1])
if output:
print '\n'.join(output)
bb.siggen.dump_sigfile(sys.argv[1])

View File

@@ -6,22 +6,10 @@
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import cmd
import logging
import warnings
import os
import sys
import fnmatch
@@ -35,14 +23,26 @@ import bb.cache
import bb.cooker
import bb.providers
import bb.utils
import bb.tinfoil
from bb.cooker import state
import bb.fetch2
logger = logging.getLogger('BitBake')
warnings.filterwarnings("ignore", category=DeprecationWarning)
def main(args):
cmds = Commands()
# Set up logging
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
cmds = Commands(initialenv)
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
@@ -53,11 +53,42 @@ def main(args):
class Commands(cmd.Cmd):
def __init__(self):
def __init__(self, initialenv):
cmd.Cmd.__init__(self)
self.bbhandler = bb.tinfoil.Tinfoil()
self.returncode = 0
self.bblayers = (self.bbhandler.config_data.getVar('BBLAYERS', True) or "").split()
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
self.bblayers = (self.config_data.getVar('BBLAYERS', True) or "").split()
def register_idle_function(self, function, data):
pass
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
self.cooker_data.appends = self.cooker.appendlist
def check_prepare_cooker(self):
if not self.cooker_data:
self.prepare_cooker()
def default(self, line):
"""Handle unrecognised commands"""
@@ -82,13 +113,14 @@ class Commands(cmd.Cmd):
def do_show_layers(self, args):
"""show current configured layers"""
self.bbhandler.prepare(config_only = True)
self.check_prepare_cooker()
logger.plain('')
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.bbhandler.cooker.status.bbfile_config_priorities:
for layer, _, regex, pri in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
@@ -106,7 +138,7 @@ class Commands(cmd.Cmd):
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer)
"""list overlayed recipes (where the same recipe exists in another layer that has a higher layer priority)
usage: show-overlayed [-f] [-s]
@@ -119,7 +151,7 @@ Options:
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.bbhandler.prepare()
self.check_prepare_cooker()
show_filenames = False
show_same_ver_only = False
@@ -151,7 +183,7 @@ Options:
# factor - however, each layer.conf is free to either prepend or append to
# BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
# not be exactly the order present in bblayers.conf either.
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
bbpath = str(self.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
@@ -202,7 +234,7 @@ Options:
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.bbhandler.prepare()
self.check_prepare_cooker()
show_filenames = False
show_multi_provider_only = False
@@ -224,15 +256,15 @@ Options:
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.bbhandler.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.bbhandler.cooker.configuration.data, self.bbhandler.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.bbhandler.cooker.status)
pkg_pn = self.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.cooker.configuration.data, self.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.cooker.status)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.bbhandler.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.bbhandler.cooker.calc_bbfile_priority(fileitem) )
skiplist = self.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.cooker.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
@@ -340,7 +372,7 @@ build results (as the layer priority order has effectively changed).
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.bbhandler.prepare()
self.check_prepare_cooker()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
@@ -370,8 +402,8 @@ build results (as the layer priority order has effectively changed).
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.bbhandler.cooker.overlayed.iterkeys():
for of in self.bbhandler.cooker.overlayed[f]:
for f in self.cooker.overlayed.iterkeys():
for of in self.cooker.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
@@ -395,8 +427,8 @@ build results (as the layer priority order has effectively changed).
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.bbhandler.cooker.appendlist:
appends = self.bbhandler.cooker.appendlist[f1]
if f1 in self.cooker_data.appends:
appends = self.cooker_data.appends[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
@@ -405,9 +437,9 @@ build results (as the layer priority order has effectively changed).
appended_recipes.append(f1)
# Take care of when some layers are excluded and yet we have included bbappends for those recipes
for recipename in self.bbhandler.cooker.appendlist.iterkeys():
for recipename in self.cooker_data.appends.iterkeys():
if recipename not in appended_recipes:
appends = self.bbhandler.cooker.appendlist[recipename]
appends = self.cooker_data.appends[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
@@ -425,14 +457,14 @@ build results (as the layer priority order has effectively changed).
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.bbhandler.config_data.getVar('BBFILES', True)).split()
bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
@@ -455,7 +487,7 @@ build results (as the layer priority order has effectively changed).
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')):
@@ -481,14 +513,14 @@ usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.bbhandler.prepare()
if not self.bbhandler.cooker.appendlist:
self.check_prepare_cooker()
if not self.cooker_data.appends:
logger.plain('No append files found')
return
logger.plain('=== Appended recipes ===')
logger.plain('State of append files:')
pnlist = list(self.bbhandler.cooker_data.pkg_pn.keys())
pnlist = list(self.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
@@ -496,19 +528,19 @@ Recipes are listed with the bbappends that apply to them as subitems.
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.bbhandler.cooker_data.pkg_pn[pn]
filenames = self.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.bbhandler.cooker.configuration.data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
self.cooker.configuration.data,
self.cooker_data,
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.bbhandler.cooker.skiplist.iterkeys()]
for f in self.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
@@ -534,7 +566,7 @@ Recipes are listed with the bbappends that apply to them as subitems.
continue
basename = os.path.basename(filename)
appends = self.bbhandler.cooker.appendlist.get(basename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.append((basename, list(appends)))
else:
@@ -542,5 +574,22 @@ Recipes are listed with the bbappends that apply to them as subitems.
return appended, notappended
class Config(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(Config, self).__getattribute__(attribute)
except AttributeError:
return None
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)

View File

@@ -1,38 +0,0 @@
#!/usr/bin/env python
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.utils"]
for t in tests:
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests)

View File

@@ -1,120 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( \
os.path.abspath(__file__))), 'lib'))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
import gtk
import optparse
import pygtk
from bb.ui.crumbs.hig import DeployImageDialog, ImageSelectionDialog, CrumbsMessageDialog
from bb.ui.crumbs.hobwidget import HobAltButton, HobButton
# I put all the fs bitbake supported here. Need more test.
DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "btrfs", "squashfs", "ubi", "vmdk"]
Title = "USB Image Writer"
class DeployWindow(gtk.Window):
def __init__(self, image_path=''):
super(DeployWindow, self).__init__()
if len(image_path) > 0:
valid = True
if not os.path.exists(image_path):
valid = False
lbl = "<b>Invalid image file path: %s.</b>\nPress <b>Select Image</b> to select an image." % image_path
else:
image_path = os.path.abspath(image_path)
extend_name = os.path.splitext(image_path)[1][1:]
if extend_name not in DEPLOYABLE_IMAGE_TYPES:
valid = False
lbl = "<b>Undeployable imge type: %s</b>\nPress <b>Select Image</b> to select an image." % extend_name
if not valid:
image_path = ''
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
self.deploy_dialog = DeployImageDialog(Title, image_path, self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR, None, standalone=True)
close_button = self.deploy_dialog.add_button("Close", gtk.RESPONSE_NO)
HobAltButton.style_button(close_button)
close_button.connect('clicked', gtk.main_quit)
write_button = self.deploy_dialog.add_button("Write USB image", gtk.RESPONSE_YES)
HobAltButton.style_button(write_button)
self.deploy_dialog.connect('select_image_clicked', self.select_image_clicked_cb)
self.deploy_dialog.connect('destroy', gtk.main_quit)
response = self.deploy_dialog.show()
def select_image_clicked_cb(self, dialog):
cwd = os.getcwd()
dialog = ImageSelectionDialog(cwd, DEPLOYABLE_IMAGE_TYPES, Title, self, gtk.FILE_CHOOSER_ACTION_SAVE )
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobAltButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nClicked the radio button to select a image."
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
# get the full path of image
image_path = os.path.join(dialog.image_folder, dialog.image_names[0])
self.deploy_dialog.set_image_text_buffer(image_path)
self.deploy_dialog.set_image_path(image_path)
dialog.destroy()
def main():
parser = optparse.OptionParser(
usage = """%prog [-h] [image_file]
%prog writes bootable images to USB devices. You can
provide the image file on the command line or select it using the GUI.""")
options, args = parser.parse_args(sys.argv)
image_file = args[1] if len(args) > 1 else ''
dw = DeployWindow(image_file)
if __name__ == '__main__':
try:
main()
gtk.main()
except Exception:
import traceback
traceback.print_exc(3)

View File

@@ -1,68 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# This is used for dumping the bb_cache.dat, the output format is:
# recipe_path PN PV PACKAGES
#
import os
import sys
import warnings
# For importing bb.cache
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import cPickle as pickle
def main(argv=None):
"""
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print >>sys.stderr, "Error, need one argument!"
return 2
cachefile = argv[0]
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
val = pickled.load()
except Exception:
break
if isinstance(val, CoreRecipeInfo) and (not val.skipped):
pn = val.pn
# Filter out the native recipes.
if key.startswith('virtual:native:') or pn.endswith("-native"):
continue
# 1.0 is the default version for a no PV recipe.
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"
print("%s %s %s %s" % (key, pn, pv, ' '.join(val.packages)))
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))

View File

@@ -103,13 +103,7 @@ Show debug logging for the specified logging domains
.TP
.B \-P, \-\-profile
profile the command and print a report
.SH ENVIRONMENT VARIABLES
bitbake uses the following environment variables to control its
operation:
.TP
.B BITBAKE_UI
The bitbake user interface; overridden by the \fB-u\fP commandline option.
.SH AUTHORS
BitBake was written by

View File

@@ -228,7 +228,7 @@ addtask printdate before do_build</screen></para>
<para>'nostamp' - don't generate a stamp file for a task. This means the task is always rexecuted.</para>
<para>'fakeroot' - this task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
<para>'umask' - the umask to run the task under.</para>
<para> For the 'deptask', 'rdeptask', 'depends', 'rdepends' and 'recrdeptask' flags please see the dependencies section.</para>
<para> For the 'deptask', 'rdeptask', 'recdeptask' and 'recrdeptask' flags please see the dependencies section.</para>
</section>
<section>
@@ -308,35 +308,37 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>Dependency handling</title>
<para>BitBake handles dependencies at the task level since to allow for efficient operation with multiple processed executing in parallel. A robust method of specifying task dependencies is therefore needed. </para>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>Build Dependencies</title>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>Runtime Dependencies</title>
<para>The PACKAGES variable lists runtime packages and each of these can have RDEPENDS and RRECOMMENDS runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item runtime dependency which must have completed before that task can be executed.</para>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive Dependencies</title>
<para>These are specified with the 'recrdeptask' flag which is used signify the task(s) of dependencies which must have completed before that task can be executed. It works by looking though the build and runtime dependencies of the current recipe as well as any inter-task dependencies the task has, then adding a dependency on the listed task. It will then recurse through the dependencies of those tasks and so on.</para>
<para>It may be desireable to recurse not just through the dependencies of those tasks but through the build and runtime dependencies of dependent tasks too. If that is the case, the taskname itself should be referenced in the task list, e.g. do_a[recrdeptask] = "do_a do_b".</para>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first.</para>
</section>
<section>
<title>Inter task</title>
<para>The 'depends' flag for tasks is a more generic form of which allows an interdependency on specific tasks rather than specifying the data in DEPENDS.</para>
<para>The 'depends' flag for tasks is a more generic form of which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
<para>The 'rdepends' flag works in a similar way but takes targets in the runtime namespace instead of the build time dependency namespace.</para>
</section>
</section>

View File

@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.16.0"
__version__ = "1.15.2"
import sys
if sys.version_info < (2, 6, 0):

View File

@@ -72,7 +72,7 @@ class TaskBase(event.Event):
self._task = t
self._package = d.getVar("PF", True)
event.Event.__init__(self)
self._message = "recipe %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
self._message = "package %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
def getTask(self):
return self._task
@@ -135,8 +135,7 @@ class LogTee(object):
def __repr__(self):
return '<LogTee {0}>'.format(self.name)
def flush(self):
self.outfile.flush()
def exec_func(func, d, dirs = None):
"""Execute an BB 'function'"""
@@ -175,19 +174,8 @@ def exec_func(func, d, dirs = None):
lockfiles = None
tempdir = data.getVar('T', d, 1)
# or func allows items to be executed outside of the normal
# task set, such as buildhistory
task = data.getVar('BB_RUNTASK', d, 1) or func
if task == func:
taskfunc = task
else:
taskfunc = "%s.%s" % (task, func)
runfmt = data.getVar('BB_RUNFMT', d, 1) or "run.{func}.{pid}"
runfn = runfmt.format(taskfunc=taskfunc, task=task, func=func, pid=os.getpid())
runfile = os.path.join(tempdir, runfn)
bb.utils.mkdirhier(os.path.dirname(runfile))
bb.utils.mkdirhier(tempdir)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))
with bb.utils.fileslocked(lockfiles):
if ispython:
@@ -218,8 +206,6 @@ def exec_func_python(func, d, runfile, cwd=None):
olddir = None
os.chdir(cwd)
bb.debug(2, "Executing python function %s" % func)
try:
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
@@ -229,15 +215,13 @@ def exec_func_python(func, d, runfile, cwd=None):
raise FuncFailed(func, None)
finally:
bb.debug(2, "Python function %s finished" % func)
if cwd and olddir:
try:
os.chdir(olddir)
except OSError:
pass
def exec_func_shell(func, d, runfile, cwd=None):
def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata
Note on directory behavior. The 'dirs' varflag should contain a list
@@ -250,18 +234,18 @@ def exec_func_shell(func, d, runfile, cwd=None):
with open(runfile, 'w') as script:
script.write('#!/bin/sh -e\n')
data.emit_func(func, script, d)
data.emit_func(function, script, d)
if bb.msg.loggerVerboseLogs:
script.write("set -x\n")
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % func)
script.write("%s\n" % function)
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot'):
if d.getVarFlag(function, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -271,15 +255,11 @@ def exec_func_shell(func, d, runfile, cwd=None):
else:
logfile = sys.stdout
bb.debug(2, "Executing shell function %s" % func)
try:
bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
bb.debug(2, "Shell function %s finished" % func)
raise FuncFailed(function, logfn)
def _task_data(fn, task, d):
localdata = data.createCopy(d)
@@ -310,23 +290,8 @@ def _exec_task(fn, task, d, quieterr):
bb.fatal("T variable not set, unable to build")
bb.utils.mkdirhier(tempdir)
# Determine the logfile to generate
logfmt = localdata.getVar('BB_LOGFMT', True) or 'log.{task}.{pid}'
logbase = logfmt.format(task=task, pid=os.getpid())
# Document the order of the tasks...
logorder = os.path.join(tempdir, 'log.task_order')
try:
logorderfile = file(logorder, 'a')
except OSError:
logger.exception("Opening log file '%s'", logorder)
pass
logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
logorderfile.close()
# Setup the courtesy link to the logfn
loglink = os.path.join(tempdir, 'log.{0}'.format(task))
logbase = 'log.{0}.{1}'.format(task, os.getpid())
logfn = os.path.join(tempdir, logbase)
if loglink:
bb.utils.remove(loglink)
@@ -349,7 +314,6 @@ def _exec_task(fn, task, d, quieterr):
# Handle logfiles
si = file('/dev/null', 'r')
try:
bb.utils.mkdirhier(os.path.dirname(logfn))
logfile = file(logfn, 'w')
except OSError:
logger.exception("Opening log file '%s'", logfn)
@@ -376,7 +340,6 @@ def _exec_task(fn, task, d, quieterr):
bblogger.addHandler(errchk)
localdata.setVar('BB_LOGFILE', logfn)
localdata.setVar('BB_RUNTASK', task)
event.fire(TaskStarted(task, localdata), localdata)
try:
@@ -464,46 +427,15 @@ def stamp_internal(taskname, d, file_name):
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if bb.parse.cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
bb.utils.mkdirhier(os.path.dirname(stamp))
return stamp
def stamp_cleanmask_internal(taskname, d, file_name):
"""
Internal stamp helper function to generate stamp cleaning mask
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp_base_clean[file_name].get(taskflagname) or d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVarFlag(taskflagname, 'stamp-base-clean', True) or d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return
return bb.parse.siggen.stampcleanmask(stamp, file_name, taskname, extrainfo)
def make_stamp(task, d, file_name = None):
"""
Creates/updates a stamp for a given task
(d can be a data dict or dataCache)
"""
cleanmask = stamp_cleanmask_internal(task, d, file_name)
if cleanmask:
bb.utils.remove(cleanmask)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
@@ -575,7 +507,6 @@ def add_tasks(tasklist, d):
deptask = data.expand(flags[name], d)
task_deps[name][task] = deptask
getTask('depends')
getTask('rdepends')
getTask('deptask')
getTask('rdeptask')
getTask('recrdeptask')

View File

@@ -1,12 +1,11 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Cache implementation
# BitBake 'Event' implementation
#
# Caching of bitbake variables before task execution
# Copyright (C) 2006 Richard Purdie
# Copyright (C) 2012 Intel Corporation
# but small sections based on code from bin/bitbake:
# Copyright (C) 2003, 2004 Chris Larson
@@ -43,7 +42,7 @@ except ImportError:
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "145"
__cache_version__ = "143"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -76,13 +75,9 @@ class RecipeInfoCommon(object):
for task in tasks)
@classmethod
def flaglist(cls, flag, varlist, metadata, squash=False):
out_dict = dict((var, metadata.getVarFlag(var, flag, True))
def flaglist(cls, flag, varlist, metadata):
return dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
if squash:
return dict((k,v) for (k,v) in out_dict.iteritems() if v)
else:
return out_dict
@classmethod
def getvar(cls, var, metadata):
@@ -130,11 +125,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.broken = self.getvar('BROKEN', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_base_clean = self.flaglist('stamp-base-clean', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.depends = self.depvar('DEPENDS', metadata)
self.provides = self.depvar('PROVIDES', metadata)
@@ -159,11 +151,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_dp = {}
cachedata.stamp = {}
cachedata.stampclean = {}
cachedata.stamp_base = {}
cachedata.stamp_base_clean = {}
cachedata.stamp_extrainfo = {}
cachedata.file_checksums = {}
cachedata.fn_provides = {}
cachedata.pn_provides = defaultdict(list)
cachedata.all_depends = []
@@ -193,11 +182,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stampclean[fn] = self.stampclean
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_base_clean[fn] = self.stamp_base_clean
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
cachedata.file_checksums[fn] = self.file_checksums
provides = [self.pn]
for provide in self.provides:
@@ -717,115 +703,4 @@ class CacheData(object):
for info in info_array:
info.add_cacheData(self, fn)
class MultiProcessCache(object):
"""
BitBake multi-process cache implementation
Used by the codeparser & file checksum caches
"""
def __init__(self):
self.cachefile = None
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir, self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
data, version = p.load()
except:
return
if version != self.__class__.CACHE_VERSION:
return
self.cachedata = data
def internSet(self, items):
new = set()
for i in items:
new.add(intern(i))
return new
def compress_keys(self, data):
# Override in subclasses if desired
return
def create_cachedata(self):
data = [{}]
return data
def save_extras(self, d):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
lf = bb.utils.lockfile(self.cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(self.cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
p = pickle.Pickler(file(self.cachefile + "-" + str(i), "wb"), -1)
p.dump([self.cachedata_extras, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def merge_data(self, source, dest):
for j in range(0,len(dest)):
for h in source[j]:
if h not in dest[j]:
dest[j][h] = source[j][h]
def save_merge(self, d):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != self.__class__.CACHE_VERSION:
data = self.create_cachedata()
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = self.create_cachedata(), None
if version != self.__class__.CACHE_VERSION:
continue
self.merge_data(extradata, data)
os.unlink(f)
self.compress_keys(data)
p = pickle.Pickler(file(self.cachefile, "wb"), -1)
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
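
As a usage sketch of the MultiProcessCache base class above (the class name, file name and keys are illustrative, not part of the diff): subclasses pin a cache file name and version, read through cachedata, and push new entries into cachedata_extras so each process can dump them with save_extras() and a single process can fold them back in with save_merge().

    from bb.cache import MultiProcessCache

    class ExampleCache(MultiProcessCache):
        # Hypothetical subclass: a single dict mapping key -> value.
        cache_file_name = "example_cache.dat"
        CACHE_VERSION = 1

        def get(self, key):
            # Look in the shared data loaded by init_cache() first, then in
            # the entries this process has added itself.
            if key in self.cachedata[0]:
                return self.cachedata[0][key]
            return self.cachedata_extras[0].get(key)

        def set(self, key, value):
            # New entries go to the per-process store; save_extras() writes
            # them to a per-PID file and save_merge() merges those files.
            self.cachedata_extras[0][key] = value

The codeparser and file checksum caches further down follow exactly this lifecycle: init_cache(d) once after configuration is parsed, save_extras(d) in each parser process, save_merge(d) once at the end of parsing.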


@@ -1,90 +0,0 @@
# Local file checksum cache implementation
#
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import stat
import bb.utils
import logging
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
cache = {}
def cached_mtime(self, f):
if f not in self.cache:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def cached_mtime_noerror(self, f):
if f not in self.cache:
try:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return self.cache[f]
def update_mtime(self, f):
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def clear(self):
self.cache.clear()
# Checksum + mtime cache (persistent)
class FileChecksumCache(MultiProcessCache):
cache_file_name = "local_file_checksum_cache.dat"
CACHE_VERSION = 1
def __init__(self):
self.mtime_cache = FileMtimeCache()
MultiProcessCache.__init__(self)
def get_checksum(self, f):
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
(mtime, hashval) = entry
if cmtime == mtime:
return hashval
else:
bb.debug(2, "file %s changed mtime, recompute checksum" % f)
hashval = bb.utils.md5_file(f)
self.cachedata_extras[0][f] = (cmtime, hashval)
return hashval
def merge_data(self, source, dest):
for h in source[0]:
if h in dest:
(smtime, _) = source[0][h]
(dmtime, _) = dest[0][h]
if smtime > dmtime:
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]
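
A minimal sketch of how the checksum cache above is driven (the path is illustrative and d is assumed to be an already parsed datastore): get_checksum() returns the cached md5 when the file's mtime is unchanged and recomputes it otherwise, recording new results in the per-process extras.

    from bb.checksum import FileChecksumCache

    cache = FileChecksumCache()
    cache.init_cache(d)          # needs PERSISTENT_DIR or CACHE set in d
    csum = cache.get_checksum("/srv/downloads/example.patch")  # illustrative path
    cache.save_extras(d)         # dump this process's new entries
    cache.save_merge(d)          # fold all per-process dumps into the main file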


@@ -5,10 +5,10 @@ import os.path
import bb.utils, bb.data
from itertools import chain
from pysh import pyshyacc, pyshlex, sherrors
from bb.cache import MultiProcessCache
logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
@@ -32,56 +32,133 @@ def check_indent(codestr):
return codestr
pythonparsecache = {}
shellparsecache = {}
pythonparsecacheextras = {}
shellparsecacheextras = {}
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
CACHE_VERSION = 2
def __init__(self):
MultiProcessCache.__init__(self)
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
self.pythoncacheextras = self.cachedata_extras[0]
self.shellcacheextras = self.cachedata_extras[1]
def init_cache(self, d):
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
def compress_keys(self, data):
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
memory. This is worth any performance hit from these loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = self.internSet(data[0][h]["refs"])
data[0][h]["execs"] = self.internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = self.internSet(data[1][h]["execs"])
return
def create_cachedata(self):
data = [{}, {}]
return data
codeparsercache = CodeParserCache()
def parser_cachefile(d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
return cachefile
def parser_cache_init(d):
codeparsercache.init_cache(d)
global pythonparsecache
global shellparsecache
cachefile = parser_cachefile(d)
if not cachefile:
return
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except:
return
if version != PARSERCACHE_VERSION:
return
pythonparsecache = data[0]
shellparsecache = data[1]
def parser_cache_save(d):
codeparsercache.save_extras(d)
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
shellcache = {}
pythoncache = {}
lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
shellcache = shellparsecacheextras
pythoncache = pythonparsecacheextras
p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def internSet(items):
new = set()
for i in items:
new.add(intern(i))
return new
def parser_cache_savemerge(d):
codeparsercache.save_merge(d)
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock")
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != PARSERCACHE_VERSION:
data = [{}, {}]
for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
f = os.path.join(os.path.dirname(cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = [{}, {}], None
if version != PARSERCACHE_VERSION:
continue
for h in extradata[0]:
if h not in data[0]:
data[0][h] = extradata[0][h]
for h in extradata[1]:
if h not in data[1]:
data[1][h] = extradata[1][h]
os.unlink(f)
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
memory. This is worth any performance hit from these loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = internSet(data[0][h]["refs"])
data[0][h]["execs"] = internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = internSet(data[1][h]["execs"])
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([data, PARSERCACHE_VERSION])
bb.utils.unlockfile(glf)
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -158,14 +235,14 @@ class PythonParser():
def parse_python(self, node):
h = hash(str(node))
if h in codeparsercache.pythoncache:
self.references = codeparsercache.pythoncache[h]["refs"]
self.execs = codeparsercache.pythoncache[h]["execs"]
if h in pythonparsecache:
self.references = pythonparsecache[h]["refs"]
self.execs = pythonparsecache[h]["execs"]
return
if h in codeparsercache.pythoncacheextras:
self.references = codeparsercache.pythoncacheextras[h]["refs"]
self.execs = codeparsercache.pythoncacheextras[h]["execs"]
if h in pythonparsecacheextras:
self.references = pythonparsecacheextras[h]["refs"]
self.execs = pythonparsecacheextras[h]["execs"]
return
@@ -179,9 +256,9 @@ class PythonParser():
self.references.update(self.var_references)
self.references.update(self.var_execs)
codeparsercache.pythoncacheextras[h] = {}
codeparsercache.pythoncacheextras[h]["refs"] = self.references
codeparsercache.pythoncacheextras[h]["execs"] = self.execs
pythonparsecacheextras[h] = {}
pythonparsecacheextras[h]["refs"] = self.references
pythonparsecacheextras[h]["execs"] = self.execs
class ShellParser():
def __init__(self, name, log):
@@ -199,12 +276,12 @@ class ShellParser():
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = codeparsercache.shellcache[h]["execs"]
if h in shellparsecache:
self.execs = shellparsecache[h]["execs"]
return self.execs
if h in codeparsercache.shellcacheextras:
self.execs = codeparsercache.shellcacheextras[h]["execs"]
if h in shellparsecacheextras:
self.execs = shellparsecacheextras[h]["execs"]
return self.execs
try:
@@ -216,8 +293,8 @@ class ShellParser():
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
codeparsercache.shellcacheextras[h] = {}
codeparsercache.shellcacheextras[h]["execs"] = self.execs
shellparsecacheextras[h] = {}
shellparsecacheextras[h]["execs"] = self.execs
return self.execs
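
For reference, a short sketch of how callers use the python parser and its cache (the logger name and code snippet are illustrative); this is essentially what build_dependencies() in the data module does further down in this comparison:

    import logging
    import bb.codeparser

    logger = logging.getLogger("BitBake.Example")
    parser = bb.codeparser.PythonParser("do_example", logger)
    parser.parse_python('d.getVar("FOO", True)')
    print(parser.references)   # variables the snippet reads, e.g. FOO
    print(parser.execs)        # names of functions the snippet executes

parse_python() consults the class-based cache (or the module-level pythonparsecache dict in the variant without MultiProcessCache) before doing any work, and stores fresh results in the extras dict so they can be saved and merged as shown above.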


@@ -44,6 +44,9 @@ class CommandFailed(CommandExit):
self.error = message
CommandExit.__init__(self, 1)
class CommandError(Exception):
pass
class Command:
"""
A queue of asynchronous commands for bitbake
@@ -57,21 +60,26 @@ class Command:
self.currentAsyncCommand = None
def runCommand(self, commandline):
try:
command = commandline.pop(0)
if command in CommandsSync.__dict__:
# Can run synchronous commands straight away
return getattr(CommandsSync, command)(self.cmds_sync, self, commandline)
if self.currentAsyncCommand is not None:
return "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
return True
except:
import traceback
return traceback.format_exc()
command = commandline.pop(0)
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
command_method = getattr(self.cmds_sync, command)
try:
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except Exception:
import traceback
return None, traceback.format_exc()
else:
return result, None
if self.currentAsyncCommand is not None:
return None, "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return None, "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self):
try:
@@ -139,7 +147,13 @@ class CommandsSync:
"""
Get any command parsed from the commandline
"""
return command.cooker.commandlineAction
cmd_action = command.cooker.commandlineAction
if cmd_action is None:
return None
elif 'msg' in cmd_action and cmd_action['msg']:
raise CommandError(cmd_action['msg'])
else:
return cmd_action['action']
def getVariable(self, command, params):
"""
@@ -157,7 +171,7 @@ class CommandsSync:
Set the value of variable in configuration.data
"""
varname = params[0]
value = str(params[1])
value = params[1]
command.cooker.configuration.data.setVar(varname, value)
def initCooker(self, command, params):
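
To illustrate the error path added above: a synchronous command can now signal failure by raising CommandError, which runCommand() turns into the (result, error) pair instead of letting the server crash. The method below is hypothetical and only shows the pattern:

    from bb.command import CommandError

    # Hypothetical method on CommandsSync, not part of the diff above
    def getRequiredSetting(self, command, params):
        """Return the value of a configuration variable, failing cleanly if unset."""
        value = command.cooker.configuration.data.getVar(params[0], True)
        if value is None:
            raise CommandError("%s is not set" % params[0])
        return value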


@@ -1,11 +1,5 @@
"""Code pulled from future python versions, here for compatibility"""
from collections import MutableMapping, KeysView, ValuesView, ItemsView
try:
from thread import get_ident as _get_ident
except ImportError:
from dummy_thread import get_ident as _get_ident
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
@@ -32,210 +26,3 @@ def total_ordering(cls):
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
class OrderedDict(dict):
'Dictionary that remembers insertion order'
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# Each link is stored as a list of length three: [PREV, NEXT, KEY].
def __init__(self, *args, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries, but keyword arguments are not recommended because
their insertion order is arbitrary.
'''
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
self.__root = root = [] # sentinel node
root[:] = [root, root, None]
self.__map = {}
self.__update(*args, **kwds)
def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__):
'od.__setitem__(i, y) <==> od[i]=y'
# Setting a new item creates a new link at the end of the linked list,
# and the inherited dictionary is updated with the new key/value pair.
if key not in self:
root = self.__root
last = root[PREV]
last[NEXT] = root[PREV] = self.__map[key] = [last, root, key]
dict_setitem(self, key, value)
def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__):
'od.__delitem__(y) <==> del od[y]'
# Deleting an existing item uses self.__map to find the link which gets
# removed by updating the links in the predecessor and successor nodes.
dict_delitem(self, key)
link_prev, link_next, key = self.__map.pop(key)
link_prev[NEXT] = link_next
link_next[PREV] = link_prev
def __iter__(self):
'od.__iter__() <==> iter(od)'
# Traverse the linked list in order.
NEXT, KEY = 1, 2
root = self.__root
curr = root[NEXT]
while curr is not root:
yield curr[KEY]
curr = curr[NEXT]
def __reversed__(self):
'od.__reversed__() <==> reversed(od)'
# Traverse the linked list in reverse order.
PREV, KEY = 0, 2
root = self.__root
curr = root[PREV]
while curr is not root:
yield curr[KEY]
curr = curr[PREV]
def clear(self):
'od.clear() -> None. Remove all items from od.'
for node in self.__map.itervalues():
del node[:]
root = self.__root
root[:] = [root, root, None]
self.__map.clear()
dict.clear(self)
# -- the following methods do not depend on the internal structure --
def keys(self):
'od.keys() -> list of keys in od'
return list(self)
def values(self):
'od.values() -> list of values in od'
return [self[key] for key in self]
def items(self):
'od.items() -> list of (key, value) pairs in od'
return [(key, self[key]) for key in self]
def iterkeys(self):
'od.iterkeys() -> an iterator over the keys in od'
return iter(self)
def itervalues(self):
'od.itervalues -> an iterator over the values in od'
for k in self:
yield self[k]
def iteritems(self):
'od.iteritems -> an iterator over the (key, value) pairs in od'
for k in self:
yield (k, self[k])
update = MutableMapping.update
__update = update # let subclasses override update without breaking __init__
__marker = object()
def pop(self, key, default=__marker):
'''od.pop(k[,d]) -> v, remove specified key and return the corresponding
value. If key is not found, d is returned if given, otherwise KeyError
is raised.
'''
if key in self:
result = self[key]
del self[key]
return result
if default is self.__marker:
raise KeyError(key)
return default
def setdefault(self, key, default=None):
'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
if key in self:
return self[key]
self[key] = default
return default
def popitem(self, last=True):
'''od.popitem() -> (k, v), return and remove a (key, value) pair.
Pairs are returned in LIFO order if last is true or FIFO order if false.
'''
if not self:
raise KeyError('dictionary is empty')
key = next(reversed(self) if last else iter(self))
value = self.pop(key)
return key, value
def __repr__(self, _repr_running={}):
'od.__repr__() <==> repr(od)'
call_key = id(self), _get_ident()
if call_key in _repr_running:
return '...'
_repr_running[call_key] = 1
try:
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
finally:
del _repr_running[call_key]
def __reduce__(self):
'Return state information for pickling'
items = [[k, self[k]] for k in self]
inst_dict = vars(self).copy()
for k in vars(OrderedDict()):
inst_dict.pop(k, None)
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def copy(self):
'od.copy() -> a shallow copy of od'
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
'''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
If not specified, the value defaults to None.
'''
self = cls()
for key in iterable:
self[key] = value
return self
def __eq__(self, other):
'''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
while comparison to a regular mapping is order-insensitive.
'''
if isinstance(other, OrderedDict):
return len(self)==len(other) and self.items() == other.items()
return dict.__eq__(self, other)
def __ne__(self, other):
'od.__ne__(y) <==> od!=y'
return not self == other
# -- the following methods support python 3.x style dictionary views --
def viewkeys(self):
"od.viewkeys() -> a set-like object providing a view on od's keys"
return KeysView(self)
def viewvalues(self):
"od.viewvalues() -> an object providing a view on od's values"
return ValuesView(self)
def viewitems(self):
"od.viewitems() -> a set-like object providing a view on od's items"
return ItemsView(self)
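
A tiny illustration of what the backported class provides, on the side of the diff that still carries it:

    from bb.compat import OrderedDict

    od = OrderedDict()
    od['b'] = 1
    od['a'] = 2
    print(list(od))                 # ['b', 'a']: insertion order is preserved
    print(od.popitem(last=False))   # ('b', 1): FIFO when last is False

The event module further down uses this class for its _handlers table.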


@@ -158,7 +158,6 @@ class BBCooker:
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
bb.parse.init_parser(self.configuration.event_data)
# TOSTOP must not be set or our children will hang when they output
fd = sys.stdout.fileno()
@@ -219,12 +218,6 @@ class BBCooker:
nice = int(nice) - curnice
buildlog.verbose("Renice to %s " % os.nice(nice))
if self.status:
del self.status
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
def parseCommandLine(self):
# Parse any commandline into actions
self.commandlineAction = {'action':None, 'msg':None}
@@ -278,8 +271,8 @@ class BBCooker:
pkg_pn = self.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.configuration.data, self.status, pkg_pn)
logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")
logger.plain("%-35s %25s %25s", "Package Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "============", "==============", "=================")
for p in sorted(pkg_pn):
pref = preferred_versions[p]
@@ -304,6 +297,8 @@ class BBCooker:
# Parse the configuration here. We need to do it explicitly here since
# this showEnvironment() code path doesn't use the cache
self.parseConfiguration()
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn)
@@ -539,15 +534,11 @@ class BBCooker:
# Prints a flattened form of package-depends below where subpackages of a package are merged into the main pn
depends_file = file('pn-depends.dot', 'w' )
buildlist_file = file('pn-buildlist', 'w' )
print("digraph depends {", file=depends_file)
for pn in depgraph["pn"]:
fn = depgraph["pn"][pn]["filename"]
version = depgraph["pn"][pn]["version"]
print('"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn), file=depends_file)
print("%s" % pn, file=buildlist_file)
buildlist_file.close()
logger.info("PN build list saved to 'pn-buildlist'")
for pn in depgraph["depends"]:
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s"' % (pn, depend), file=depends_file)
@@ -642,8 +633,7 @@ class BBCooker:
# Calculate priorities for each file
matched = set()
for p in self.status.pkg_fn:
realfn, cls = bb.cache.Cache.virtualfn2realfn(p)
self.status.bbfile_priority[p] = self.calc_bbfile_priority(realfn, matched)
self.status.bbfile_priority[p] = self.calc_bbfile_priority(p, matched)
# Don't show the warning if the BBFILE_PATTERN did match .bbappend files
unmatched = set()
@@ -939,13 +929,13 @@ class BBCooker:
errors = True
continue
if lver <> depver:
parselog.error("Layer '%s' depends on version %d of layer '%s', but version %d is enabled in your configuration", c, depver, dep, lver)
parselog.error("Layer dependency %s of layer %s is at version %d, expected %d", dep, c, lver, depver)
errors = True
else:
parselog.error("Layer '%s' depends on version %d of layer '%s', which exists in your configuration but does not specify a version", c, depver, dep)
parselog.error("Layer dependency %s of layer %s has no version, expected %d", dep, c, depver)
errors = True
else:
parselog.error("Layer '%s' depends on layer '%s', but this layer is not enabled in your configuration", c, dep)
parselog.error("Layer dependency %s of layer %s not found", dep, c)
errors = True
collection_depends[c] = depnamelist
else:
@@ -995,12 +985,12 @@ class BBCooker:
"""
Find the .bb files which match the expression in 'buildfile'.
"""
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)
filelist, masked = self.collect_bbfiles()
try:
os.stat(bf)
bf = os.path.abspath(bf)
return [bf]
except OSError:
regexp = re.compile(bf)
@@ -1040,6 +1030,8 @@ class BBCooker:
# Parse the configuration here. We need to do it explicitly here since
# buildFile() doesn't use the cache
self.parseConfiguration()
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
# If we are told to do the None task then query the default task
if (task == None):
@@ -1061,10 +1053,6 @@ class BBCooker:
info_array = infos[fn]
except KeyError:
bb.fatal("%s does not exist" % fn)
if info_array[0].skipped:
bb.fatal("%s was skipped: %s" % (fn, info_array[0].skipreason))
self.status.add_from_recipeinfo(fn, info_array)
# Tweak some variables
@@ -1193,12 +1181,18 @@ class BBCooker:
if self.state != state.parsing:
self.parseConfiguration ()
if self.status:
del self.status
self.status = bb.cache.CacheData(self.caches_array)
ignore = self.configuration.data.getVar("ASSUME_PROVIDED", True) or ""
self.status.ignored_dependencies = set(ignore.split())
for dep in self.configuration.extra_assume_provided:
self.status.ignored_dependencies.add(dep)
self.handleCollections( self.configuration.data.getVar("BBFILE_COLLECTIONS", True) )
(filelist, masked) = self.collect_bbfiles()
self.configuration.data.renameVar("__depends", "__base_depends")
@@ -1207,8 +1201,6 @@ class BBCooker:
if not self.parser.parse_next():
collectlog.debug(1, "parsing complete")
if self.parser.error:
sys.exit(1)
self.show_appends_with_no_recipes()
self.buildDepgraph()
self.state = state.running
@@ -1578,7 +1570,6 @@ class CookerParser(object):
def init():
Parser.cfg = self.cfgdata
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, args=(self.cfgdata,), exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, args=(self.cfgdata,), exitpriority=1)
self.feeder_quit = multiprocessing.Queue(maxsize=1)
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
@@ -1605,7 +1596,6 @@ class CookerParser(object):
self.skipped, self.masked,
self.virtuals, self.error,
self.total)
bb.event.fire(event, self.cfgdata)
self.feeder_quit.put(None)
for process in self.processes:
@@ -1631,7 +1621,6 @@ class CookerParser(object):
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge(self.cooker.configuration.data)
bb.fetch.fetcher_parse_done(self.cooker.configuration.data)
def load_cached(self):
for filename, appends in self.fromcache:
@@ -1662,45 +1651,21 @@ class CookerParser(object):
except StopIteration:
self.shutdown()
return False
except bb.BBHandledException as exc:
self.error += 1
logger.error('Failed to parse recipe: %s' % exc.recipe)
self.shutdown(clean=False)
return False
except ParsingFailure as exc:
self.error += 1
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
self.error += 1
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
logger.error(str(exc))
self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
_, value, _ = sys.exc_info()
logger.error('ExpansionError during parsing %s: %s', value.recipe, str(exc))
self.shutdown(clean=False)
return False
except SyntaxError as exc:
self.error += 1
logger.error('Unable to parse %s', exc.recipe)
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
etype, value, tb = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
else:
# Most likely, an exception occurred during raising an exception
import traceback
logger.error('Exception during parse: %s' % traceback.format_exc())
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
self.shutdown(clean=False)
return False
self.current += 1
self.virtuals += len(result)


@@ -279,20 +279,13 @@ def build_dependencies(key, keys, shelldeps, vardepvals, d):
deps = set()
vardeps = d.getVarFlag(key, "vardeps", True)
try:
if key[-1] == ']':
vf = key[:-1].split('[')
value = d.getVarFlag(vf[0], vf[1], False)
else:
value = d.getVar(key, False)
value = d.getVar(key, False)
if key in vardepvals:
value = d.getVarFlag(key, "vardepvalue", True)
elif d.getVarFlag(key, "func"):
if d.getVarFlag(key, "python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
if parsedvar.value and "\t" in parsedvar.value:
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
else:
@@ -308,23 +301,11 @@ def build_dependencies(key, keys, shelldeps, vardepvals, d):
parser = d.expandWithRefs(value, key)
deps |= parser.references
deps = deps | (keys & parser.execs)
# Add varflags, assuming an exclusion list is set
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
if varflagsexcl:
varfdeps = []
varflags = d.getVarFlags(key)
if varflags:
for f in varflags:
if f not in varflagsexcl:
varfdeps.append('%s[%s]' % (key, f))
if varfdeps:
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
except Exception as e:
raise bb.data_smart.ExpansionError(key, None, e)
except:
bb.note("Error expanding variable %s" % key)
raise
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)


@@ -39,7 +39,7 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?$')
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
@@ -102,13 +102,7 @@ class ExpansionError(Exception):
self.expression = expression
self.variablename = varname
self.exception = exception
if varname:
if expression:
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
else:
self.msg = "Failure expanding variable %s: %s: %s" % (varname, type(exception).__name__, exception)
else:
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def __str__(self):
@@ -201,12 +195,7 @@ class DataSmart(MutableMapping):
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
match = True
if o:
for o2 in o.split("_"):
if not o2 in overrides:
match = False
if not match:
if o and not o in overrides:
keep.append((a ,o))
continue
@@ -283,10 +272,10 @@ class DataSmart(MutableMapping):
self._seen_overrides[override].add( var )
# setting var
self.dict[var]["_content"] = value
self.dict[var]["content"] = value
def getVar(self, var, expand=False, noweakdefault=False):
value = self.getVarFlag(var, "_content", False, noweakdefault)
value = self.getVarFlag(var, "content", False, noweakdefault)
# Call expand() separately to make use of the expand cache
if expand and value:
@@ -343,7 +332,7 @@ class DataSmart(MutableMapping):
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "_content" and "defaultval" in local_var and not noweakdefault:
elif flag == "content" and "defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
@@ -372,7 +361,7 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
for i in flags:
if i == "_content":
if i == "content":
continue
self.dict[var][i] = flags[i]
@@ -382,7 +371,7 @@ class DataSmart(MutableMapping):
if local_var:
for i in local_var:
if i.startswith("_"):
if i == "content":
continue
flags[i] = local_var[i]
@@ -399,10 +388,10 @@ class DataSmart(MutableMapping):
content = None
# try to save the content
if "_content" in self.dict[var]:
content = self.dict[var]["_content"]
if "content" in self.dict[var]:
content = self.dict[var]["content"]
self.dict[var] = {}
self.dict[var]["_content"] = content
self.dict[var]["content"] = content
else:
del self.dict[var]
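
Whichever internal flag name ends up holding a variable's value ("content" here versus "_content" on the other side of the diff), the public datastore interface stays the same. A small sketch, assuming bb.data.init() is used to create the store:

    import bb.data

    d = bb.data.init()
    d.setVar("NAME", "world")
    d.setVar("GREETING", "hello ${NAME}")
    print(d.getVar("GREETING", False))   # raw value: "hello ${NAME}"
    print(d.getVar("GREETING", True))    # expanded on request: "hello world"
    d.setVarFlag("GREETING", "doc", "example flag")   # flags ride alongside the value
    print(d.getVarFlag("GREETING", "doc", False))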


@@ -32,7 +32,6 @@ import logging
import atexit
import traceback
import bb.utils
import bb.compat
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
@@ -54,7 +53,7 @@ Registered = 10
AlreadyRegistered = 14
# Internal
_handlers = bb.compat.OrderedDict()
_handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0
@@ -105,18 +104,6 @@ def print_ui_queue():
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
# First check to see if we have any proper messages
msgprint = False
for event in ui_queue:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
logger.handle(event)
msgprint = True
if msgprint:
return
# Nope, so just print all of the messages we have (including debug messages)
for event in ui_queue:
if isinstance(event, logging.LogRecord):
logger.handle(event)
@@ -188,7 +175,7 @@ def register(name, handler):
_handlers[name] = noop
return
env = {}
bb.utils.better_exec(code, env)
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
else:
@@ -325,14 +312,6 @@ class BuildCompleted(BuildBase, OperationCompleted):
OperationCompleted.__init__(self, total, "Building Failed")
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build aborted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
self._type = type
self._free = freespace
self._mountpoint = mountpoint
class NoProvider(Event):
"""No Provider for an Event"""
@@ -510,15 +489,6 @@ class MsgFatal(MsgBase):
class MsgPlain(MsgBase):
"""General output"""
class LogExecTTY(Event):
"""Send event containing program to spawn on tty of the logger"""
def __init__(self, msg, prog, sleep_delay, retries):
Event.__init__(self)
self.msg = msg
self.prog = prog
self.sleep_delay = sleep_delay
self.retries = retries
class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
@@ -562,7 +532,6 @@ class SanityCheckFailed(Event):
"""
Event to indicate sanity check has failed
"""
def __init__(self, msg, network_error=False):
def __init__(self, msg):
Event.__init__(self)
self._msg = msg
self._network_error = network_error
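
For context, a sketch of how a handler is attached to this machinery (the handler itself is illustrative); register() accepts either a callable or a string of code to compile, and fired events are then dispatched to every registered handler:

    import bb.event

    def example_handler(e):
        # Called for every fired event; filter on the classes of interest.
        if isinstance(e, bb.event.BuildCompleted):
            print("build finished")

    bb.event.register("example_handler", example_handler)

Handlers registered this way land in the _handlers table above and run whenever the cooker calls bb.event.fire(event, d), as seen in the cooker changes earlier in this comparison.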


@@ -32,14 +32,7 @@ class TracebackEntry(namedtuple.abc):
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
try:
if not arginfo.args:
return '', None
# There have been reports from the field of python 2.6 which doesn't
# return a namedtuple here but simply a tuple so fallback gracefully if
# args isn't present.
except AttributeError:
if not arginfo.args:
return '', None
firstarg = arginfo.args[0]


@@ -8,7 +8,6 @@ BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -29,13 +28,10 @@ from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import urllib
import bb.persist_data, bb.utils
import bb.checksum
from bb import data
__version__ = "2"
_checksum_cache = bb.checksum.FileChecksumCache()
logger = logging.getLogger("BitBake.Fetcher")
@@ -54,7 +50,7 @@ class MalformedUrl(BBFetchException):
msg = "The URL: '%s' is invalid and cannot be interpreted" % url
self.url = url
BBFetchException.__init__(self, msg)
self.args = (url,)
self.args = url
class FetchError(BBFetchException):
"""General fetcher exception when something happens incorrectly"""
@@ -67,12 +63,6 @@ class FetchError(BBFetchException):
BBFetchException.__init__(self, msg)
self.args = (message, url)
class ChecksumError(FetchError):
"""Exception when mismatched checksum encountered"""
class NoChecksumError(FetchError):
"""Exception when no checksum is specified, but BB_STRICT_CHECKSUM is set"""
class UnpackError(BBFetchException):
"""General fetcher exception when something happens incorrectly when unpacking"""
def __init__(self, message, url):
@@ -87,7 +77,7 @@ class NoMethodError(BBFetchException):
msg = "Could not find a fetcher which supports the URL: '%s'" % url
self.url = url
BBFetchException.__init__(self, msg)
self.args = (url,)
self.args = url
class MissingParameterError(BBFetchException):
"""Exception raised when a fetch method is missing a critical parameter in the url"""
@@ -109,15 +99,12 @@ class ParameterError(BBFetchException):
class NetworkAccess(BBFetchException):
"""Exception raised when network access is disabled but it is required."""
def __init__(self, url, cmd):
msg = "Network access disabled through BB_NO_NETWORK but access requested with command %s (for url %s)" % (cmd, url)
msg = "Network access disabled through BB_NO_NETWORK but access rquested with command %s (for url %s)" % (cmd, url)
self.url = url
self.cmd = cmd
BBFetchException.__init__(self, msg)
self.args = (url, cmd)
class NonLocalMethod(Exception):
def __init__(self):
Exception.__init__(self)
def decodeurl(url):
"""Decodes an URL into the tokens (scheme, network location, path,
@@ -157,14 +144,14 @@ def decodeurl(url):
s1, s2 = s.split('=')
p[s1] = s2
return type, host, urllib.unquote(path), user, pswd, p
return (type, host, path, user, pswd, p)
def encodeurl(decoded):
"""Encodes a URL from tokens (scheme, network location, path,
user, password, parameters).
"""
type, host, path, user, pswd, p = decoded
(type, host, path, user, pswd, p) = decoded
if not path:
raise MissingParameterError('path', "encoded from the data %s" % str(decoded))
@@ -178,17 +165,14 @@ def encodeurl(decoded):
url += "@"
if host and type != "file":
url += "%s" % host
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
url += "%s" % urllib.quote(path)
url += "%s" % path
if p:
for parm in p:
url += ";%s=%s" % (parm, p[parm])
return url
def uri_replace(ud, uri_find, uri_replace, replacements, d):
def uri_replace(ud, uri_find, uri_replace, d):
if not ud.url or not uri_find or not uri_replace:
logger.error("uri_replace: passed an undefined value, not replacing")
return None
@@ -197,45 +181,27 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d):
uri_replace_decoded = list(decodeurl(uri_replace))
logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
result_decoded = ['', '', '', '', '', {}]
for loc, i in enumerate(uri_find_decoded):
for i in uri_find_decoded:
loc = uri_find_decoded.index(i)
result_decoded[loc] = uri_decoded[loc]
regexp = i
if loc == 0 and regexp and not regexp.endswith("$"):
# Leaving the type unanchored can mean "https" matching "file" can become "files"
# which is clearly undesirable.
regexp += "$"
if loc == 5:
# Handle URL parameters
if i:
# Any specified URL parameters must match
for k in uri_replace_decoded[loc]:
if uri_decoded[loc][k] != uri_replace_decoded[loc][k]:
return None
# Overwrite any specified replacement parameters
for k in uri_replace_decoded[loc]:
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
if isinstance(i, basestring):
if (re.match(i, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
else:
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
basename = None
if ud.mirrortarball:
basename = os.path.basename(ud.mirrortarball)
elif ud.localpath:
basename = os.path.basename(ud.localpath)
if basename and result_decoded[loc].endswith("/"):
result_decoded[loc] = os.path.dirname(result_decoded[loc])
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
for k in replacements:
uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
#bb.note("%s %s %s" % (regexp, uri_replace_decoded[loc], uri_decoded[loc]))
result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc])
if loc == 2:
# Handle path manipulations
basename = None
if uri_decoded[0] != uri_replace_decoded[0] and ud.mirrortarball:
# If the source and destination url types differ, must be a mirrortarball mapping
basename = os.path.basename(ud.mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
if basename and not result_decoded[loc].endswith(basename):
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
else:
return None
return None
result = encodeurl(result_decoded)
if result == ud.url:
return None
@@ -266,18 +232,10 @@ def fetcher_init(d):
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d)
for m in methods:
if hasattr(m, "init"):
m.init(d)
def fetcher_parse_save(d):
_checksum_cache.save_extras(d)
def fetcher_parse_done(d):
_checksum_cache.save_merge(d)
def fetcher_compare_revisions(d):
"""
Compare the revisions in the persistant cache with current values and
@@ -304,37 +262,39 @@ def verify_checksum(u, ud, d):
"""
verify the MD5 and SHA256 checksum for downloaded src
Raises a FetchError if one or both of the SRC_URI checksums do not match
the downloaded file, or if BB_STRICT_CHECKSUM is set and there are no
checksums specified.
return value:
- True: a checksum matched
- False: neither checksum matched
if checksum is missing in recipes file, "BB_STRICT_CHECKSUM" decide the return value.
if BB_STRICT_CHECKSUM = "1" then return false as unmatched, otherwise return true as
matched
"""
if not ud.method.supports_checksum(ud):
if not ud.type in ["http", "https", "ftp", "ftps"]:
return
md5data = bb.utils.md5_file(ud.localpath)
sha256data = bb.utils.sha256_file(ud.localpath)
if ud.method.recommends_checksum(ud):
# If strict checking enabled and neither sum defined, raise error
strict = d.getVar("BB_STRICT_CHECKSUM", True) or None
if (strict and ud.md5_expected == None and ud.sha256_expected == None):
raise NoChecksumError('No checksum specified for %s, please add at least one to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"' %
(ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data), u)
# If strict checking enabled and neither sum defined, raise error
strict = d.getVar("BB_STRICT_CHECKSUM", True) or None
if (strict and ud.md5_expected == None and ud.sha256_expected == None):
raise FetchError('No checksum specified for %s, please add at least one to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"' %
(ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data), u)
# Log missing sums so user can more easily add them
if ud.md5_expected == None:
logger.warn('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
# Log missing sums so user can more easily add them
if ud.md5_expected == None:
logger.warn('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
if ud.sha256_expected == None:
logger.warn('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
if ud.sha256_expected == None:
logger.warn('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
md5mismatch = False
sha256mismatch = False
@@ -348,20 +308,14 @@ def verify_checksum(u, ud, d):
# We want to alert the user if a checksum is defined in the recipe but
# it does not match.
msg = ""
mismatch = False
if md5mismatch and ud.md5_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'md5', md5data, ud.md5_expected)
mismatch = True;
if sha256mismatch and ud.sha256_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'sha256', sha256data, ud.sha256_expected)
mismatch = True;
if mismatch:
msg = msg + '\nIf this change is expected (e.g. you have upgraded to a new version without updating the checksums) then you can use these lines within the recipe:\nSRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"\nOtherwise you should retry the download and/or check with upstream to determine if the file has become corrupted or otherwise unexpectedly modified.\n' % (ud.md5_name, md5data, ud.sha256_name, sha256data)
if len(msg):
raise ChecksumError('Checksum mismatch!%s' % msg, u)
raise FetchError('Checksum mismatch!%s' % msg, u)
def update_stamp(u, ud, d):
@@ -470,16 +424,11 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s" % (e.command)
except bb.process.ExecutionError as e:
if e.stdout:
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
elif e.stderr:
output = "output:\n%s" % e.stderr
else:
output = "no output"
error_message = "Fetch command failed with exit code %s, %s" % (e.exitcode, output)
except bb.process.CmdError as e:
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
except bb.process.ExecutionError as e:
error_message = "Fetch command %s failed with exit code %s, output:\n%s" % (e.command, e.exitcode, e.stderr)
if not success:
for f in cleanup:
try:
@@ -504,20 +453,13 @@ def build_mirroruris(origud, mirrors, ld):
uris = []
uds = []
replacements = {}
replacements["TYPE"] = origud.type
replacements["HOST"] = origud.host
replacements["PATH"] = origud.path
replacements["BASENAME"] = origud.path.split("/")[-1]
replacements["MIRRORNAME"] = origud.host.replace(':','.') + origud.path.replace('/', '.').replace('*', '.')
def adduri(uri, ud, uris, uds):
for line in mirrors:
try:
(find, replace) = line
except ValueError:
continue
newuri = uri_replace(ud, find, replace, replacements, ld)
newuri = uri_replace(ud, find, replace, ld)
if not newuri or newuri in uris or newuri == origud.url:
continue
try:
@@ -575,11 +517,7 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
return None
# Otherwise the result is a local file:// and we symlink to it
if not os.path.exists(origud.localpath):
if os.path.islink(origud.localpath):
# Broken symbolic link
os.unlink(origud.localpath)
os.symlink(ud.localpath, origud.localpath)
os.symlink(ud.localpath, origud.localpath)
update_stamp(newuri, origud, ld)
return ud.localpath
@@ -587,14 +525,8 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
raise
except bb.fetch2.BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warn("Mirror checksum failure for url %s (original url: %s)\nCleaning and trying again." % (newuri, origud.url))
logger.warn(str(e))
elif isinstance(e, NoChecksumError):
raise
else:
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
logger.debug(1, str(e))
try:
ud.method.clean(ud, ld)
except UnboundLocalError:
@@ -651,85 +583,11 @@ def srcrev_internal_helper(ud, d, name):
return rev
def get_checksum_file_list(d):
""" Get a list of files checksum in SRC_URI
Returns the resolved local paths of all local file entries in
SRC_URI as a space-separated string
"""
fetch = Fetch([], d, cache = False, localonly = True)
dl_dir = d.getVar('DL_DIR', True)
filelist = []
for u in fetch.urls:
ud = fetch.ud[u]
if ud and isinstance(ud.method, local.Local):
ud.setup_localpath(d)
f = ud.localpath
if f.startswith(dl_dir):
# The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
if os.path.exists(f):
bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN', True), os.path.basename(f)))
else:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN', True), os.path.basename(f)))
continue
filelist.append(f)
return " ".join(filelist)
def get_file_checksums(filelist, pn):
"""Get a list of the checksums for a list of local files
Returns the checksums for a list of local files, caching the results as
it proceeds
"""
def checksum_file(f):
try:
checksum = _checksum_cache.get_checksum(f)
except OSError as e:
import traceback
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
checksums = []
for pth in filelist.split():
checksum = None
if '*' in pth:
# Handle globs
import glob
for f in glob.glob(pth):
checksum = checksum_file(f)
if checksum:
checksums.append((f, checksum))
elif os.path.isdir(pth):
# Handle directories
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
checksums.append((fullpth, checksum))
else:
checksum = checksum_file(pth)
if checksum:
checksums.append((pth, checksum))
checksums.sort()
return checksums
class FetchData(object):
"""
A class which represents the fetcher state for a given URI.
"""
def __init__(self, url, d, localonly = False):
def __init__(self, url, d):
# localpath is the location of a downloaded result. If not set, the file is local.
self.donestamp = None
self.localfile = ""
@@ -754,14 +612,10 @@ class FetchData(object):
self.sha256_name = "sha256sum"
if self.md5_name in self.parm:
self.md5_expected = self.parm[self.md5_name]
elif self.type not in ["http", "https", "ftp", "ftps"]:
self.md5_expected = None
else:
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name)
if self.sha256_name in self.parm:
self.sha256_expected = self.parm[self.sha256_name]
elif self.type not in ["http", "https", "ftp", "ftps"]:
self.sha256_expected = None
else:
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name)
@@ -776,13 +630,6 @@ class FetchData(object):
if not self.method:
raise NoMethodError(url)
if localonly and not isinstance(self.method, local.Local):
raise NonLocalMethod()
if self.parm.get("proto", None) and "protocol" not in self.parm:
logger.warn('Consider updating %s recipe to use "protocol" not "proto" in SRC_URI.', d.getVar('PN', True))
self.parm["protocol"] = self.parm.get("proto", None)
if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)
@@ -847,26 +694,6 @@ class FetchMethod(object):
"""
return os.path.join(data.getVar("DL_DIR", d, True), urldata.localfile)
def supports_checksum(self, urldata):
"""
Is localpath something that can be represented by a checksum?
"""
# We cannot compute checksums for directories
if os.path.isdir(urldata.localpath) == True:
return False
if urldata.localpath.find("*") != -1:
return False
return True
def recommends_checksum(self, urldata):
"""
Is the backend on where checksumming is recommended (should warnings
by displayed if there is no checksum)?
"""
return False
def _strip_leading_slashes(self, relpath):
"""
Remove leading slash as os.path.join can't cope
@@ -917,7 +744,7 @@ class FetchMethod(object):
dots = file.split(".")
if dots[-1] in ['gz', 'bz2', 'Z']:
efile = os.path.join(rootdir, os.path.basename('.'.join(dots[0:-1])))
efile = os.path.join(data.getVar('WORKDIR', True),os.path.basename('.'.join(dots[0:-1])))
else:
efile = file
cmd = None
@@ -947,16 +774,14 @@ class FetchMethod(object):
if dos:
cmd = '%s -a' % cmd
cmd = "%s '%s'" % (cmd, file)
elif file.endswith('.rpm') or file.endswith('.srpm'):
elif file.endswith('.src.rpm') or file.endswith('.srpm'):
if 'extract' in urldata.parm:
unpack_file = urldata.parm.get('extract')
cmd = 'rpm2cpio.sh %s | cpio -id %s' % (file, unpack_file)
cmd = 'rpm2cpio.sh %s | cpio -i %s' % (file, unpack_file)
iterate = True
iterate_file = unpack_file
else:
cmd = 'rpm2cpio.sh %s | cpio -id' % (file)
elif file.endswith('.deb') or file.endswith('.ipk'):
cmd = 'ar -p %s data.tar.gz | zcat | tar --no-same-owner -xpf -' % file
cmd = 'rpm2cpio.sh %s | cpio -i' % (file)
if not unpack or not cmd:
# If file == dest, then avoid any copies, as we already put the file into dest!
@@ -995,9 +820,7 @@ class FetchMethod(object):
bb.utils.mkdirhier(newdir)
os.chdir(newdir)
path = data.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
cmd = "PATH=\"%s\" %s" % (data.getVar('PATH', True), cmd)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
@@ -1108,10 +931,7 @@ class FetchMethod(object):
return "%s-%s" % (key, d.getVar("PN", True) or "")
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
def __init__(self, urls, d, cache = True):
if len(urls) == 0:
urls = d.getVar("SRC_URI", True).split()
self.urls = urls
@@ -1124,12 +944,7 @@ class Fetch(object):
for url in urls:
if url not in self.ud:
try:
self.ud[url] = FetchData(url, d, localonly)
except NonLocalMethod:
if localonly:
self.ud[url] = None
pass
self.ud[url] = FetchData(url, d)
if fn and cache:
urldata_cache[fn] = self.ud
@@ -1203,17 +1018,12 @@ class Fetch(object):
raise
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warn("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(1, str(e))
elif isinstance(e, NoChecksumError):
raise
else:
logger.warn('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(1, str(e))
logger.warn('Failed to fetch URL %s' % u)
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
m.clean(ud, self.d)
if os.path.isfile(ud.localpath):
bb.utils.remove(ud.localpath)
logger.debug(1, "Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS', True))
localpath = try_mirrors (self.d, ud, mirrors)
@@ -1225,13 +1035,6 @@ class Fetch(object):
update_stamp(u, ud, self.d)
except BBFetchException as e:
if isinstance(e, NoChecksumError):
logger.error("%s" % str(e))
elif isinstance(e, ChecksumError):
logger.error("Checksum failure fetching %s" % u)
raise
finally:
bb.utils.unlockfile(lf)
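
To make the URL-handling hunks above easier to follow, here is roughly what decodeurl()/encodeurl() do with a typical SRC_URI entry (the URL itself is illustrative):

    from bb.fetch2 import decodeurl, encodeurl

    url = "git://git.example.com/project/repo.git;protocol=https;branch=master"
    scheme, host, path, user, pswd, params = decodeurl(url)
    # scheme -> "git", host -> "git.example.com", path -> "/project/repo.git"
    # params -> {"protocol": "https", "branch": "master"}; user/pswd are empty here
    print(encodeurl((scheme, host, path, user, pswd, params)))
    # reassembles an equivalent URL (parameter order may differ)

uri_replace() works on the same decoded six-tuple when it maps a URL onto a MIRRORS entry, which is why most of the mirror-related changes in this file operate on those decoded fields.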


@@ -60,7 +60,7 @@ class Bzr(FetchMethod):
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('protocol', 'http')
proto = ud.parm.get('proto', 'http')
bzrroot = ud.host + ud.path
@@ -73,7 +73,7 @@ class Bzr(FetchMethod):
options.append("-r %s" % ud.revision)
if command == "fetch":
bzrcmd = "%s branch %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:


@@ -29,6 +29,7 @@ BitBake build tools.
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod, FetchError, MissingParameterError, logger
from bb.fetch2 import runfetchcmd
@@ -63,7 +64,7 @@ class Cvs(FetchMethod):
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = bb.data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, url, ud, d):
if (ud.date == "now"):
@@ -87,10 +88,10 @@ class Cvs(FetchMethod):
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = d.getVar('CVS_PROXY_HOST', True)
cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = d.getVar('CVS_PROXY_PORT', True)
cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
@@ -110,9 +111,15 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
cvsbasecmd = d.getVar("FETCHCMD_cvs", True)
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
data.setVar('CVSROOT', cvsroot, localdata)
data.setVar('CVSCOOPTS', " ".join(options), localdata)
data.setVar('CVSMODULE', ud.module, localdata)
cvscmd = data.getVar('FETCHCOMMAND', localdata, True)
cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, True)
if cvs_rsh:
cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
@@ -120,8 +127,8 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
@@ -162,9 +169,12 @@ class Cvs(FetchMethod):
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar("CVSDIR", True), pkg)
pkg = data.expand('${PN}', d)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)


@@ -82,9 +82,6 @@ class Git(FetchMethod):
"""
return ud.type in ['git']
def supports_checksum(self, urldata):
return False
def urldata_init(self, ud, d):
"""
init git specific variable within url data
@@ -126,11 +123,10 @@ class Git(FetchMethod):
for name in ud.names:
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.branches[name] = ud.revisions[name]
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.').replace('*', '.'))
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
@@ -139,9 +135,8 @@ class Git(FetchMethod):
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
gitdir = d.getVar("GITDIR", True) or (d.getVar("DL_DIR", True) + "/git2/")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.localfile = ud.clonedir
@@ -188,12 +183,8 @@ class Git(FetchMethod):
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
@@ -204,14 +195,14 @@ class Git(FetchMethod):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
@@ -245,23 +236,7 @@ class Git(FetchMethod):
if ud.bareclone:
cloneflags += " --mirror"
# Versions of git prior to 1.7.9.2 have issues where foo.git and foo get confused
# and you end up with some horrible union of the two when you attempt to clone it
# The least invasive workaround seems to be a symlink to the real directory to
# fool git into ignoring any .git version that may also be present.
#
# The issue is fixed in more recent versions of git so we can drop this hack in future
# when that version becomes common enough.
clonedir = ud.clonedir
if not ud.path.endswith(".git"):
indirectiondir = destdir[:-1] + ".indirectionsymlink"
if os.path.exists(indirectiondir):
os.remove(indirectiondir)
bb.utils.mkdirhier(os.path.dirname(indirectiondir))
os.symlink(ud.clonedir, indirectiondir)
clonedir = indirectiondir
runfetchcmd("git clone %s %s/ %s" % (cloneflags, clonedir, destdir), d)
runfetchcmd("git clone %s %s/ %s" % (cloneflags, ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
@@ -306,8 +281,7 @@ class Git(FetchMethod):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd)
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)

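One side of the git hunk above carries a workaround for git versions prior to 1.7.9.2, which could mix up 'foo' and 'foo.git' when both sit next to each other: the clone is performed through a throwaway symlink so git only ever sees a single directory name. A minimal standalone sketch of that trick, assuming nothing about the fetcher's real API (the function name and the plain "git clone" invocation are illustrative):

    import os
    import subprocess

    def clone_through_symlink(clonedir, destdir):
        """Clone 'clonedir' into 'destdir' via an indirection symlink so
        old git cannot confuse 'foo' with a sibling 'foo.git'."""
        indirection = destdir.rstrip("/") + ".indirectionsymlink"
        if os.path.lexists(indirection):
            os.remove(indirection)
        os.makedirs(os.path.dirname(indirection) or ".", exist_ok=True)
        os.symlink(clonedir, indirection)
        # git resolves the symlink itself, so only one name is visible to it
        subprocess.check_call(["git", "clone", indirection + "/", destdir])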

@@ -82,7 +82,7 @@ class Hg(FetchMethod):
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('protocol', 'http')
proto = ud.parm.get('proto', 'http')
host = ud.host
if proto == "file":


@@ -26,12 +26,10 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import urllib
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
from bb.fetch2 import FetchMethod
class Local(FetchMethod):
def supports(self, url, urldata, d):
@@ -42,31 +40,27 @@ class Local(FetchMethod):
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
return
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
path = urldata.decodedurl
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
logger.debug(2, "Searching for %s in paths: \n%s" % (path, "\n ".join(filespath.split(":"))))
newpath = bb.utils.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
logger.debug(2, "Searching for %s in path: %s" % (path, filesdir))
newpath = os.path.join(filesdir, path)
if not os.path.exists(newpath) and path.find("*") == -1:
dldirfile = os.path.join(d.getVar("DL_DIR", True), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
return dldirfile
if not os.path.exists(newpath) and path.find("*") == -1:
dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
return dldirfile
return newpath
def need_update(self, url, ud, d):
@@ -79,20 +73,7 @@ class Local(FetchMethod):
def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
if self.supports_checksum(urldata) and not os.path.exists(urldata.localpath):
locations = []
filespath = data.getVar('FILESPATH', d, True)
if filespath:
locations = filespath.split(":")
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
locations.append(filesdir)
locations.append(d.getVar("DL_DIR", True))
msg = "Unable to find file " + url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True
return 1
def checkstatus(self, url, urldata, d):
"""

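The local fetcher hunks above resolve a relative file:// path by searching the FILESPATH entries, then FILESDIR, and finally falling back to a location under DL_DIR. A condensed, self-contained sketch of that lookup order (the default dl_dir value and the function name are made up for illustration):

    import os

    def resolve_local_path(path, filespath=None, filesdir=None, dl_dir="/tmp/dl"):
        """Mimic the search order: FILESPATH entries, then FILESDIR,
        then a fallback location under DL_DIR."""
        if os.path.isabs(path):
            return path
        if filespath:
            for entry in filespath.split(":"):
                candidate = os.path.join(entry, path)
                if os.path.exists(candidate):
                    return candidate
        if filesdir:
            candidate = os.path.join(filesdir, path)
            if os.path.exists(candidate):
                return candidate
        # nothing found and no wildcard in the name: default to DL_DIR
        if "*" not in path:
            return os.path.join(dl_dir, path)
        return path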

@@ -57,7 +57,7 @@ class Osc(FetchMethod):
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('protocol', 'ocs')
proto = ud.parm.get('proto', 'ocs')
options = []


@@ -27,7 +27,6 @@ BitBake build tools.
from future_builtins import zip
import os
import subprocess
import logging
import bb
from bb import data
@@ -91,8 +90,8 @@ class Perforce(FetchMethod):
p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.strip()
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
@@ -155,8 +154,8 @@ class Perforce(FetchMethod):
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
@@ -169,8 +168,7 @@ class Perforce(FetchMethod):
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
p4file = p4file.strip()
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
@@ -186,7 +184,7 @@ class Perforce(FetchMethod):
dest = list[0][len(path)+1:]
where = dest.find("#")
subprocess.call("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]), shell=True)
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
count = count + 1
if count == 0:

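Several perforce hunks above swap between os.popen() pipes and a process-run helper that returns the captured output and surfaces failures. A rough standalone equivalent built on subprocess (the helper name is invented for this example):

    import subprocess

    def run_capture(cmd):
        """Run a shell command, returning (stdout, stderr) as text and
        raising if it exits non-zero, instead of silently getting an
        empty pipe back the way os.popen() can."""
        proc = subprocess.Popen(cmd, shell=True, universal_newlines=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            raise RuntimeError("'%s' failed with exit code %d: %s"
                               % (cmd, proc.returncode, err.strip()))
        return out, err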

@@ -69,9 +69,6 @@ class SSH(FetchMethod):
def supports(self, url, urldata, d):
return __pattern__.match(url) != None
def supports_checksum(self, urldata):
return False
def localpath(self, url, urldata, d):
m = __pattern__.match(urldata.url)
path = m.group('path')


@@ -77,8 +77,8 @@ class Svk(FetchMethod):
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error()
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)


@@ -49,8 +49,6 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.basecmd = d.getVar('FETCHCMD_svn', True)
ud.module = ud.parm["module"]
# Create paths to svn checkouts
@@ -71,7 +69,9 @@ class Svn(FetchMethod):
command is "fetch", "update", "info"
"""
proto = ud.parm.get('protocol', 'svn')
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
@@ -88,7 +88,7 @@ class Svn(FetchMethod):
options.append("--password %s" % ud.pswd)
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
if ud.revision:
@@ -96,9 +96,9 @@ class Svn(FetchMethod):
suffix = "@%s" % (ud.revision)
if command == "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (ud.basecmd, " ".join(options))
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
@@ -117,11 +117,6 @@ class Svn(FetchMethod):
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case its an older working format
try:
runfetchcmd(ud.basecmd + " upgrade", d)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d)


@@ -45,55 +45,47 @@ class Wget(FetchMethod):
"""
return ud.type in ['http', 'https', 'ftp']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'protocol' in ud.parm:
if ud.parm['protocol'] == 'git':
raise bb.fetch2.ParameterError("Invalid protocol - if you wish to fetch from a git repository using http, you need to instead use the git:// prefix with protocol=http", ud.url)
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, uri, ud, d, checkonly = False):
"""Fetch urls"""
basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
def fetch_uri(uri, ud, d):
if checkonly:
fetchcmd = data.getVar("CHECKCOMMAND", d, True)
elif os.path.exists(ud.localpath):
# file exists, but we didnt complete it.. trying again..
fetchcmd = data.getVar("RESUMECOMMAND", d, True)
else:
fetchcmd = data.getVar("FETCHCOMMAND", d, True)
if 'downloadfilename' in ud.parm:
basecmd += " -O ${DL_DIR}/" + ud.localfile
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
if checkonly:
fetchcmd = d.getVar("CHECKCOMMAND_wget", True) or d.expand(basecmd + " -c -P ${DL_DIR} '${URI}'")
elif os.path.exists(ud.localpath):
# file exists, but we didnt complete it.. trying again..
fetchcmd = d.getVar("RESUMECOMMAND_wget", True) or d.expand(basecmd + " --spider -P ${DL_DIR} '${URI}'")
else:
fetchcmd = d.getVar("FETCHCOMMAND_wget", True) or d.expand(basecmd + " -P ${DL_DIR} '${URI}'")
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
fetch_uri(uri, ud, localdata)
return True
def checkstatus(self, uri, ud, d):

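In the wget hunks the fetch command is chosen from three variants (a plain fetch, a resume of a partially downloaded file, or a check-only probe), can be overridden through FETCHCOMMAND/RESUMECOMMAND/CHECKCOMMAND style variables, and then has its ${URI} placeholder substituted. The sketch below shows only that selection-and-substitution shape; the wget options and default base command are assumptions, not the fetcher's actual defaults:

    import os

    def build_fetch_command(uri, localpath, checkonly=False,
                            basecmd="wget -t 2 -T 30 -nv --passive-ftp"):
        """Pick a check/resume/fetch variant and substitute the URI
        placeholder (options here are illustrative)."""
        if checkonly:
            cmd = basecmd + " --spider '${URI}'"                 # probe only
        elif os.path.exists(localpath):
            cmd = basecmd + " -c -O '%s' '${URI}'" % localpath   # resume partial file
        else:
            cmd = basecmd + " -O '%s' '${URI}'" % localpath
        return cmd.replace("${URI}", uri.split(";")[0])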

@@ -33,7 +33,9 @@
from bb.utils import better_compile, better_exec
from bb import error
# A dict of function names we have seen
# A dict of modules we have handled
# it is the number of .bbclasses + x in size
_parsed_methods = { }
_parsed_fns = { }
def insert_method(modulename, code, fn):
@@ -50,22 +52,33 @@ def insert_method(modulename, code, fn):
if name in ['None', 'False']:
continue
elif name in _parsed_fns and not _parsed_fns[name] == modulename:
error("The function %s defined in %s was already declared in %s. BitBake has a global python function namespace so shared functions should be declared in a common include file rather than being duplicated, or if the functions are different, please use different function names." % (name, modulename, _parsed_fns[name]))
error( "Error Method already seen: %s in' %s' now in '%s'" % (name, _parsed_fns[name], modulename))
else:
_parsed_fns[name] = modulename
# A dict of modules the parser has finished with
_parsed_methods = {}
def check_insert_method(modulename, code, fn):
"""
Add the code if it wasnt added before. The module
name will be used for that
Variables:
@modulename a short name e.g. base.bbclass
@code The actual python code
@fn The filename from the outer file
"""
if not modulename in _parsed_methods:
return insert_method(modulename, code, fn)
_parsed_methods[modulename] = 1
def parsed_module(modulename):
"""
Has module been parsed?
Inform me file xyz was parsed
"""
return modulename in _parsed_methods
def set_parsed_module(modulename):
"""
Set module as parsed
"""
_parsed_methods[modulename] = True
def get_parsed_dict():
"""
shortcut
"""
return _parsed_methods


@@ -176,7 +176,6 @@ class diskMonitor:
def __init__(self, configuration):
self.enableMonitor = False
self.configuration = configuration
BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
if BBDirs:
@@ -220,12 +219,10 @@ class diskMonitor:
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
self.checked[dev] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, self.devDict[dev][1]), self.configuration)
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[dev] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, self.devDict[dev][1]), self.configuration)
# The free inodes, float point number
freeInode = st.f_favail
@@ -240,10 +237,8 @@ class diskMonitor:
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
self.checked[dev] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeSpace, self.devDict[dev][1]), self.configuration)
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[dev] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeSpace, self.devDict[dev][1]), self.configuration)
return


@@ -31,6 +31,7 @@ import itertools
from bb import methodpool
from bb.parse import logger
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
@@ -125,25 +126,23 @@ class MethodNode(AstNode):
self.body = body
def eval(self, data):
text = '\n'.join(self.body)
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
if not funcname in bb.methodpool._parsed_fns:
text = "def %s(d):\n" % (funcname) + text
text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
data.setVar(funcname, text)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text)
data.setVar(self.func_name, '\n'.join(self.body))
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
def __init__(self, filename, lineno, function, define, body):
AstNode.__init__(self, filename, lineno)
self.function = function
self.modulename = modulename
self.define = define
self.body = body
def eval(self, data):
@@ -151,8 +150,8 @@ class PythonMethodNode(AstNode):
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
if not bb.methodpool.parsed_module(self.modulename):
bb.methodpool.insert_method(self.modulename, text, self.filename)
if not bb.methodpool.parsed_module(self.define):
bb.methodpool.insert_method(self.define, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text)
@@ -213,9 +212,9 @@ class ExportFuncsNode(AstNode):
data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag))
if data.getVarFlag(calledvar, "python"):
data.setVar(var, " bb.build.exec_func('" + calledvar + "', d)\n")
data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n")
else:
data.setVar(var, " " + calledvar + "\n")
data.setVar(var, "\t" + calledvar + "\n")
data.setVarFlag(var, 'export_func', '1')
class AddTaskNode(AstNode):
@@ -282,8 +281,8 @@ def handleData(statements, filename, lineno, groupd):
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, modulename, body):
statements.append(PythonMethodNode(filename, lineno, funcname, modulename, body))
def handlePythonMethod(statements, filename, lineno, funcname, root, body):
statements.append(PythonMethodNode(filename, lineno, funcname, root, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
@@ -321,7 +320,7 @@ def finalize(fn, d, variant = None):
code = []
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.utils.simple_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
tasklist = d.getVar('__BBTASKS') or []

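One side of the MethodNode hunk above gives every anonymous python block a unique name built from its line number and a mangled file name, records the name in __BBANONFUNCS, and later executes a generated "name(d)" call for each one. A compressed sketch of that flow with no BitBake datastore involved (the block layout and the plain dict standing in for 'd' are assumptions):

    def anon_name(lineno, filename):
        """Unique name for an anonymous block: __anon_<line>_<mangled file>."""
        return "__anon_%s_%s" % (lineno,
                                 filename.translate(str.maketrans('/.+-', '____')))

    def run_anonymous_blocks(blocks, d):
        """Compile each (lineno, filename, body) block into a named
        function, then call them all against 'd'."""
        namespace = {}
        names = []
        for lineno, filename, body in blocks:
            name = anon_name(lineno, filename)
            # body lines must already carry their original indentation
            text = "def %s(d):\n%s" % (name, body)
            exec(compile(text, filename, "exec"), namespace)
            names.append(name)
        for name in names:
            namespace[name](d)

    # Example: run_anonymous_blocks([(12, "base.bbclass", "    d['SEEN'] = True\n")], {})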

@@ -69,7 +69,7 @@ def supports(fn, d):
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache') or []
__inherit_cache = data.getVar('__inherit_cache', d) or []
files = d.expand(files).split()
for file in files:
if not os.path.isabs(file) and not file.endswith(".bbclass"):
@@ -80,7 +80,7 @@ def inherit(files, fn, lineno, d):
__inherit_cache.append( file )
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache') or []
__inherit_cache = data.getVar('__inherit_cache', d) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements
@@ -126,13 +126,13 @@ def handle(fn, d, include):
if ext == ".bbclass":
__classname__ = root
classes.append(__classname__)
__inherit_cache = d.getVar('__inherit_cache') or []
__inherit_cache = data.getVar('__inherit_cache', d) or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
data.setVar('__inherit_cache', __inherit_cache, d)
if include != 0:
oldfile = d.getVar('FILE')
oldfile = data.getVar('FILE', d)
else:
oldfile = None
@@ -161,7 +161,7 @@ def handle(fn, d, include):
# we have parsed the bb class now
if ext == ".bbclass" or ext == ".inc":
bb.methodpool.set_parsed_module(base_name)
bb.methodpool.get_parsed_dict()[base_name] = 1
return d


@@ -29,7 +29,7 @@ import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?!'[^']*'[^']*'$)(?!\"[^\"]*\"[^\"]*\"$)(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )


@@ -1,8 +1,6 @@
import logging
import signal
import subprocess
import errno
import select
logger = logging.getLogger('BitBake.Process')
@@ -70,38 +68,20 @@ def _logged_communicate(pipe, log, input):
pipe.stdin.write(input)
pipe.stdin.close()
bufsize = 512
outdata, errdata = [], []
rin = []
while pipe.poll() is None:
if pipe.stdout is not None:
data = pipe.stdout.read(bufsize)
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stdout is not None:
bb.utils.nonblockingfd(pipe.stdout.fileno())
rin.append(pipe.stdout)
if pipe.stderr is not None:
bb.utils.nonblockingfd(pipe.stderr.fileno())
rin.append(pipe.stderr)
try:
while pipe.poll() is None:
rlist = rin
try:
r,w,e = select.select (rlist, [], [])
except OSError, e:
if e.errno != errno.EINTR:
raise
if pipe.stdout in r:
data = pipe.stdout.read()
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr in r:
data = pipe.stderr.read()
if data is not None:
errdata.append(data)
log.write(data)
finally:
log.flush()
if pipe.stderr is not None:
data = pipe.stderr.read(bufsize)
if data is not None:
errdata.append(data)
log.write(data)
return ''.join(outdata), ''.join(errdata)
def run(cmd, input=None, log=None, **options):

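The process.py hunk contrasts a sequential read of stdout then stderr with non-blocking pipes multiplexed through select(), which prevents a child that fills one pipe from deadlocking the parent while it blocks on the other. A self-contained sketch of the select-based pattern (simplified: output still buffered when the child exits is not drained, and 'log' must be opened in binary mode):

    import errno
    import fcntl
    import os
    import select
    import subprocess

    def set_nonblocking(fd):
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

    def communicate_logged(cmd, log):
        """Run cmd, mirroring stdout/stderr to 'log' as data arrives."""
        pipe = subprocess.Popen(cmd, shell=True,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        rin = [pipe.stdout, pipe.stderr]
        for stream in rin:
            set_nonblocking(stream.fileno())
        outdata, errdata = [], []
        while pipe.poll() is None:
            try:
                readable, _, _ = select.select(rin, [], [])
            except OSError as e:
                if e.errno != errno.EINTR:
                    raise
                continue
            for stream in readable:
                data = stream.read()
                if data:
                    (outdata if stream is pipe.stdout else errdata).append(data)
                    log.write(data)
        log.flush()
        return b"".join(outdata), b"".join(errdata)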

@@ -35,8 +35,6 @@ class NoProvider(bb.BBHandledException):
class NoRProvider(bb.BBHandledException):
"""Exception raised when no provider of a runtime dependency can be found"""
class MultipleRProvider(bb.BBHandledException):
"""Exception raised when multiple providers of a runtime dependency can be found"""
def findProviders(cfgData, dataCache, pkg_pn = None):
"""
@@ -130,7 +128,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
if m.group(1):
preferred_e = m.group(1)[:-1]
preferred_e = int(m.group(1)[:-1])
else:
preferred_e = None
preferred_v = m.group(2)

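The small providers.py change above converts the epoch captured from a preferred-version string into an int on one side before it is compared. The regex split it relies on behaves like this standalone helper (the function name is illustrative):

    import re

    def split_preferred_version(preferred_v):
        """Split an 'epoch:version' preference string into
        (epoch_or_None, version)."""
        m = re.match(r'(\d+:)*(.*)(_.*)*', preferred_v)
        epoch = int(m.group(1)[:-1]) if m.group(1) else None
        return epoch, m.group(2)

    # split_preferred_version("7:1.0.2") -> (7, "1.0.2")
    # split_preferred_version("1.0.2")   -> (None, "1.0.2")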

@@ -375,8 +375,9 @@ class RunQueueData:
"""
runq_build = []
recursivetasks = {}
recursivetasksselfref = set()
recursive_tdepends = {}
runq_recrdepends = []
tdepends_fnid = {}
taskData = self.taskData
@@ -405,10 +406,11 @@ class RunQueueData:
depdata = taskData.build_targets[depid][0]
if depdata is None:
continue
dep = taskData.fn_index[depdata]
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depdata, taskname)
taskid = taskData.gettask_id(dep, taskname, False)
if taskid is not None:
depends.add(taskid)
depends.append(taskid)
def add_runtime_dependencies(depids, tasknames, depends):
for depid in depids:
@@ -417,20 +419,15 @@ class RunQueueData:
depdata = taskData.run_targets[depid][0]
if depdata is None:
continue
dep = taskData.fn_index[depdata]
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depdata, taskname)
taskid = taskData.gettask_id(dep, taskname, False)
if taskid is not None:
depends.add(taskid)
def add_resolved_dependencies(depids, tasknames, depends):
for depid in depids:
for taskname in tasknames:
taskid = taskData.gettask_id_fromfnid(depid, taskname)
if taskid is not None:
depends.add(taskid)
depends.append(taskid)
for task in xrange(len(taskData.tasks_name)):
depends = set()
depends = []
recrdepends = []
fnid = taskData.tasks_fnid[task]
fn = taskData.fn_index[fnid]
task_deps = self.dataCache.task_deps[fn]
@@ -442,7 +439,7 @@ class RunQueueData:
# Resolve task internal dependencies
#
# e.g. addtask before X after Y
depends = set(taskData.tasks_tdepends[task])
depends = taskData.tasks_tdepends[task]
# Resolve 'deptask' dependencies
#
@@ -457,91 +454,99 @@ class RunQueueData:
# e.g. do_sometask[rdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all RDEPENDS)
if 'rdeptask' in task_deps and taskData.tasks_name[task] in task_deps['rdeptask']:
tasknames = task_deps['rdeptask'][taskData.tasks_name[task]].split()
add_runtime_dependencies(taskData.rdepids[fnid], tasknames, depends)
taskname = task_deps['rdeptask'][taskData.tasks_name[task]]
add_runtime_dependencies(taskData.rdepids[fnid], [taskname], depends)
# Resolve inter-task dependencies
#
# e.g. do_sometask[depends] = "targetname:do_someothertask"
# (makes sure sometask runs after targetname's someothertask)
if fnid not in tdepends_fnid:
tdepends_fnid[fnid] = set()
idepends = taskData.tasks_idepends[task]
for (depid, idependtask) in idepends:
if depid in taskData.build_targets and not depid in taskData.failed_deps:
if depid in taskData.build_targets:
# Won't be in build_targets if ASSUME_PROVIDED
depdata = taskData.build_targets[depid][0]
if depdata is not None:
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
dep = taskData.fn_index[depdata]
taskid = taskData.gettask_id(dep, idependtask, False)
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s in %s depends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, dep))
depends.add(taskid)
irdepends = taskData.tasks_irdepends[task]
for (depid, idependtask) in irdepends:
if depid in taskData.run_targets:
# Won't be in run_targets if ASSUME_PROVIDED
depdata = taskData.run_targets[depid][0]
if depdata is not None:
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s in %s rdepends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, dep))
depends.add(taskid)
depends.append(taskid)
if depdata != fnid:
tdepends_fnid[fnid].add(taskid)
# Resolve recursive 'recrdeptask' dependencies (Part A)
# Resolve recursive 'recrdeptask' dependencies (A)
#
# e.g. do_sometask[recrdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
# We cover the recursive part of the dependencies below
if 'recrdeptask' in task_deps and taskData.tasks_name[task] in task_deps['recrdeptask']:
tasknames = task_deps['recrdeptask'][taskData.tasks_name[task]].split()
recursivetasks[task] = tasknames
add_build_dependencies(taskData.depids[fnid], tasknames, depends)
add_runtime_dependencies(taskData.rdepids[fnid], tasknames, depends)
if taskData.tasks_name[task] in tasknames:
recursivetasksselfref.add(task)
for taskname in task_deps['recrdeptask'][taskData.tasks_name[task]].split():
recrdepends.append(taskname)
add_build_dependencies(taskData.depids[fnid], [taskname], depends)
add_runtime_dependencies(taskData.rdepids[fnid], [taskname], depends)
# Rmove all self references
if task in depends:
newdep = []
logger.debug(2, "Task %s (%s %s) contains self reference! %s", task, taskData.fn_index[taskData.tasks_fnid[task]], taskData.tasks_name[task], depends)
for dep in depends:
if task != dep:
newdep.append(dep)
depends = newdep
self.runq_fnid.append(taskData.tasks_fnid[task])
self.runq_task.append(taskData.tasks_name[task])
self.runq_depends.append(depends)
self.runq_depends.append(set(depends))
self.runq_revdeps.append(set())
self.runq_hash.append("")
runq_build.append(0)
runq_recrdepends.append(recrdepends)
# Resolve recursive 'recrdeptask' dependencies (Part B)
#
# Build a list of recursive cumulative dependencies for each fnid
# We do this by fnid, since if A depends on some task in B
# we're interested in later tasks B's fnid might have but B itself
# doesn't depend on
#
# Algorithm is O(tasks) + O(tasks)*O(fnids)
#
reccumdepends = {}
for task in xrange(len(self.runq_fnid)):
fnid = self.runq_fnid[task]
if fnid not in reccumdepends:
if fnid in tdepends_fnid:
reccumdepends[fnid] = tdepends_fnid[fnid]
else:
reccumdepends[fnid] = set()
reccumdepends[fnid].update(self.runq_depends[task])
for task in xrange(len(self.runq_fnid)):
taskfnid = self.runq_fnid[task]
for fnid in reccumdepends:
if task in reccumdepends[fnid]:
reccumdepends[fnid].add(task)
if taskfnid in reccumdepends:
reccumdepends[fnid].update(reccumdepends[taskfnid])
# Resolve recursive 'recrdeptask' dependencies (B)
#
# e.g. do_sometask[recrdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
# We need to do this separately since we need all of self.runq_depends to be complete before this is processed
extradeps = {}
for task in recursivetasks:
extradeps[task] = set(self.runq_depends[task])
tasknames = recursivetasks[task]
seendeps = set()
seenfnid = []
def generate_recdeps(t):
newdeps = set()
add_resolved_dependencies([taskData.tasks_fnid[t]], tasknames, newdeps)
extradeps[task].update(newdeps)
seendeps.add(t)
newdeps.add(t)
for i in newdeps:
for n in self.runq_depends[i]:
if n not in seendeps:
generate_recdeps(n)
generate_recdeps(task)
# Remove circular references so that do_a[recrdeptask] = "do_a do_b" can work
for task in recursivetasks:
extradeps[task].difference_update(recursivetasksselfref)
for task in xrange(len(taskData.tasks_name)):
# Add in extra dependencies
if task in extradeps:
self.runq_depends[task] = extradeps[task]
# Remove all self references
if task in self.runq_depends[task]:
logger.debug(2, "Task %s (%s %s) contains self reference! %s", task, taskData.fn_index[taskData.tasks_fnid[task]], taskData.tasks_name[task], self.runq_depends[task])
self.runq_depends[task].remove(task)
for task in xrange(len(self.runq_fnid)):
if len(runq_recrdepends[task]) > 0:
taskfnid = self.runq_fnid[task]
for dep in reccumdepends[taskfnid]:
# Ignore self references
if dep == task:
continue
for taskname in runq_recrdepends[task]:
if taskData.tasks_name[dep] == taskname:
self.runq_depends[task].add(dep)
# Step B - Mark all active tasks
#
@@ -692,36 +697,19 @@ class RunQueueData:
stampfnwhitelist.append(fn)
self.stampfnwhitelist = stampfnwhitelist
# Iterate over the task list looking for tasks with a 'setscene' function
# Interate over the task list looking for tasks with a 'setscene' function
self.runq_setscene = []
if not self.cooker.configuration.nosetscene:
for task in range(len(self.runq_fnid)):
setscene = taskData.gettask_id(self.taskData.fn_index[self.runq_fnid[task]], self.runq_task[task] + "_setscene", False)
if not setscene:
continue
self.runq_setscene.append(task)
def invalidate_task(fn, taskname, error_nostamp):
taskdep = self.dataCache.task_deps[fn]
if 'nostamp' in taskdep and taskname in taskdep['nostamp']:
if error_nostamp:
bb.fatal("Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
bb.debug(1, "Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
logger.verbose("Invalidate task %s, %s", taskname, fn)
bb.parse.siggen.invalidate_task(taskname, self.dataCache, fn)
for task in range(len(self.runq_fnid)):
setscene = taskData.gettask_id(self.taskData.fn_index[self.runq_fnid[task]], self.runq_task[task] + "_setscene", False)
if not setscene:
continue
self.runq_setscene.append(task)
# Invalidate task if force mode active
if self.cooker.configuration.force:
for (fn, target) in self.target_pairs:
invalidate_task(fn, target, False)
# Invalidate task if invalidate mode active
if self.cooker.configuration.invalidate_stamp:
for (fn, target) in self.target_pairs:
for st in self.cooker.configuration.invalidate_stamp.split(','):
invalidate_task(fn, "do_%s" % st, True)
logger.verbose("Invalidate task %s, %s", target, fn)
bb.parse.siggen.invalidate_task(target, self.dataCache, fn)
# Interate over the task list and call into the siggen code
dealtwith = set()
@@ -793,7 +781,101 @@ class RunQueue:
self.rqexe = None
def check_stamp_task(self, task, taskname = None, recurse = False, cache = None):
def check_stamps(self):
unchecked = {}
current = []
notcurrent = []
buildable = []
if self.stamppolicy == "perfile":
fulldeptree = False
else:
fulldeptree = True
stampwhitelist = []
if self.stamppolicy == "whitelist":
stampwhitelist = self.rqdata.stampfnwhitelist
for task in xrange(len(self.rqdata.runq_fnid)):
unchecked[task] = ""
if len(self.rqdata.runq_depends[task]) == 0:
buildable.append(task)
def check_buildable(self, task, buildable):
for revdep in self.rqdata.runq_revdeps[task]:
alldeps = 1
for dep in self.rqdata.runq_depends[revdep]:
if dep in unchecked:
alldeps = 0
if alldeps == 1:
if revdep in unchecked:
buildable.append(revdep)
for task in xrange(len(self.rqdata.runq_fnid)):
if task not in unchecked:
continue
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
# If the stamp is missing its not current
if not os.access(stampfile, os.F_OK):
del unchecked[task]
notcurrent.append(task)
check_buildable(self, task, buildable)
continue
# If its a 'nostamp' task, it's not current
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'nostamp' in taskdep and task in taskdep['nostamp']:
del unchecked[task]
notcurrent.append(task)
check_buildable(self, task, buildable)
continue
while (len(buildable) > 0):
nextbuildable = []
for task in buildable:
if task in unchecked:
fn = self.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
iscurrent = True
t1 = os.stat(stampfile)[stat.ST_MTIME]
for dep in self.rqdata.runq_depends[task]:
if iscurrent:
fn2 = self.taskData.fn_index[self.rqdata.runq_fnid[dep]]
taskname2 = self.rqdata.runq_task[dep]
stampfile2 = bb.build.stampfile(taskname2, self.rqdata.dataCache, fn2)
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
if dep in notcurrent:
iscurrent = False
else:
t2 = os.stat(stampfile2)[stat.ST_MTIME]
if t1 < t2:
iscurrent = False
del unchecked[task]
if iscurrent:
current.append(task)
else:
notcurrent.append(task)
check_buildable(self, task, nextbuildable)
buildable = nextbuildable
#for task in range(len(self.runq_fnid)):
# fn = self.taskData.fn_index[self.runq_fnid[task]]
# taskname = self.runq_task[task]
# print "%s %s.%s" % (task, taskname, fn)
#print "Unchecked: %s" % unchecked
#print "Current: %s" % current
#print "Not current: %s" % notcurrent
if len(unchecked) > 0:
bb.msg.fatal("RunQueue", "check_stamps fatal internal error")
return current
def check_stamp_task(self, task, taskname = None, recurse = False):
def get_timestamp(f):
try:
if not os.access(f, os.F_OK):
@@ -829,9 +911,6 @@ class RunQueue:
if taskname != "do_setscene" and taskname.endswith("_setscene"):
return True
if cache is None:
cache = {}
iscurrent = True
t1 = get_timestamp(stampfile)
for dep in self.rqdata.runq_depends[task]:
@@ -852,18 +931,10 @@ class RunQueue:
logger.debug(2, 'Stampfile %s < %s', stampfile, stampfile2)
iscurrent = False
if recurse and iscurrent:
if dep in cache:
iscurrent = cache[dep]
if not iscurrent:
logger.debug(2, 'Stampfile for dependency %s:%s invalid (cached)' % (fn2, taskname2))
else:
iscurrent = self.check_stamp_task(dep, recurse=True, cache=cache)
cache[dep] = iscurrent
if recurse:
cache[task] = iscurrent
iscurrent = self.check_stamp_task(dep, recurse=True)
return iscurrent
def _execute_runqueue(self):
def execute_runqueue(self):
"""
Run the tasks in a queue prepared by rqdata.prepare()
Upon failure, optionally try to recover the build using any alternate providers
@@ -927,19 +998,6 @@ class RunQueue:
# Loop
return retval
def execute_runqueue(self):
# Catch unexpected exceptions and ensure we exit when an error occurs, not loop.
try:
return self._execute_runqueue()
except bb.runqueue.TaskFailure:
raise
except SystemExit:
raise
except:
logger.error("An uncaught exception occured in runqueue, please see the failure below:")
self.state = runQueueComplete
raise
def finish_runqueue(self, now = False):
if not self.rqexe:
return
@@ -983,36 +1041,23 @@ class RunQueueExecute:
self.build_stamps = {}
self.failed_fnids = []
self.stampcache = {}
def runqueue_process_waitpid(self):
"""
Return none is there are no processes awaiting result collection, otherwise
collect the process exit codes and close the information pipe.
"""
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
result = os.waitpid(-1, os.WNOHANG)
if result[0] == 0 and result[1] == 0:
return None
if os.WIFEXITED(status):
status = os.WEXITSTATUS(status)
elif os.WIFSIGNALED(status):
# Per shell conventions for $?, when a process exits due to
# a signal, we return an exit code of 128 + SIGNUM
status = 128 + os.WTERMSIG(status)
task = self.build_pids[pid]
del self.build_pids[pid]
self.build_pipes[pid].close()
del self.build_pipes[pid]
# self.build_stamps[pid] may not exist when use shared work directory.
if pid in self.build_stamps:
del self.build_stamps[pid]
if status != 0:
self.task_fail(task, status)
task = self.build_pids[result[0]]
del self.build_pids[result[0]]
self.build_pipes[result[0]].close()
del self.build_pipes[result[0]]
# self.build_stamps[result[0]] may not exist when use shared work directory.
if result[0] in self.build_stamps.keys():
del self.build_stamps[result[0]]
if result[1] != 0:
self.task_fail(task, result[1]>>8)
else:
self.task_complete(task)
return True
@@ -1119,6 +1164,8 @@ class RunQueueExecute:
os.umask(umask)
self.cooker.configuration.data.setVar("BB_WORKERCONTEXT", "1")
self.cooker.configuration.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY", self)
self.cooker.configuration.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2", fn)
bb.parse.siggen.set_taskdata(self.rqdata.hashes, self.rqdata.hash_deps)
ret = 0
try:
@@ -1176,8 +1223,6 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.stats = RunQueueStats(len(self.rqdata.runq_fnid))
self.stampcache = {}
# Mark initial buildable tasks
for task in xrange(self.stats.total):
self.runq_running.append(0)
@@ -1186,7 +1231,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.runq_buildable.append(1)
else:
self.runq_buildable.append(0)
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
self.rq.scenequeue_covered.add(task)
found = True
@@ -1197,7 +1242,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
continue
logger.debug(1, 'Considering %s (%s): %s' % (task, self.rqdata.get_user_idstring(task), str(self.rqdata.runq_revdeps[task])))
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
ok = True
for revdep in self.rqdata.runq_revdeps[task]:
if self.rqdata.runq_fnid[task] != self.rqdata.runq_fnid[revdep]:
@@ -1214,30 +1259,9 @@ class RunQueueExecuteTasks(RunQueueExecute):
# Allow the metadata to elect for setscene tasks to run anyway
covered_remove = set()
if self.rq.setsceneverify:
invalidtasks = []
for task in xrange(len(self.rqdata.runq_task)):
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task]
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
continue
if self.rq.check_stamp_task(task, taskname + "_setscene", cache=self.stampcache):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(task))
continue
if self.rq.check_stamp_task(task, taskname, recurse = True, cache=self.stampcache):
logger.debug(2, 'Normal stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(task))
continue
invalidtasks.append(task)
call = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d, invalidtasks=invalidtasks)"
call2 = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d)"
locs = { "covered" : self.rq.scenequeue_covered, "tasknames" : self.rqdata.runq_task, "fnids" : self.rqdata.runq_fnid, "fns" : self.rqdata.taskData.fn_index, "d" : self.cooker.configuration.data, "invalidtasks" : invalidtasks }
# Backwards compatibility with older versions without invalidtasks
try:
covered_remove = bb.utils.better_eval(call, locs)
except TypeError:
covered_remove = bb.utils.better_eval(call2, locs)
call = self.rq.setsceneverify + "(covered, tasknames, fnids, fns, d)"
locs = { "covered" : self.rq.scenequeue_covered, "tasknames" : self.rqdata.runq_task, "fnids" : self.rqdata.runq_fnid, "fns" : self.rqdata.taskData.fn_index, "d" : self.cooker.configuration.data }
covered_remove = bb.utils.better_eval(call, locs)
for task in covered_remove:
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
@@ -1349,7 +1373,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.task_skip(task)
return True
if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
if self.rq.check_stamp_task(task, taskname):
logger.debug(2, "Stamp current task %s (%s)", task,
self.rqdata.get_user_idstring(task))
self.task_skip(task)
@@ -1490,7 +1514,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
dep = self.rqdata.taskData.fn_index[depdata]
taskid = self.rqdata.get_task_id(self.rqdata.taskData.getfn_id(dep), idependtask.replace("_setscene", ""))
if taskid is None:
bb.msg.fatal("RunQueue", "Task %s:%s depends upon non-existent task %s:%s" % (self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realid]], self.rqdata.taskData.tasks_name[realid], dep, idependtask))
bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (self.rqdata.taskData.tasks_name[realid], dep, idependtask))
sq_revdeps_squash[self.rqdata.runq_setscene.index(task)].add(self.rqdata.runq_setscene.index(taskid))
# Have to zero this to avoid circular dependencies
@@ -1533,18 +1557,12 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
bb.build.make_stamp(taskname + "_setscene", self.rqdata.dataCache, fn)
continue
if self.rq.check_stamp_task(realtask, taskname + "_setscene", cache=self.stampcache):
if self.rq.check_stamp_task(realtask, taskname + "_setscene"):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue
if self.rq.check_stamp_task(realtask, taskname, recurse = True, cache=self.stampcache):
logger.debug(2, 'Normal stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue
sq_fn.append(fn)
sq_hashfn.append(self.rqdata.dataCache.hashfn[fn])
sq_hash.append(self.rqdata.runq_hash[realtask])
@@ -1632,7 +1650,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realtask]]
taskname = self.rqdata.runq_task[realtask] + "_setscene"
if self.rq.check_stamp_task(realtask, self.rqdata.runq_task[realtask], recurse = True, cache=self.stampcache):
if self.rq.check_stamp_task(realtask, self.rqdata.runq_task[realtask], recurse = True):
logger.debug(2, 'Stamp for underlying task %s(%s) is current, so skipping setscene variant',
task, self.rqdata.get_user_idstring(realtask))
self.task_failoutright(task)
@@ -1644,7 +1662,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.task_failoutright(task)
return True
if self.rq.check_stamp_task(realtask, taskname, cache=self.stampcache):
if self.rq.check_stamp_task(realtask, taskname):
logger.debug(2, 'Setscene stamp current task %s(%s), so skip it and its dependencies',
task, self.rqdata.get_user_idstring(realtask))
self.task_skip(task)
@@ -1675,9 +1693,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.rq.scenequeue_covered = set()
for task in oldcovered:
self.rq.scenequeue_covered.add(self.rqdata.runq_setscene[task])
self.rq.scenequeue_notcovered = set()
for task in self.scenequeue_notcovered:
self.rq.scenequeue_notcovered.add(self.rqdata.runq_setscene[task])
logger.debug(1, 'We can skip tasks %s', sorted(self.rq.scenequeue_covered))
@@ -1761,6 +1776,15 @@ class runQueueTaskCompleted(runQueueEvent):
Event notifing a task completed
"""
def check_stamp_fn(fn, taskname, d):
rqexe = d.getVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY")
fn = d.getVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2")
fnid = rqexe.rqdata.taskData.getfn_id(fn)
taskid = rqexe.rqdata.get_task_id(fnid, taskname)
if taskid is not None:
return rqexe.rq.check_stamp_task(taskid)
return None
class runQueuePipe():
"""
Abstraction for a pipe between a worker thread and the server
@@ -1768,7 +1792,7 @@ class runQueuePipe():
def __init__(self, pipein, pipeout, d):
self.input = pipein
pipeout.close()
bb.utils.nonblockingfd(self.input)
fcntl.fcntl(self.input, fcntl.F_SETFL, fcntl.fcntl(self.input, fcntl.F_GETFL) | os.O_NONBLOCK)
self.queue = ""
self.d = d

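Within the runqueue.py diff above, one version of runqueue_process_waitpid decodes the raw status returned by os.waitpid() rather than shifting it: normal exits report WEXITSTATUS, and a death by signal is mapped to 128 plus the signal number, matching the shell's convention for $?. Just that decoding, as a standalone sketch:

    import os

    def reap_one_child():
        """Collect one finished child (assumes a child exists), mapping
        the raw wait status to a shell-style exit code."""
        pid, status = os.waitpid(-1, os.WNOHANG)
        if pid == 0 or os.WIFSTOPPED(status):
            return None, None               # nothing has exited yet
        if os.WIFEXITED(status):
            exitcode = os.WEXITSTATUS(status)
        elif os.WIFSIGNALED(status):
            exitcode = 128 + os.WTERMSIG(status)   # e.g. SIGKILL -> 137
        else:
            exitcode = -1                   # unexpected status
        return pid, exitcode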

@@ -48,7 +48,7 @@ class ServerCommunicator():
if self.connection.poll(.5):
return self.connection.recv()
else:
return None
return None, "Timeout while attempting to communicate with bitbake server"
except KeyboardInterrupt:
pass


@@ -2,7 +2,6 @@ import hashlib
import logging
import os
import re
import tempfile
import bb.data
logger = logging.getLogger('BitBake.SigGen')
@@ -48,9 +47,6 @@ class SignatureGenerator(object):
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s*.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, fn, task, stampbase, runtime):
return
@@ -68,7 +64,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taskhash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
@@ -116,10 +111,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = data + dep
if dep in lookupcache:
var = lookupcache[dep]
elif dep[-1] == ']':
vf = dep[:-1].split('[')
var = d.getVarFlag(vf[0], vf[1], False)
lookupcache[dep] = var
else:
var = d.getVar(dep, False)
lookupcache[dep] = var
@@ -174,7 +165,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = []
self.file_checksum_values[k] = {}
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
@@ -185,12 +175,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = data + self.taskhash[dep]
self.runtaskdeps[k].append(dep)
if task in dataCache.file_checksums[fn]:
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
for (f,cs) in checksums:
self.file_checksum_values[k][f] = cs
data = data + cs
taint = self.read_taint(fn, task, dataCache.stamp[fn])
if taint:
data = data + taint
@@ -231,7 +215,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
if runtime and k in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[k]
data['file_checksum_values'] = self.file_checksum_values[k]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
@@ -240,20 +223,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
if taint:
data['taint'] = taint
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
with os.fdopen(fd, "wb") as stream:
p = pickle.dump(data, stream, -1)
stream.flush()
os.fsync(fd)
os.chmod(tmpfile, 0664)
os.rename(tmpfile, sigfile)
except (OSError, IOError), err:
try:
os.unlink(tmpfile)
except OSError:
pass
raise err
p = pickle.Pickler(file(sigfile, "wb"), -1)
p.dump(data)
def dump_sigs(self, dataCache):
for fn in self.taskdeps:
@@ -269,24 +241,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
def stampfile(self, stampbase, fn, taskname, extrainfo, clean=False):
def stampfile(self, stampbase, fn, taskname, extrainfo):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
k = fn + "." + taskname[:-9]
else:
k = fn + "." + taskname
if clean:
h = "*"
taskname = taskname + "*"
elif k in self.taskhash:
if k in self.taskhash:
h = self.taskhash[k]
else:
# If k is not in basehash, then error
h = self.basehash[k]
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
@@ -299,7 +265,7 @@ def dump_this_task(outfile, d):
def clean_basepath(a):
if a.startswith("virtual:"):
b = a.rsplit(":", 1)[0] + ":" + a.rsplit("/", 1)[1]
b = a.rsplit(":", 1)[0] + a.rsplit("/", 1)[1]
else:
b = a.rsplit("/", 1)[1]
return b
@@ -310,12 +276,10 @@ def clean_basepaths(a):
b[clean_basepath(x)] = a[x]
return b
def compare_sigfiles(a, b, recursecb = None):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
def compare_sigfiles(a, b):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(open(b, "rb"))
p2 = pickle.Unpickler(file(b, "rb"))
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
@@ -331,123 +295,97 @@ def compare_sigfiles(a, b, recursecb = None):
return changed, added, removed
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
output.append("basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist']))
print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
if a_data['basewhitelist'] and b_data['basewhitelist']:
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
print "changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist'])
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
output.append("taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
print "taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist'])
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
print "changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist'])
if a_data['taskdeps'] != b_data['taskdeps']:
output.append("Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
print "Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps']))
if a_data['basehash'] != b_data['basehash']:
output.append("basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash']))
print "basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash'])
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in changed:
output.append("List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
print "List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep])
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
print "changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep])
if added:
for dep in added:
output.append("Dependency on variable %s was added" % (dep))
print "Dependency on variable %s was added" % (dep)
if removed:
for dep in removed:
output.append("Dependency on Variable %s was removed" % (dep))
print "Dependency on Variable %s was removed" % (dep)
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
output.append("Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
changed, added, removed = dict_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
for f in changed:
output.append("Checksum for file %s changed from %s to %s" % (f, a_data['file_checksum_values'][f], b_data['file_checksum_values'][f]))
if added:
for f in added:
output.append("Dependency on checksum of file %s was added" % (f))
if removed:
for f in removed:
output.append("Dependency on checksum of file %s was removed" % (f))
print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
a = a_data['runtaskhashes']
b = b_data['runtaskhashes']
a = clean_basepaths(a_data['runtaskhashes'])
b = clean_basepaths(b_data['runtaskhashes'])
changed, added, removed = dict_diff(a, b)
if added:
for dep in added:
bdep_found = False
if removed:
for bdep in removed:
if a[dep] == b[bdep]:
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
output.append("Dependency on task %s was added with hash %s" % (clean_basepath(dep), a[dep]))
bdep_found = False
if removed:
for bdep in removed:
if a[dep] == b[bdep]:
#print "Dependency on task %s was replaced by %s with same hash" % (dep, bdep)
bdep_found = True
if not bdep_found:
print "Dependency on task %s was added with hash %s" % (dep, a[dep])
if removed:
for dep in removed:
adep_found = False
if added:
for adep in added:
if a[adep] == b[dep]:
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
output.append("Dependency on task %s was removed with hash %s" % (clean_basepath(dep), b[dep]))
adep_found = False
if added:
for adep in added:
if a[adep] == b[dep]:
#print "Dependency on task %s was replaced by %s with same hash" % (adep, dep)
adep_found = True
if not adep_found:
print "Dependency on task %s was removed with hash %s" % (dep, b[dep])
if changed:
for dep in changed:
output.append("Hash for dependent task %s changed from %s to %s" % (clean_basepath(dep), a[dep], b[dep]))
if callable(recursecb):
recout = recursecb(dep, a[dep], b[dep])
if recout:
output.extend(recout)
print "Hash for dependent task %s changed from %s to %s" % (dep, a[dep], b[dep])
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
if a_taint != b_taint:
output.append("Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint))
return output
print "Taint (by forced/invalidated task) changed from %s to %s" % (a_taint, b_taint)
def dump_sigfile(a):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
output.append("basewhitelist: %s" % (a_data['basewhitelist']))
print "basewhitelist: %s" % (a_data['basewhitelist'])
output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))
print "taskwhitelist: %s" % (a_data['taskwhitelist'])
output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))
print "Task dependencies: %s" % (sorted(a_data['taskdeps']))
output.append("basehash: %s" % (a_data['basehash']))
print "basehash: %s" % (a_data['basehash'])
for dep in a_data['gendeps']:
output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))
print "List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep])
for dep in a_data['varvals']:
output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))
print "Variable %s value is %s" % (dep, a_data['varvals'][dep])
if 'runtaskdeps' in a_data:
output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))
if 'file_checksum_values' in a_data:
output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))
print "Tasks this task depends on: %s" % (a_data['runtaskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))
print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])
if 'taint' in a_data:
output.append("Tainted (by forced/invalidated task): %s" % a_data['taint'])
return output
print "Tainted (by forced/invalidated task): %s" % a_data['taint']


@@ -55,7 +55,6 @@ class TaskData:
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
self.tasks_irdepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
@@ -116,16 +115,6 @@ class TaskData:
ids.append(self.tasks_lookup[fnid][task])
return ids
def gettask_id_fromfnid(self, fnid, task):
"""
Return an ID number for the task matching fnid and task.
"""
if fnid in self.tasks_lookup:
if task in self.tasks_lookup[fnid]:
return self.tasks_lookup[fnid][task]
return None
def gettask_id(self, fn, task, create = True):
"""
Return an ID number for the task matching fn and task.
@@ -145,7 +134,6 @@ class TaskData:
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
self.tasks_irdepends.append([])
listid = len(self.tasks_name) - 1
@@ -176,9 +164,6 @@ class TaskData:
# Work out task dependencies
parentids = []
for dep in task_deps['parents'][task]:
if dep not in task_deps['tasks']:
bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep))
continue
parentid = self.gettask_id(fn, dep)
parentids.append(parentid)
taskid = self.gettask_id(fn, task)
@@ -193,15 +178,6 @@ class TaskData:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
if 'rdepends' in task_deps and task in task_deps['rdepends']:
ids = []
for dep in task_deps['rdepends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'rdepends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getrun_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_irdepends[taskid].extend(ids)
# Work out build dependencies
if not fnid in self.depids:
@@ -485,7 +461,6 @@ class TaskData:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
self.consider_msgs_cache.append(item)
raise bb.providers.MultipleRProvider(item)
# run through the list until we find one that we can build
for fn in eligible:
@@ -558,11 +533,6 @@ class TaskData:
dependees = self.get_rdependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_irdepends)):
irdepends = self.tasks_irdepends[taskid]
for (idependid, idependtask) in irdepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
def add_unresolved(self, cfgData, dataCache):
"""
@@ -584,7 +554,7 @@ class TaskData:
try:
self.add_rprovider(cfgData, dataCache, target)
added = added + 1
except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
except bb.providers.NoRProvider:
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
if added == 0:


@@ -1,369 +0,0 @@
#
# BitBake Test for codeparser.py
#
# Copyright (C) 2010 Chris Larson
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import logging
import bb
logger = logging.getLogger('BitBake.TestCodeParser')
import bb.data
class ReferenceTest(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
def setEmptyVars(self, varlist):
for k in varlist:
self.d.setVar(k, "")
def setValues(self, values):
for k, v in values.items():
self.d.setVar(k, v)
def assertReferences(self, refs):
self.assertEqual(self.references, refs)
def assertExecs(self, execs):
self.assertEqual(self.execs, execs)
class VariableReferenceTest(ReferenceTest):
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
self.references = parsedvar.references
def test_simple_reference(self):
self.setEmptyVars(["FOO"])
self.parseExpression("${FOO}")
self.assertReferences(set(["FOO"]))
def test_nested_reference(self):
self.setEmptyVars(["BAR"])
self.d.setVar("FOO", "BAR")
self.parseExpression("${${FOO}}")
self.assertReferences(set(["FOO", "BAR"]))
def test_python_reference(self):
self.setEmptyVars(["BAR"])
self.parseExpression("${@bb.data.getVar('BAR', d, True) + 'foo'}")
self.assertReferences(set(["BAR"]))
class ShellReferenceTest(ReferenceTest):
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
parser = bb.codeparser.ShellParser("ParserTest", logger)
parser.parse_shell(parsedvar.value)
self.references = parsedvar.references
self.execs = parser.execs
def test_quotes_inside_assign(self):
self.parseExpression('foo=foo"bar"baz')
self.assertReferences(set([]))
def test_quotes_inside_arg(self):
self.parseExpression('sed s#"bar baz"#"alpha beta"#g')
self.assertExecs(set(["sed"]))
def test_arg_continuation(self):
self.parseExpression("sed -i -e s,foo,bar,g \\\n *.pc")
self.assertExecs(set(["sed"]))
def test_dollar_in_quoted(self):
self.parseExpression('sed -i -e "foo$" *.pc')
self.assertExecs(set(["sed"]))
def test_quotes_inside_arg_continuation(self):
self.setEmptyVars(["bindir", "D", "libdir"])
self.parseExpression("""
sed -i -e s#"moc_location=.*$"#"moc_location=${bindir}/moc4"# \\
-e s#"uic_location=.*$"#"uic_location=${bindir}/uic4"# \\
${D}${libdir}/pkgconfig/*.pc
""")
self.assertReferences(set(["bindir", "D", "libdir"]))
def test_assign_subshell_expansion(self):
self.parseExpression("foo=$(echo bar)")
self.assertExecs(set(["echo"]))
def test_shell_unexpanded(self):
self.setEmptyVars(["QT_BASE_NAME"])
self.parseExpression('echo "${QT_BASE_NAME}"')
self.assertExecs(set(["echo"]))
self.assertReferences(set(["QT_BASE_NAME"]))
def test_incomplete_varexp_single_quotes(self):
self.parseExpression("sed -i -e 's:IP{:I${:g' $pc")
self.assertExecs(set(["sed"]))
def test_until(self):
self.parseExpression("until false; do echo true; done")
self.assertExecs(set(["false", "echo"]))
self.assertReferences(set())
def test_case(self):
self.parseExpression("""
case $foo in
*)
bar
;;
esac
""")
self.assertExecs(set(["bar"]))
self.assertReferences(set())
def test_assign_exec(self):
self.parseExpression("a=b c='foo bar' alpha 1 2 3")
self.assertExecs(set(["alpha"]))
def test_redirect_to_file(self):
self.setEmptyVars(["foo"])
self.parseExpression("echo foo >${foo}/bar")
self.assertExecs(set(["echo"]))
self.assertReferences(set(["foo"]))
def test_heredoc(self):
self.setEmptyVars(["theta"])
self.parseExpression("""
cat <<END
alpha
beta
${theta}
END
""")
self.assertReferences(set(["theta"]))
def test_redirect_from_heredoc(self):
v = ["B", "SHADOW_MAILDIR", "SHADOW_MAILFILE", "SHADOW_UTMPDIR", "SHADOW_LOGDIR", "bindir"]
self.setEmptyVars(v)
self.parseExpression("""
cat <<END >${B}/cachedpaths
shadow_cv_maildir=${SHADOW_MAILDIR}
shadow_cv_mailfile=${SHADOW_MAILFILE}
shadow_cv_utmpdir=${SHADOW_UTMPDIR}
shadow_cv_logdir=${SHADOW_LOGDIR}
shadow_cv_passwd_dir=${bindir}
END
""")
self.assertReferences(set(v))
self.assertExecs(set(["cat"]))
# def test_incomplete_command_expansion(self):
# self.assertRaises(reftracker.ShellSyntaxError, reftracker.execs,
# bbvalue.shparse("cp foo`", self.d), self.d)
# def test_rogue_dollarsign(self):
# self.setValues({"D" : "/tmp"})
# self.parseExpression("install -d ${D}$")
# self.assertReferences(set(["D"]))
# self.assertExecs(set(["install"]))
class PythonReferenceTest(ReferenceTest):
def setUp(self):
self.d = bb.data.init()
if hasattr(bb.utils, "_context"):
self.context = bb.utils._context
else:
import __builtin__
self.context = __builtin__.__dict__
def parseExpression(self, exp):
parsedvar = self.d.expandWithRefs(exp, None)
parser = bb.codeparser.PythonParser("ParserTest", logger)
parser.parse_python(parsedvar.value)
self.references = parsedvar.references | parser.references
self.execs = parser.execs
@staticmethod
def indent(value):
"""Python Snippets have to be indented, python values don't have to
be. These unit tests are testing snippets."""
return " " + value
def test_getvar_reference(self):
self.parseExpression("bb.data.getVar('foo', d, True)")
self.assertReferences(set(["foo"]))
self.assertExecs(set())
def test_getvar_computed_reference(self):
self.parseExpression("bb.data.getVar('f' + 'o' + 'o', d, True)")
self.assertReferences(set())
self.assertExecs(set())
def test_getvar_exec_reference(self):
self.parseExpression("eval('bb.data.getVar(\"foo\", d, True)')")
self.assertReferences(set())
self.assertExecs(set(["eval"]))
def test_var_reference(self):
self.context["foo"] = lambda x: x
self.setEmptyVars(["FOO"])
self.parseExpression("foo('${FOO}')")
self.assertReferences(set(["FOO"]))
self.assertExecs(set(["foo"]))
del self.context["foo"]
def test_var_exec(self):
for etype in ("func", "task"):
self.d.setVar("do_something", "echo 'hi mom! ${FOO}'")
self.d.setVarFlag("do_something", etype, True)
self.parseExpression("bb.build.exec_func('do_something', d)")
self.assertReferences(set(["do_something"]))
def test_function_reference(self):
self.context["testfunc"] = lambda msg: bb.msg.note(1, None, msg)
self.d.setVar("FOO", "Hello, World!")
self.parseExpression("testfunc('${FOO}')")
self.assertReferences(set(["FOO"]))
self.assertExecs(set(["testfunc"]))
del self.context["testfunc"]
def test_qualified_function_reference(self):
self.parseExpression("time.time()")
self.assertExecs(set(["time.time"]))
def test_qualified_function_reference_2(self):
self.parseExpression("os.path.dirname('/foo/bar')")
self.assertExecs(set(["os.path.dirname"]))
def test_qualified_function_reference_nested(self):
self.parseExpression("time.strftime('%Y%m%d',time.gmtime())")
self.assertExecs(set(["time.strftime", "time.gmtime"]))
def test_function_reference_chained(self):
self.context["testget"] = lambda: "\tstrip me "
self.parseExpression("testget().strip()")
self.assertExecs(set(["testget"]))
del self.context["testget"]
class DependencyReferenceTest(ReferenceTest):
pydata = """
bb.data.getVar('somevar', d, True)
def test(d):
foo = 'bar %s' % 'foo'
def test2(d):
d.getVar(foo, True)
d.getVar('bar', False)
test2(d)
def a():
\"\"\"some
stuff
\"\"\"
return "heh"
test(d)
bb.data.expand(bb.data.getVar("something", False, d), d)
bb.data.expand("${inexpand} somethingelse", d)
bb.data.getVar(a(), d, False)
"""
def test_python(self):
self.d.setVar("FOO", self.pydata)
self.setEmptyVars(["inexpand", "a", "test2", "test"])
self.d.setVarFlags("FOO", {"func": True, "python": True})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
shelldata = """
foo () {
bar
}
{
echo baz
$(heh)
eval `moo`
}
a=b
c=d
(
true && false
test -f foo
testval=something
$testval
) || aiee
! inverted
echo ${somevar}
case foo in
bar)
echo bar
;;
baz)
echo baz
;;
foo*)
echo foo
;;
esac
"""
def test_shell(self):
execs = ["bar", "echo", "heh", "moo", "true", "aiee"]
self.d.setVar("somevar", "heh")
self.d.setVar("inverted", "echo inverted...")
self.d.setVarFlag("inverted", "func", True)
self.d.setVar("FOO", self.shelldata)
self.d.setVarFlags("FOO", {"func": True})
self.setEmptyVars(execs)
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["somevar", "inverted"] + execs))
def test_vardeps(self):
self.d.setVar("oe_libinstall", "echo test")
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["oe_libinstall"]))
def test_vardeps_expand(self):
self.d.setVar("oe_libinstall", "echo test")
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEquals(deps, set(["oe_libinstall"]))
#Currently no wildcard support
#def test_vardeps_wildcards(self):
# self.d.setVar("oe_libinstall", "echo test")
# self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
# self.d.setVarFlag("FOO", "vardeps", "oe_*")
# self.assertEquals(deps, set(["oe_libinstall"]))


@@ -1,134 +0,0 @@
#
# BitBake Tests for Copy-on-Write (cow.py)
#
# Copyright 2006 Holger Freyther <freyther@handhelds.org>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import os
class COWTestCase(unittest.TestCase):
"""
Test case for the COW module from mithro
"""
def testGetSet(self):
"""
Test and set
"""
from bb.COW import COWDictBase
a = COWDictBase.copy()
self.assertEquals(False, a.has_key('a'))
a['a'] = 'a'
a['b'] = 'b'
self.assertEquals(True, a.has_key('a'))
self.assertEquals(True, a.has_key('b'))
self.assertEquals('a', a['a'] )
self.assertEquals('b', a['b'] )
def testCopyCopy(self):
"""
Test the copy of copies
"""
from bb.COW import COWDictBase
# create two COW dict 'instances'
b = COWDictBase.copy()
c = COWDictBase.copy()
# assign some keys to one instance, some keys to another
b['a'] = 10
b['c'] = 20
c['a'] = 30
# test separation of the two instances
self.assertEquals(False, c.has_key('c'))
self.assertEquals(30, c['a'])
self.assertEquals(10, b['a'])
# test copy
b_2 = b.copy()
c_2 = c.copy()
self.assertEquals(False, c_2.has_key('c'))
self.assertEquals(10, b_2['a'])
b_2['d'] = 40
self.assertEquals(False, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
c_2['d'] = 30
self.assertEquals(True, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(30, c_2['d'])
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
# test copy of the copy
c_3 = c_2.copy()
b_3 = b_2.copy()
b_3_2 = b_2.copy()
c_3['e'] = 4711
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(False, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
b_3['e'] = 'viel'
self.assertEquals('viel', b_3['e'])
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(True, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
def testCow(self):
from bb.COW import COWDictBase
c = COWDictBase.copy()
c['123'] = 1027
c['other'] = 4711
c['d'] = { 'abc' : 10, 'bcd' : 20 }
copy = c.copy()
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1027, copy['123'])
self.assertEquals(4711, copy['other'])
self.assertEquals({'abc':10, 'bcd':20}, copy['d'])
# cow it now
copy['123'] = 1028
copy['other'] = 4712
copy['d']['abc'] = 20
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1028, copy['123'])
self.assertEquals(4712, copy['other'])
self.assertEquals({'abc':20, 'bcd':20}, copy['d'])


@@ -1,252 +0,0 @@
#
# BitBake Tests for the Data Store (data.py/data_smart.py)
#
# Copyright (C) 2010 Chris Larson
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import bb
import bb.data
class DataExpansions(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d["foo"] = "value of foo"
self.d["bar"] = "value of bar"
self.d["value of foo"] = "value of 'value of foo'"
def test_one_var(self):
val = self.d.expand("${foo}")
self.assertEqual(str(val), "value of foo")
def test_indirect_one_var(self):
val = self.d.expand("${${foo}}")
self.assertEqual(str(val), "value of 'value of foo'")
def test_indirect_and_another(self):
val = self.d.expand("${${foo}} ${bar}")
self.assertEqual(str(val), "value of 'value of foo' value of bar")
def test_python_snippet(self):
val = self.d.expand("${@5*12}")
self.assertEqual(str(val), "60")
def test_expand_in_python_snippet(self):
val = self.d.expand("${@'boo ' + '${foo}'}")
self.assertEqual(str(val), "boo value of foo")
def test_python_snippet_getvar(self):
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value of foo value of bar")
def test_python_snippet_syntax_error(self):
self.d.setVar("FOO", "${@foo = 5}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_python_snippet_runtime_error(self):
self.d.setVar("FOO", "${@int('test')}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_python_snippet_error_path(self):
self.d.setVar("FOO", "foo value ${BAR}")
self.d.setVar("BAR", "bar value ${@int('test')}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_value_containing_value(self):
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value of foo value of bar")
def test_reference_undefined_var(self):
val = self.d.expand("${undefinedvar} meh")
self.assertEqual(str(val), "${undefinedvar} meh")
def test_double_reference(self):
self.d.setVar("BAR", "bar value")
self.d.setVar("FOO", "${BAR} foo ${BAR}")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "bar value foo bar value")
def test_direct_recursion(self):
self.d.setVar("FOO", "${FOO}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_indirect_recursion(self):
self.d.setVar("FOO", "${BAR}")
self.d.setVar("BAR", "${BAZ}")
self.d.setVar("BAZ", "${FOO}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_recursion_exception(self):
self.d.setVar("FOO", "${BAR}")
self.d.setVar("BAR", "${${@'FOO'}}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
def test_incomplete_varexp_single_quotes(self):
self.d.setVar("FOO", "sed -i -e 's:IP{:I${:g' $pc")
val = self.d.getVar("FOO", True)
self.assertEqual(str(val), "sed -i -e 's:IP{:I${:g' $pc")
def test_nonstring(self):
self.d.setVar("TEST", 5)
val = self.d.getVar("TEST", True)
self.assertEqual(str(val), "5")
def test_rename(self):
self.d.renameVar("foo", "newfoo")
self.assertEqual(self.d.getVar("newfoo"), "value of foo")
self.assertEqual(self.d.getVar("foo"), None)
def test_deletion(self):
self.d.delVar("foo")
self.assertEqual(self.d.getVar("foo"), None)
def test_keys(self):
keys = self.d.keys()
self.assertEqual(keys, ['value of foo', 'foo', 'bar'])
class TestNestedExpansions(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d["foo"] = "foo"
self.d["bar"] = "bar"
self.d["value of foobar"] = "187"
def test_refs(self):
val = self.d.expand("${value of ${foo}${bar}}")
self.assertEqual(str(val), "187")
#def test_python_refs(self):
# val = self.d.expand("${@${@3}**2 + ${@4}**2}")
# self.assertEqual(str(val), "25")
def test_ref_in_python_ref(self):
val = self.d.expand("${@'${foo}' + 'bar'}")
self.assertEqual(str(val), "foobar")
def test_python_ref_in_ref(self):
val = self.d.expand("${${@'f'+'o'+'o'}}")
self.assertEqual(str(val), "foo")
def test_deep_nesting(self):
depth = 100
val = self.d.expand("${" * depth + "foo" + "}" * depth)
self.assertEqual(str(val), "foo")
#def test_deep_python_nesting(self):
# depth = 50
# val = self.d.expand("${@" * depth + "1" + "+1}" * depth)
# self.assertEqual(str(val), str(depth + 1))
def test_mixed(self):
val = self.d.expand("${value of ${@('${foo}'+'bar')[0:3]}${${@'BAR'.lower()}}}")
self.assertEqual(str(val), "187")
def test_runtime(self):
val = self.d.expand("${${@'value of' + ' f'+'o'+'o'+'b'+'a'+'r'}}")
self.assertEqual(str(val), "187")
class TestMemoize(unittest.TestCase):
def test_memoized(self):
d = bb.data.init()
d.setVar("FOO", "bar")
self.assertTrue(d.getVar("FOO") is d.getVar("FOO"))
def test_not_memoized(self):
d1 = bb.data.init()
d2 = bb.data.init()
d1.setVar("FOO", "bar")
d2.setVar("FOO", "bar2")
self.assertTrue(d1.getVar("FOO") is not d2.getVar("FOO"))
def test_changed_after_memoized(self):
d = bb.data.init()
d.setVar("foo", "value of foo")
self.assertEqual(str(d.getVar("foo")), "value of foo")
d.setVar("foo", "second value of foo")
self.assertEqual(str(d.getVar("foo")), "second value of foo")
def test_same_value(self):
d = bb.data.init()
d.setVar("foo", "value of")
d.setVar("bar", "value of")
self.assertEqual(d.getVar("foo"),
d.getVar("bar"))
class TestConcat(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("FOO", "foo")
self.d.setVar("VAL", "val")
self.d.setVar("BAR", "bar")
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.assertEqual(self.d.getVar("TEST", True), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST", True), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
self.d.prependVar("TEST", "${FOO}:")
self.d.appendVar("TEST", ":val2")
self.d.appendVar("TEST", ":${BAR}")
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")
class TestOverrides(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("OVERRIDES", "foo:bar:local")
self.d.setVar("TEST", "testvalue")
def test_no_override(self):
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue")
def test_one_override(self):
self.d.setVar("TEST_bar", "testvalue2")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue2")
def test_multiple_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_local", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
class TestFlags(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("foo", "value of foo")
self.d.setVarFlag("foo", "flag1", "value of flag1")
self.d.setVarFlag("foo", "flag2", "value of flag2")
def test_setflag(self):
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), "value of flag2")
def test_delflag(self):
self.d.delVarFlag("foo", "flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), None)


@@ -1,191 +0,0 @@
#
# BitBake Tests for the Fetcher (fetch2/)
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import tempfile
import subprocess
import os
import bb
class FetcherTest(unittest.TestCase):
replaceuris = {
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "http://somewhere.org/somedir/")
: "http://somewhere.org/somedir/git2_git.invalid.infradead.org.mtd-utils.git.tar.gz",
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/\\2;protocol=http")
: "git://somewhere.org/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890", "git://someserver.org/bitbake", "git://git.openembedded.org/bitbake")
: "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890",
("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache/")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/somedir3")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://www.apache.org/dist", "http://archive.apache.org/dist")
: "http://archive.apache.org/dist/subversion/subversion-1.7.1.tar.bz2",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://.*/.*", "file:///somepath/downloads/")
: "file:///somepath/downloads/subversion-1.7.1.tar.bz2",
("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
: "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
#Renaming files doesn't work
#("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
#("file://sstate-xyz.tgz", "file://.*/.*", "file:///somewhere/1234/sstate-cache") : "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
}
mirrorvar = "http://.*/.* file:///somepath/downloads/ \n" \
"git://someserver.org/bitbake git://git.openembedded.org/bitbake \n" \
"https://.*/.* file:///someotherpath/downloads/ \n" \
"http://.*/.* file:///someotherpath/downloads/ \n"
def setUp(self):
self.d = bb.data.init()
self.tempdir = tempfile.mkdtemp()
self.dldir = os.path.join(self.tempdir, "download")
os.mkdir(self.dldir)
self.d.setVar("DL_DIR", self.dldir)
self.unpackdir = os.path.join(self.tempdir, "unpacked")
os.mkdir(self.unpackdir)
persistdir = os.path.join(self.tempdir, "persistdata")
self.d.setVar("PERSISTENT_DIR", persistdir)
def tearDown(self):
bb.utils.prunedir(self.tempdir)
def test_fetch(self):
fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
self.d.setVar("BB_NO_NETWORK", "1")
fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
def test_fetch_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def test_fetch_premirror(self):
self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
def gitfetcher(self, url1, url2):
def checkrevision(self, fetcher):
fetcher.unpack(self.unpackdir)
revision = subprocess.check_output("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git").strip()
self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
fetcher = bb.fetch.Fetch([url1], self.d)
fetcher.download()
checkrevision(self, fetcher)
# Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
bb.utils.prunedir(self.dldir + "/git2/")
bb.utils.prunedir(self.unpackdir)
self.d.setVar("BB_NO_NETWORK", "1")
fetcher = bb.fetch.Fetch([url2], self.d)
fetcher.download()
checkrevision(self, fetcher)
def test_gitfetch(self):
url1 = url2 = "git://git.openembedded.org/bitbake"
self.gitfetcher(url1, url2)
def test_gitfetch_premirror(self):
url1 = "git://git.openembedded.org/bitbake"
url2 = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
self.gitfetcher(url1, url2)
def test_gitfetch_premirror2(self):
url1 = url2 = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
self.gitfetcher(url1, url2)
def test_gitfetch_premirror3(self):
realurl = "git://git.openembedded.org/bitbake"
dummyurl = "git://someserver.org/bitbake"
self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
os.chdir(self.tempdir)
subprocess.check_output("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
self.gitfetcher(dummyurl, dummyurl)
def test_urireplace(self):
for k, v in self.replaceuris.items():
ud = bb.fetch.FetchData(k[0], self.d)
ud.setup_localpath(self.d)
mirrors = bb.fetch2.mirror_from_string("%s %s" % (k[1], k[2]))
newuris, uds = bb.fetch2.build_mirroruris(ud, mirrors, self.d)
self.assertEqual([v], newuris)
def test_urilist1(self):
fetcher = bb.fetch.FetchData("http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///somepath/downloads/bitbake-1.0.tar.gz', 'file:///someotherpath/downloads/bitbake-1.0.tar.gz'])
def test_urilist2(self):
# Catch https:// -> files:// bug
fetcher = bb.fetch.FetchData("https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///someotherpath/downloads/bitbake-1.0.tar.gz'])
class URLHandle(unittest.TestCase):
datatable = {
"http://www.google.com/index.html" : ('http', 'www.google.com', '/index.html', '', '', {}),
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
}
def test_decodeurl(self):
for k, v in self.datatable.items():
result = bb.fetch.decodeurl(k)
self.assertEqual(result, v)
def test_encodeurl(self):
for k, v in self.datatable.items():
result = bb.fetch.encodeurl(v)
self.assertEqual(result, k)


@@ -1,51 +0,0 @@
#
# BitBake Tests for utils.py
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import unittest
import bb
class VerCmpString(unittest.TestCase):
def test_vercmpstring(self):
result = bb.utils.vercmp_string('1', '2')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('2', '1')
self.assertTrue(result > 0)
result = bb.utils.vercmp_string('1', '1.0')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1', '1.1')
self.assertTrue(result < 0)
result = bb.utils.vercmp_string('1.1', '1_p2')
self.assertTrue(result < 0)
def test_explode_dep_versions(self):
correctresult = {"foo" : ["= 1.10"]}
result = bb.utils.explode_dep_versions2("foo (= 1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo (=1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( = 1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( =1.10)")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( = 1.10 )")
self.assertEqual(result, correctresult)
result = bb.utils.explode_dep_versions2("foo ( =1.10 )")
self.assertEqual(result, correctresult)


@@ -1,98 +0,0 @@
# tinfoil: a simple wrapper around cooker for bitbake-based command-line utilities
#
# Copyright (C) 2012 Intel Corporation
# Copyright (C) 2011 Mentor Graphics Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import warnings
import os
import sys
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2
class Tinfoil:
def __init__(self):
# Needed to avoid deprecation warnings with python 2.6
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Set up logging
self.logger = logging.getLogger('BitBake')
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
self.logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
self.config = TinfoilConfig(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
def register_idle_function(self, function, data):
pass
def parseRecipes(self):
sys.stderr.write("Parsing recipes..")
self.logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
self.logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
def prepare(self, config_only = False):
if not self.cooker_data:
if config_only:
self.cooker.parseConfiguration()
self.cooker_data = self.cooker.status
else:
self.parseRecipes()
class TinfoilConfig(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(TinfoilConfig, self).__getattribute__(attribute)
except AttributeError:
return None
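For reference, a hedged sketch of how the Tinfoil wrapper shown above is typically consumed from a command-line script; the variable queried here is only an illustrative example.

    import bb.tinfoil

    tinfoil = bb.tinfoil.Tinfoil()
    # Parse just the configuration (no recipe parsing) and query the datastore.
    tinfoil.prepare(config_only=True)
    layers = tinfoil.config_data.getVar("BBLAYERS", True)
    print("Configured layers: %s" % layers)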


@@ -23,13 +23,11 @@
import gtk
import pango
import gobject
import bb.process
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText, HobButton, HobInfoButton
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText
from bb.ui.crumbs.runningbuild import RunningBuildTreeView
from bb.ui.crumbs.runningbuild import BuildFailureTreeView
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hobcolor import HobColors
class BuildConfigurationTreeView(gtk.TreeView):
def __init__ (self):
@@ -98,12 +96,11 @@ class BuildConfigurationTreeView(gtk.TreeView):
for path in src_config_info.layers:
import os, os.path
if os.path.exists(path):
branch = bb.process.run('cd %s; git branch | grep "^* " | tr -d "* "' % path)[0]
if branch.startswith("fatal:"):
branch = "(unknown)"
if branch:
branch = branch.strip('\n')
f = os.popen('cd %s; git branch 2>&1 | grep "^* " | tr -d "* "' % path)
if f:
branch = f.readline().lstrip('\n').rstrip('\n')
vars.append(self.set_vars("Branch:", branch))
f.close()
break
self.set_config_model(vars)
@@ -147,7 +144,7 @@ class BuildDetailsPage (HobPage):
self.scrolled_view_config = gtk.ScrolledWindow ()
self.scrolled_view_config.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_config.add(self.config_tv)
self.notebook.append_page(self.scrolled_view_config, "Build configuration")
self.notebook.append_page(self.scrolled_view_config, gtk.Label("Build configuration"))
self.failure_tv = BuildFailureTreeView()
self.failure_model = self.builder.handler.build.model.failure_model()
@@ -155,19 +152,19 @@ class BuildDetailsPage (HobPage):
self.scrolled_view_failure = gtk.ScrolledWindow ()
self.scrolled_view_failure.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_failure.add(self.failure_tv)
self.notebook.append_page(self.scrolled_view_failure, "Issues")
self.notebook.append_page(self.scrolled_view_failure, gtk.Label("Issues"))
self.build_tv = RunningBuildTreeView(readonly=True, hob=True)
self.build_tv.set_model(self.builder.handler.build.model)
self.scrolled_view_build = gtk.ScrolledWindow ()
self.scrolled_view_build.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_build.add(self.build_tv)
self.notebook.append_page(self.scrolled_view_build, "Log")
self.notebook.append_page(self.scrolled_view_build, gtk.Label("Log"))
self.builder.handler.build.model.connect_after("row-changed", self.scroll_to_present_row, self.scrolled_view_build.get_vadjustment(), self.build_tv)
self.button_box = gtk.HBox(False, 6)
self.back_button = HobAltButton('<< Back')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_start(self.back_button, expand=False, fill=False)
@@ -201,133 +198,6 @@ class BuildDetailsPage (HobPage):
for child in children:
self.remove(child)
def add_build_fail_top_bar(self, actions, log_file=None):
primary_action = "Edit %s" % actions
self.notebook.set_page("Issues")
color = HobColors.ERROR
build_fail_top = gtk.EventBox()
#build_fail_top.set_size_request(-1, 200)
build_fail_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_fail_tab = gtk.Table(14, 46, True)
build_fail_top.add(build_fail_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ERROR_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_fail_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_fail_tab.attach(label, 4, 26, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='medium'>Check the \"Issues\" information for more details</span>")
build_fail_tab.attach(label, 4, 40, 4, 9)
# create button 'Edit packages'
action_button = HobButton(primary_action)
#action_button.set_size_request(-1, 40)
action_button.set_tooltip_text("Edit the %s parameters" % actions)
action_button.connect('clicked', self.failure_primary_action_button_clicked_cb, primary_action)
build_fail_tab.attach(action_button, 4, 13, 9, 12)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
build_fail_tab.attach(open_log_button, 14, 23, 9, 12)
attach_pos = (24 if log_file else 14)
file_bug_button = HobAltButton('File a bug')
file_bug_button.set_relief(gtk.RELIEF_HALF)
file_bug_button.set_tooltip_text("Open the Yocto Project bug tracking website")
file_bug_button.connect('clicked', self.failure_activate_file_bug_link_cb)
build_fail_tab.attach(file_bug_button, attach_pos, attach_pos + 9, 9, 12)
return build_fail_top
def show_fail_page(self, title):
self._remove_all_widget()
self.title = "Hob cannot build your %s" % title
self.build_fail_bar = self.add_build_fail_top_bar(title, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_fail_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.back_button.hide()
def add_build_stop_top_bar(self, action, log_file=None):
color = HobColors.LIGHT_GRAY
build_stop_top = gtk.EventBox()
#build_stop_top.set_size_request(-1, 200)
build_stop_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_stop_top.set_flags(gtk.CAN_DEFAULT)
build_stop_top.grab_default()
build_stop_tab = gtk.Table(11, 46, True)
build_stop_top.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INFO_HOVER_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_stop_tab.attach(label, 4, 26, 0, 6)
action_button = HobButton("Edit %s" % action)
action_button.set_size_request(-1, 40)
if action == "image":
action_button.set_tooltip_text("Edit the image parameters")
elif action == "recipes":
action_button.set_tooltip_text("Edit the included recipes")
elif action == "packages":
action_button.set_tooltip_text("Edit the included packages")
action_button.connect('clicked', self.stop_primary_action_button_clicked_cb, action)
build_stop_tab.attach(action_button, 4, 13, 6, 9)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
build_stop_tab.attach(open_log_button, 14, 23, 6, 9)
attach_pos = (24 if log_file else 14)
build_button = HobAltButton("Build new image")
#build_button.set_size_request(-1, 40)
build_button.set_tooltip_text("Create a new image from scratch")
build_button.connect('clicked', self.new_image_button_clicked_cb)
build_stop_tab.attach(build_button, attach_pos, attach_pos + 9, 6, 9)
return build_stop_top, action_button
def show_stop_page(self, action):
self._remove_all_widget()
self.title = "Build stopped"
self.build_stop_bar, action_button = self.add_build_stop_top_bar(action, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_stop_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.back_button.hide()
return action_button
def show_page(self, step):
self._remove_all_widget()
if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
@@ -361,9 +231,6 @@ class BuildDetailsPage (HobPage):
def back_button_clicked_cb(self, button):
self.builder.show_configuration()
def new_image_button_clicked_cb(self, button):
self.builder.reset()
def show_back_button(self):
self.back_button.show()
@@ -384,26 +251,3 @@ class BuildDetailsPage (HobPage):
def show_configurations(self, configurations, params):
self.config_tv.show(configurations, params)
def failure_primary_action_button_clicked_cb(self, button, action):
if "Edit recipes" in action:
self.builder.show_recipes()
elif "Edit packages" in action:
self.builder.show_packages()
elif "Edit image" in action:
self.builder.show_configuration()
def stop_primary_action_button_clicked_cb(self, button, action):
if "recipes" in action:
self.builder.show_recipes()
elif "packages" in action:
self.builder.show_packages(ask=False)
elif "image" in action:
self.builder.show_configuration()
def open_log_button_clicked_cb(self, button, log_file):
if log_file:
os.system("xdg-open /%s" % log_file)
def failure_activate_file_bug_link_cb(self, button):
button.child.emit('activate-link', "http://bugzilla.yoctoproject.org")

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -22,6 +22,7 @@
import gobject
import logging
from bb.ui.crumbs.runningbuild import RunningBuild
from bb.ui.crumbs.hobwidget import hcc
class HobHandler(gobject.GObject):
@@ -43,7 +44,7 @@ class HobHandler(gobject.GObject):
(gobject.TYPE_STRING,)),
"sanity-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_INT)),
(gobject.TYPE_STRING,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
@@ -101,13 +102,9 @@ class HobHandler(gobject.GObject):
def runCommand(self, commandline):
try:
result = self.server.runCommand(commandline)
result_str = str(result)
if (result_str.startswith("Busy (") or
result_str == "No such command"):
raise Exception('%s has failed with output "%s". ' %
(str(commandline), result_str) +
"We recommend that you restart Hob.")
result, error = self.server.runCommand(commandline)
if error:
raise Exception("Error running command '%s': %s" % (commandline, error))
return result
except Exception as e:
self.commands_async = []
@@ -163,16 +160,10 @@ class HobHandler(gobject.GObject):
targets.append(self.toolchain)
self.runCommand(["buildTargets", targets, self.default_task])
def display_error(self):
self.clear_busy()
self.emit("command-failed", self.error_msg)
self.error_msg = ""
if self.building:
self.building = False
def handle_event(self, event):
if not event:
return
if self.building:
self.current_phase = "building"
self.build.handle_event(event)
@@ -186,14 +177,13 @@ class HobHandler(gobject.GObject):
self.run_next_command()
elif isinstance(event, bb.event.SanityCheckFailed):
self.emit("sanity-failed", event._msg, event._network_error)
self.emit("sanity-failed", event._msg)
elif isinstance(event, logging.LogRecord):
if not self.building:
if event.levelno >= logging.ERROR:
formatter = bb.msg.BBLogFormatter()
msg = formatter.format(event)
self.error_msg += msg + '\n'
if event.levelno >= logging.ERROR:
formatter = bb.msg.BBLogFormatter()
formatter.format(event)
self.error_msg += event.message + '\n'
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.current_phase = "data generation"
@@ -225,7 +215,11 @@ class HobHandler(gobject.GObject):
self.run_next_command()
elif isinstance(event, bb.command.CommandFailed):
self.commands_async = []
self.display_error()
self.clear_busy()
self.emit("command-failed", self.error_msg)
self.error_msg = ""
if self.building:
self.building = False
elif isinstance(event, (bb.event.ParseStarted,
bb.event.CacheLoadStarted,
bb.event.TreeDataPreparationStarted,
@@ -255,9 +249,6 @@ class HobHandler(gobject.GObject):
message["title"] = "Parsing recipes: "
self.emit("parsing-completed", message)
if self.error_msg and not self.commands_async:
self.display_error()
return
def init_cooker(self):
@@ -303,8 +294,8 @@ class HobHandler(gobject.GObject):
def set_sstate_dir(self, directory):
self.runCommand(["setVariable", "SSTATE_DIR_HOB", directory])
def set_sstate_mirrors(self, url):
self.runCommand(["setVariable", "SSTATE_MIRRORS_HOB", url])
def set_sstate_mirror(self, url):
self.runCommand(["setVariable", "SSTATE_MIRROR_HOB", url])
def set_extra_size(self, image_extra_size):
self.runCommand(["setVariable", "IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size)])
@@ -332,6 +323,9 @@ class HobHandler(gobject.GObject):
def set_ftp_proxy(self, ftp_proxy):
self.runCommand(["setVariable", "ftp_proxy", ftp_proxy])
def set_all_proxy(self, all_proxy):
self.runCommand(["setVariable", "all_proxy", all_proxy])
def set_git_proxy(self, host, port):
self.runCommand(["setVariable", "GIT_PROXY_HOST", host])
self.runCommand(["setVariable", "GIT_PROXY_PORT", port])
@@ -361,7 +355,7 @@ class HobHandler(gobject.GObject):
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_GNERATE_TGTS)
self.run_next_command(self.GENERATE_RECIPES)
def generate_packages(self, tgts, default_task="build"):
targets = []
targets.extend(tgts)
@@ -404,9 +398,6 @@ class HobHandler(gobject.GObject):
def reset_build(self):
self.build.reset()
def get_logfile(self):
return self.server.runCommand(["getVariable", "BB_CONSOLELOG"])
def _remove_redundant(self, string):
ret = []
for i in string.split():
@@ -427,7 +418,7 @@ class HobHandler(gobject.GObject):
params["distro"] = self.runCommand(["getVariable", "DISTRO"]) or "defaultsetup"
params["pclass"] = self.runCommand(["getVariable", "PACKAGE_CLASSES"]) or ""
params["sstatedir"] = self.runCommand(["getVariable", "SSTATE_DIR"]) or ""
params["sstatemirror"] = self.runCommand(["getVariable", "SSTATE_MIRRORS"]) or ""
params["sstatemirror"] = self.runCommand(["getVariable", "SSTATE_MIRROR"]) or ""
num_threads = self.runCommand(["getCpuCount"])
if not num_threads:
@@ -509,7 +500,6 @@ class HobHandler(gobject.GObject):
params["runnable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_IMAGE_TYPES"]) or "")
params["runnable_machine_patterns"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_MACHINE_PATTERNS"]) or "")
params["deployable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "DEPLOYABLE_IMAGE_TYPES"]) or "")
params["kernel_image_type"] = self.runCommand(["getVariable", "KERNEL_IMAGETYPE"]) or ""
params["tmpdir"] = self.runCommand(["getVariable", "TMPDIR"]) or ""
params["distro_version"] = self.runCommand(["getVariable", "DISTRO_VERSION"]) or ""
params["target_os"] = self.runCommand(["getVariable", "TARGET_OS"]) or ""
@@ -525,10 +515,9 @@ class HobHandler(gobject.GObject):
params["http_proxy"] = self.runCommand(["getVariable", "http_proxy"]) or ""
params["ftp_proxy"] = self.runCommand(["getVariable", "ftp_proxy"]) or ""
params["https_proxy"] = self.runCommand(["getVariable", "https_proxy"]) or ""
params["all_proxy"] = self.runCommand(["getVariable", "all_proxy"]) or ""
params["cvs_proxy_host"] = self.runCommand(["getVariable", "CVS_PROXY_HOST"]) or ""
params["cvs_proxy_port"] = self.runCommand(["getVariable", "CVS_PROXY_PORT"]) or ""
params["image_white_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_WHITE_PATTERN"]) or ""
params["image_black_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_BLACK_PATTERN"]) or ""
return params


@@ -34,7 +34,7 @@ class PackageListModel(gtk.TreeStore):
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC, COL_FADE_INC, COL_FONT) = range(13)
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC, COL_FADE_INC) = range(12)
__gsignals__ = {
"package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
@@ -42,7 +42,7 @@ class PackageListModel(gtk.TreeStore):
()),
}
__toolchain_required_packages__ = ["packagegroup-core-standalone-sdk-target", "packagegroup-core-standalone-sdk-target-dbg"]
__toolchain_required_packages__ = ["task-core-standalone-sdk-target", "task-core-standalone-sdk-target-dbg"]
def __init__(self):
@@ -65,8 +65,7 @@ class PackageListModel(gtk.TreeStore):
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING)
gobject.TYPE_BOOLEAN)
"""
@@ -145,12 +144,6 @@ class PackageListModel(gtk.TreeStore):
self.pkg_path = {}
self.rprov_pkg = {}
def getpkgvalue(pkgdict, key, pkgname, defaultval = None):
value = pkgdict.get('%s_%s' % (key, pkgname), None)
if not value:
value = pkgdict.get(key, defaultval)
return value
for pkginfo in pkginfolist:
pn = pkginfo['PN']
pv = pkginfo['PV']
@@ -163,24 +156,25 @@ class PackageListModel(gtk.TreeStore):
self.COL_INC, False)
self.pn_path[pn] = self.get_path(pniter)
# PKG is always present
pkg = pkginfo['PKG']
pkgv = getpkgvalue(pkginfo, 'PKGV', pkg)
pkgr = getpkgvalue(pkginfo, 'PKGR', pkg)
# PKGSIZE is artificial; it is always overridden by the package-specific value if present
pkgsize = pkginfo.get('PKGSIZE_%s' % pkg, "0")
# PKG_%s is the renamed version
pkg_rename = pkginfo.get('PKG_%s' % pkg, "")
# The rest may be overridden or not
section = getpkgvalue(pkginfo, 'SECTION', pkg, "")
summary = getpkgvalue(pkginfo, 'SUMMARY', pkg, "")
rdep = getpkgvalue(pkginfo, 'RDEPENDS', pkg, "")
rrec = getpkgvalue(pkginfo, 'RRECOMMENDS', pkg, "")
rprov = getpkgvalue(pkginfo, 'RPROVIDES', pkg, "")
pkgv = pkginfo['PKGV']
pkgr = pkginfo['PKGR']
pkgsize = pkginfo['PKGSIZE_%s' % pkg] if 'PKGSIZE_%s' % pkg in pkginfo.keys() else "0"
pkg_rename = pkginfo['PKG_%s' % pkg] if 'PKG_%s' % pkg in pkginfo.keys() else ""
section = pkginfo['SECTION_%s' % pkg] if 'SECTION_%s' % pkg in pkginfo.keys() else ""
summary = pkginfo['SUMMARY_%s' % pkg] if 'SUMMARY_%s' % pkg in pkginfo.keys() else ""
rdep = pkginfo['RDEPENDS_%s' % pkg] if 'RDEPENDS_%s' % pkg in pkginfo.keys() else ""
rrec = pkginfo['RRECOMMENDS_%s' % pkg] if 'RRECOMMENDS_%s' % pkg in pkginfo.keys() else ""
rprov = pkginfo['RPROVIDES_%s' % pkg] if 'RPROVIDES_%s' % pkg in pkginfo.keys() else ""
for i in rprov.split():
self.rprov_pkg[i] = pkg
allow_empty = getpkgvalue(pkginfo, 'ALLOW_EMPTY', pkg, "")
if 'ALLOW_EMPTY_%s' % pkg in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY_%s' % pkg]
elif 'ALLOW_EMPTY' in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY']
else:
allow_empty = ""
if pkgsize == "0" and not allow_empty:
continue
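The getpkgvalue() helper shown in this hunk centralises the per-package override lookup: it tries the package-specific key first (e.g. 'SUMMARY_<pkg>'), then the recipe-wide key, then a default. A standalone sketch of that resolution order, with an invented pkginfo dictionary:

    def getpkgvalue(pkgdict, key, pkgname, defaultval=None):
        # Prefer the package-specific override, e.g. SUMMARY_foo-dev
        value = pkgdict.get('%s_%s' % (key, pkgname))
        if not value:
            # Fall back to the recipe-wide value, then to the default
            value = pkgdict.get(key, defaultval)
        return value

    pkginfo = {
        'SUMMARY': 'Example recipe',
        'SUMMARY_foo-dev': 'Development files for foo',
    }
    print(getpkgvalue(pkginfo, 'SUMMARY', 'foo-dev'))      # Development files for foo
    print(getpkgvalue(pkginfo, 'SUMMARY', 'foo-doc'))      # Example recipe
    print(getpkgvalue(pkginfo, 'RDEPENDS', 'foo-dev', '')) # (empty string)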
@@ -195,7 +189,7 @@ class PackageListModel(gtk.TreeStore):
self.COL_SEC, section, self.COL_SUM, summary,
self.COL_RDEP, rdep + ' ' + rrec,
self.COL_RPROV, rprov, self.COL_SIZE, size,
self.COL_BINB, "", self.COL_INC, False, self.COL_FONT, '10')
self.COL_BINB, "", self.COL_INC, False)
"""
Check whether the item at item_path is included or not
@@ -337,13 +331,13 @@ class PackageListModel(gtk.TreeStore):
set_selected_packages(), some packages will not be set included.
Return the un-set packages list.
"""
def set_selected_packages(self, packagelist, user_selected=False):
def set_selected_packages(self, packagelist):
left = []
binb = 'User Selected' if user_selected else ''
for pn in packagelist:
if pn in self.pkg_path.keys():
path = self.pkg_path[pn]
self.include_item(item_path=path, binb=binb)
self.include_item(item_path=path,
binb="User Selected")
else:
left.append(pn)
@@ -359,7 +353,7 @@ class PackageListModel(gtk.TreeStore):
while child_it:
if self.get_value(child_it, self.COL_INC):
binb = self.get_value(child_it, self.COL_BINB)
if binb == "User Selected":
if not binb or binb == "User Selected":
name = self.get_value(child_it, self.COL_NAME)
packagelist.append(name)
child_it = self.iter_next(child_it)
@@ -461,7 +455,7 @@ class RecipeListModel(gtk.ListStore):
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC) = range(12)
__custom_image__ = "Create your own image"
__dummy_image__ = "Create your own image"
__gsignals__ = {
"recipe-selection-changed" : (gobject.SIGNAL_RUN_LAST,
@@ -526,24 +520,17 @@ class RecipeListModel(gtk.ListStore):
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == True) and (val2 == False))
def include_item_sort_func(self, model, iter1, iter2):
val1 = model.get_value(iter1, RecipeListModel.COL_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == False) and (val2 == True))
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
"""
def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=True):
def tree_model(self, filter, excluded_items_ahead=False):
model = self.filter_new()
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
if excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func)
elif included_items_ahead:
sort.set_default_sort_func(self.include_item_sort_func)
else:
sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
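tree_model() above builds a filtered view of the store and then a sorted wrapper that can float included (or excluded) items to the top. The ordering logic can be shown without gtk at all: filter a list of rows by type, then sort so included rows come first. This is only an illustration of that logic, not the gtk.TreeModelSort API.

    rows = [
        {"name": "core-image-minimal", "type": "image",  "inc": False},
        {"name": "busybox",            "type": "recipe", "inc": True},
        {"name": "dropbear",           "type": "recipe", "inc": False},
        {"name": "ncurses",            "type": "recipe", "inc": True},
    ]

    def tree_model(rows, filter, included_items_ahead=True):
        # Keep only rows whose "type" matches the filter, mirroring the
        # visible-func used above.
        visible = [r for r in rows if r["type"] in filter["type"]]
        if included_items_ahead:
            # Included rows first, then alphabetical by name.
            return sorted(visible, key=lambda r: (not r["inc"], r["name"]))
        return sorted(visible, key=lambda r: r["name"])

    for r in tree_model(rows, {"type": ["recipe"]}):
        print(r["name"], r["inc"])
    # busybox True / ncurses True / dropbear False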
@@ -577,13 +564,14 @@ class RecipeListModel(gtk.ListStore):
self.clear()
# dummy image for prompt
self.set(self.append(), self.COL_NAME, self.__custom_image__,
self.COL_DESC, "Use 'Edit image' to customize recipes and packages " \
"to be included in your image ",
self.set(self.append(), self.COL_NAME, self.__dummy_image__,
self.COL_DESC, "Use the 'View recipes' and 'View packages' " \
"options to select what you want to include " \
"in your image.",
self.COL_LIC, "", self.COL_GROUP, "",
self.COL_DEPS, "", self.COL_BINB, "",
self.COL_TYPE, "image", self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__custom_image__)
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__dummy_image__)
for item in event_model["pn"]:
name = item
@@ -595,8 +583,8 @@ class RecipeListModel(gtk.ListStore):
depends = event_model["depends"].get(item, []) + event_model["rdepends-pn"].get(item, [])
if ('packagegroup.bbclass' in " ".join(inherits)):
atype = 'packagegroup'
if ('task-' in name):
atype = 'task'
elif ('image.bbclass' in " ".join(inherits)):
if name != "hob-image":
atype = 'image'
@@ -672,10 +660,6 @@ class RecipeListModel(gtk.ListStore):
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name, image_contents=image_contents)
dep_bin = self[item_path][self.COL_BINB].split(', ')
if self[item_path][self.COL_NAME] in dep_bin:
dep_bin.remove(self[item_path][self.COL_NAME])
self[item_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
def exclude_item(self, item_path):
if not self.path_included(item_path):


@@ -38,7 +38,6 @@ class HobPage (gtk.VBox):
self.title = "Hob -- Image Creator"
else:
self.title = title
self.title_label = gtk.Label()
self.box_group_area = gtk.VBox(False, 12)
self.box_group_area.set_size_request(self.builder_width - 73 - 73, self.builder_height - 88 - 15 - 15)
@@ -47,9 +46,6 @@ class HobPage (gtk.VBox):
self.group_align.add(self.box_group_area)
self.box_group_area.set_homogeneous(False)
def set_title(self, title):
self.title = title
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
def add_onto_top_bar(self, widget = None, padding = 0):
# the top button occupies 1/7 of the page height
@@ -62,9 +58,9 @@ class HobPage (gtk.VBox):
hbox = gtk.HBox()
self.title_label = gtk.Label()
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
hbox.pack_start(self.title_label, expand=False, fill=False, padding=20)
label = gtk.Label()
label.set_markup("<span size='x-large'>%s</span>" % self.title)
hbox.pack_start(label, expand=False, fill=False, padding=20)
if widget:
# add the widget in the event box


@@ -23,7 +23,6 @@ import os
import os.path
import sys
import pango, pangocairo
import cairo
import math
from bb.ui.crumbs.hobcolor import HobColors
@@ -63,6 +62,34 @@ class hic:
ICON_INDI_TICK_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/tick.png'))
ICON_INDI_INFO_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/info.png'))
class hcc:
SUPPORTED_IMAGE_TYPES = {
"jffs2" : ["jffs2"],
"sum.jffs2" : ["sum.jffs2"],
"cramfs" : ["cramfs"],
"ext2" : ["ext2"],
"ext2.gz" : ["ext2.gz"],
"ext2.bz2" : ["ext2.bz2"],
"ext3" : ["ext3"],
"ext3.gz" : ["ext3.gz"],
"ext2.lzma" : ["ext2.lzma"],
"btrfs" : ["btrfs"],
"live" : ["hddimg", "iso"],
"squashfs" : ["squashfs"],
"squashfs-lzma" : ["squashfs-lzma"],
"ubi" : ["ubi"],
"tar" : ["tar"],
"tar.gz" : ["tar.gz"],
"tar.bz2" : ["tar.bz2"],
"tar.xz" : ["tar.xz"],
"cpio" : ["cpio"],
"cpio.gz" : ["cpio.gz"],
"cpio.xz" : ["cpio.xz"],
"vmdk" : ["vmdk"],
"cpio.lzma" : ["cpio.lzma"],
}
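SUPPORTED_IMAGE_TYPES maps each fstype a user can pick to the file suffixes that type actually produces; "live", for instance, yields both "hddimg" and "iso". A small sketch of how such a table can be used to match generated files (the filenames below are invented for illustration):

    SUPPORTED_IMAGE_TYPES = {
        "ext3": ["ext3"],
        "live": ["hddimg", "iso"],
        "tar.bz2": ["tar.bz2"],
    }

    def files_for_type(image_type, filenames):
        # Return the generated files whose suffix belongs to image_type.
        suffixes = SUPPORTED_IMAGE_TYPES.get(image_type, [])
        return [f for f in filenames
                if any(f.endswith("." + s) for s in suffixes)]

    built = ["core-image-minimal-qemux86.ext3",
             "core-image-minimal-qemux86.hddimg",
             "core-image-minimal-qemux86.iso"]
    print(files_for_type("live", built))
    # ['core-image-minimal-qemux86.hddimg', 'core-image-minimal-qemux86.iso']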
class HobViewTable (gtk.VBox):
"""
A VBox to contain the table for different recipe views and package view
@@ -92,7 +119,6 @@ class HobViewTable (gtk.VBox):
self.table_tree.set_headers_clickable(True)
self.table_tree.set_enable_search(True)
self.table_tree.set_rules_hint(True)
self.table_tree.set_enable_tree_lines(True)
self.table_tree.get_selection().set_mode(gtk.SELECTION_SINGLE)
self.toggle_columns = []
self.table_tree.connect("row-activated", self.row_activated_cb)
@@ -114,8 +140,6 @@ class HobViewTable (gtk.VBox):
cell = gtk.CellRendererText()
col.pack_start(cell, True)
col.set_attributes(cell, text=column['col_id'])
if 'col_t_id' in column.keys():
col.add_attribute(cell, 'font', column['col_t_id'])
elif column['col_style'] == 'check toggle':
cell = HobCellRendererToggle()
cell.set_property('activatable', True)
@@ -125,8 +149,6 @@ class HobViewTable (gtk.VBox):
col.pack_end(cell, True)
col.set_attributes(cell, active=column['col_id'])
self.toggle_columns.append(column['col_name'])
if 'col_group' in column.keys():
col.set_cell_data_func(cell, self.set_group_number_cb)
elif column['col_style'] == 'radio toggle':
cell = gtk.CellRendererToggle()
cell.set_property('activatable', True)
@@ -140,8 +162,6 @@ class HobViewTable (gtk.VBox):
cell = gtk.CellRendererText()
col.pack_start(cell, True)
col.set_cell_data_func(cell, self.display_binb_cb, column['col_id'])
if 'col_t_id' in column.keys():
col.add_attribute(cell, 'font', column['col_t_id'])
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
@@ -153,12 +173,7 @@ class HobViewTable (gtk.VBox):
# Just display the first item
if binb:
bin = binb.split(', ')
total_no = len(bin)
if total_no > 1 and bin[0] == "User Selected":
present_binb = bin[1] + ' (+' + str(total_no) + ')'
else:
present_binb = bin[0] + ' (+' + str(total_no) + ')'
cell.set_property('text', present_binb)
cell.set_property('text', bin[0])
else:
cell.set_property('text', "")
return True
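display_binb_cb renders the "Brought in by" column; one variant of the hunk shows only the first dependent, the other shows the first real dependent plus a "(+N)" total, skipping the "User Selected" marker when something else also pulled the package in. A plain-Python rendering of the counted variant's formatting rule:

    def format_binb(binb):
        # binb is a comma-separated "brought in by" list, possibly starting
        # with the "User Selected" marker.
        if not binb:
            return ""
        entries = binb.split(', ')
        total = len(entries)
        if total > 1 and entries[0] == "User Selected":
            return "%s (+%d)" % (entries[1], total)
        return "%s (+%d)" % (entries[0], total)

    print(format_binb("User Selected, busybox, dropbear"))  # busybox (+3)
    print(format_binb("busybox"))                           # busybox (+1)
    print(format_binb(""))                                  # (empty string)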
@@ -189,15 +204,6 @@ class HobViewTable (gtk.VBox):
def stop_cell_fadeinout_cb(self, ctrl, cell, tree):
self.emit("cell-fadeinout-stopped", ctrl, cell, tree)
def set_group_number_cb(self, col, cell, model, iter):
if model and (model.iter_parent(iter) == None):
cell.cell_attr["number_of_children"] = model.iter_n_children(iter)
else:
cell.cell_attr["number_of_children"] = 0
def connect_group_selection(self, cb_func):
self.table_tree.get_selection().connect("changed", cb_func)
"""
A method to calculate a softened value for the colour of a widget when in the
provided state.
@@ -219,7 +225,7 @@ def soften_color(widget, state=gtk.STATE_NORMAL):
color.blue = color.blue * blend + style.base[state].blue * (1.0 - blend)
return color.to_string()
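soften_color() blends a widget colour toward its style's base colour channel by channel: new = colour * blend + base * (1 - blend). A standalone sketch of that blend; the blend factor itself is defined outside the excerpt shown here, so 0.7 below is only an assumed value:

    def soften(rgb, base_rgb, blend=0.7):
        # Channel-wise linear blend, as in soften_color() above.
        # blend=0.7 is an assumed value; the real factor is set elsewhere.
        return tuple(c * blend + b * (1.0 - blend)
                     for c, b in zip(rgb, base_rgb))

    fg = (0.0, 0.0, 0.0)        # black text
    base = (1.0, 1.0, 1.0)      # white background
    print(soften(fg, base))     # (0.3, 0.3, 0.3) -- a softened grey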
class BaseHobButton(gtk.Button):
class HobButton(gtk.Button):
"""
A gtk.Button subclass which follows the visual design of Hob for primary
action buttons
@@ -233,33 +239,24 @@ class BaseHobButton(gtk.Button):
@staticmethod
def style_button(button):
style = button.get_style()
style = gtk.rc_get_style_by_paths(gtk.settings_get_default(), 'gtk-button', 'gtk-button', gobject.TYPE_NONE)
button_color = gtk.gdk.Color(HobColors.ORANGE)
button.modify_bg(gtk.STATE_NORMAL, button_color)
button.modify_bg(gtk.STATE_PRELIGHT, button_color)
button.modify_bg(gtk.STATE_SELECTED, button_color)
button.set_flags(gtk.CAN_DEFAULT)
button.grab_default()
# label = "<span size='x-large'><b>%s</b></span>" % gobject.markup_escape_text(button.get_label())
label = button.get_label()
label = "<span size='x-large'><b>%s</b></span>" % gobject.markup_escape_text(button.get_label())
button.set_label(label)
button.child.set_use_markup(True)
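style_button() turns the plain label into Pango markup ("<span size='x-large'><b>...</b></span>") and escapes it first so characters such as "&" stay valid markup. The helper below shows the same escape-and-wrap step, using xml.sax.saxutils.escape as a stand-in for gobject.markup_escape_text:

    from xml.sax.saxutils import escape

    def markup_label(text):
        # Escape &, < and > before embedding the text in Pango markup,
        # mirroring the markup escaping in the button styling above.
        return "<span size='x-large'><b>%s</b></span>" % escape(text)

    print(markup_label("Build & run image"))
    # <span size='x-large'><b>Build &amp; run image</b></span>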
class HobButton(BaseHobButton):
"""
A gtk.Button subclass which follows the visual design of Hob for primary
action buttons
label: the text to display as the button's label
"""
def __init__(self, label):
BaseHobButton.__init__(self, label)
HobButton.style_button(self)
class HobAltButton(BaseHobButton):
class HobAltButton(gtk.Button):
"""
A gtk.Button subclass which has no relief, and so is more discrete
"""
def __init__(self, label):
BaseHobButton.__init__(self, label)
gtk.Button.__init__(self, label)
HobAltButton.style_button(self)
"""
@@ -285,6 +282,14 @@ class HobAltButton(BaseHobButton):
button.set_label("<span size='large' color='%s'><b>%s</b></span>" % (colour, gobject.markup_escape_text(button.text)))
button.child.set_use_markup(True)
@staticmethod
def style_button(button):
button.text = button.get_label()
button.connect("state-changed", HobAltButton.desensitise_on_state_change_cb)
HobAltButton.set_text(button)
button.child.set_use_markup(True)
button.set_relief(gtk.RELIEF_NONE)
class HobImageButton(gtk.Button):
"""
A gtk.Button with an icon and two rows of text, the second of which is
@@ -337,8 +342,7 @@ class HobInfoButton(gtk.EventBox):
def __init__(self, tip_markup, parent=None):
gtk.EventBox.__init__(self)
self.image = gtk.Image()
self.image.set_from_file(
hic.ICON_INFO_DISPLAY_FILE)
self.image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
self.image.show()
self.add(self.image)
@@ -376,95 +380,363 @@ class HobInfoButton(gtk.EventBox):
def mouse_out_cb(self, widget, event):
self.image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
class HobIndicator(gtk.DrawingArea):
def __init__(self, count):
gtk.DrawingArea.__init__(self)
# Set no window for transparent background
self.set_has_window(False)
self.set_size_request(38,38)
# We need to pass through button clicks
self.add_events(gtk.gdk.BUTTON_PRESS_MASK | gtk.gdk.BUTTON_RELEASE_MASK)
class HobTabBar(gtk.DrawingArea):
__gsignals__ = {
"blank-area-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,
gobject.TYPE_INT,
gobject.TYPE_INT,
gobject.TYPE_INT,)),
self.connect('expose-event', self.expose)
"tab-switched" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,)),
}
self.count = count
self.color = HobColors.GRAY
def expose(self, widget, event):
if self.count and self.count > 0:
ctx = widget.window.cairo_create()
x, y, w, h = self.allocation
ctx.set_operator(cairo.OPERATOR_OVER)
ctx.set_source_color(gtk.gdk.color_parse(self.color))
ctx.translate(w/2, h/2)
ctx.arc(x, y, min(w,h)/2 - 2, 0, 2*math.pi)
ctx.fill_preserve()
layout = self.create_pango_layout(str(self.count))
textw, texth = layout.get_pixel_size()
x = (w/2)-(textw/2) + x
y = (h/2) - (texth/2) + y
ctx.move_to(x, y)
self.window.draw_layout(self.style.light_gc[gtk.STATE_NORMAL], int(x), int(y), layout)
def set_count(self, count):
self.count = count
def set_active(self, active):
if active:
self.color = HobColors.DEEP_RED
else:
self.color = HobColors.GRAY
class HobTabLabel(gtk.HBox):
def __init__(self, text, count=0):
gtk.HBox.__init__(self, False, 0)
self.indicator = HobIndicator(count)
self.indicator.show()
self.pack_end(self.indicator, False, False)
self.lbl = gtk.Label(text)
self.lbl.set_alignment(0.0, 0.5)
self.lbl.show()
self.pack_end(self.lbl, True, True, 6)
def set_count(self, count):
self.indicator.set_count(count)
def set_active(self, active=True):
self.indicator.set_active(active)
class HobNotebook(gtk.Notebook):
def __init__(self):
gtk.Notebook.__init__(self)
self.set_property('homogeneous', True)
gtk.DrawingArea.__init__(self)
self.children = []
self.pages = []
self.tab_width = 140
self.tab_height = 52
self.tab_x = 10
self.tab_y = 0
self.width = 500
self.height = 53
self.tab_w_ratio = 140 * 1.0/500
self.tab_h_ratio = 52 * 1.0/53
self.set_size_request(self.width, self.height)
self.current_child = None
self.font = self.get_style().font_desc
self.font.set_size(pango.SCALE * 13)
self.update_children_text_layout_and_bg_color()
self.blank_rectangle = None
self.tab_pressed = False
self.set_property('can-focus', True)
self.set_events(gtk.gdk.EXPOSURE_MASK | gtk.gdk.POINTER_MOTION_MASK |
gtk.gdk.BUTTON1_MOTION_MASK | gtk.gdk.BUTTON_PRESS_MASK |
gtk.gdk.BUTTON_RELEASE_MASK)
self.connect("expose-event", self.on_draw)
self.connect("button-press-event", self.button_pressed_cb)
self.connect("button-release-event", self.button_released_cb)
self.connect("query-tooltip", self.query_tooltip_cb)
self.show_all()
def button_released_cb(self, widget, event):
self.tab_pressed = False
self.queue_draw()
def button_pressed_cb(self, widget, event):
if event.type == gtk.gdk._2BUTTON_PRESS:
return
result = False
if self.is_focus() or event.type == gtk.gdk.BUTTON_PRESS:
x, y = event.get_coords()
# check which tab was clicked
for child in self.children:
if (child["x"] < x) and (x < child["x"] + self.tab_width) \
and (child["y"] < y) and (y < child["y"] + self.tab_height):
self.current_child = child
result = True
self.grab_focus()
break
# check whether the blank area should take focus
if (self.blank_rectangle) and (self.blank_rectangle.x > 0) and (self.blank_rectangle.y > 0):
if (self.blank_rectangle.x < x) and (x < self.blank_rectangle.x + self.blank_rectangle.width) \
and (self.blank_rectangle.y < y) and (y < self.blank_rectangle.y + self.blank_rectangle.height):
self.grab_focus()
if result == True:
page = self.current_child["toggled_page"]
self.emit("tab-switched", page)
self.tab_pressed = True
self.queue_draw()
def update_children_size(self):
# calculate the size of tabs
self.tab_width = int(self.width * self.tab_w_ratio)
self.tab_height = int(self.height * self.tab_h_ratio)
for i, child in enumerate(self.children):
child["x"] = self.tab_x + i * self.tab_width
child["y"] = self.tab_y
if self.blank_rectangle:
self.resize_blank_rectangle()
def resize_blank_rectangle(self):
width = self.width - self.tab_width * len(self.children) - self.tab_x
x = self.tab_x + self.tab_width * len(self.children)
hpadding = vpadding = 5
self.blank_rectangle = self.set_blank_size(x + hpadding, self.tab_y + vpadding,
width - 2 * hpadding, self.tab_height - 2 * vpadding)
def update_children_text_layout_and_bg_color(self):
style = self.get_style().copy()
color = style.base[gtk.STATE_NORMAL]
for child in self.children:
pangolayout = self.create_pango_layout(child["title"])
pangolayout.set_font_description(self.font)
child["title_layout"] = pangolayout
child["r"] = color.red
child["g"] = color.green
child["b"] = color.blue
def append_tab_child(self, title, page, tooltip=""):
num = len(self.children) + 1
self.tab_width = self.tab_width * len(self.children) / num
i = 0
for i, child in enumerate(self.children):
child["x"] = self.tab_x + i * self.tab_width
i += 1
x = self.tab_x + i * self.tab_width
y = self.tab_y
pangolayout = self.create_pango_layout(title)
pangolayout.set_font_description(self.font)
color = self.style.base[gtk.STATE_NORMAL]
new_one = {
"x" : x,
"y" : y,
"r" : color.red,
"g" : color.green,
"b" : color.blue,
"title_layout" : pangolayout,
"toggled_page" : page,
"title" : title,
"indicator_show" : False,
"indicator_number" : 0,
"tooltip_markup" : tooltip,
}
self.children.append(new_one)
if tooltip and (not self.props.has_tooltip):
self.props.has_tooltip = True
# set the default current child
if not self.current_child:
self.current_child = new_one
def on_draw(self, widget, event):
cr = widget.window.cairo_create()
self.width = self.allocation.width
self.height = self.allocation.height
self.update_children_size()
self.draw_background(cr)
self.draw_toggled_tab(cr)
for child in self.children:
if child["indicator_show"] == True:
self.draw_indicator(cr, child)
self.draw_tab_text(cr)
def draw_background(self, cr):
style = self.get_style()
if self.is_focus():
cr.set_source_color(style.base[gtk.STATE_SELECTED])
else:
cr.set_source_color(style.base[gtk.STATE_NORMAL])
y = 6
h = self.height - 6 - 1
gap = 1
w = self.children[0]["x"]
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(0, y, w - gap, h) # start rectangle
cr.fill()
cr.set_source_color(style.base[gtk.STATE_NORMAL])
cr.rectangle(w - gap, y, w, h) #first gap
cr.fill()
w = self.tab_width
for child in self.children:
x = child["x"]
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(x, y, w - gap, h) # tab rectangle
cr.fill()
cr.set_source_color(style.base[gtk.STATE_NORMAL])
cr.rectangle(x + w - gap, y, w, h) # gap
cr.fill()
cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
cr.rectangle(x + w, y, self.width - x - w, h) # last rectangle
cr.fill()
def draw_tab_text(self, cr):
style = self.get_style()
for child in self.children:
pangolayout = child["title_layout"]
if pangolayout:
fontw, fonth = pangolayout.get_pixel_size()
# center pos
off_x = (self.tab_width - fontw) / 2
off_y = (self.tab_height - fonth) / 2
x = child["x"] + off_x
y = child["y"] + off_y
if not child == self.current_child:
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), pangolayout, gtk.gdk.Color(HobColors.WHITE))
else:
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), pangolayout)
def draw_toggled_tab(self, cr):
if not self.current_child:
return
x = self.current_child["x"]
y = self.current_child["y"]
width = self.tab_width
height = self.tab_height
style = self.get_style()
color = style.base[gtk.STATE_NORMAL]
r = height / 10
if self.tab_pressed == True:
for xoff, yoff, c1, c2 in [(1, 0, HobColors.SLIGHT_DARK, HobColors.DARK), (2, 0, HobColors.GRAY, HobColors.LIGHT_GRAY)]:
cr.set_source_color(gtk.gdk.color_parse(c1))
cr.move_to(x + xoff, y + height + yoff)
cr.line_to(x + xoff, r + yoff)
cr.arc(x + r + xoff, y + r + yoff, r, math.pi, 1.5*math.pi)
cr.move_to(x + r + xoff, y + yoff)
cr.line_to(x + width - r + xoff, y + yoff)
cr.arc(x + width - r + xoff, y + r + yoff, r, 1.5*math.pi, 2*math.pi)
cr.stroke()
cr.set_source_color(gtk.gdk.color_parse(c2))
cr.move_to(x + width + xoff, r + yoff)
cr.line_to(x + width + xoff, y + height + yoff)
cr.line_to(x + xoff, y + height + yoff)
cr.stroke()
x = x + 2
y = y + 2
cr.set_source_rgba(color.red, color.green, color.blue, 1)
cr.move_to(x + r, y)
cr.line_to(x + width - r , y)
cr.arc(x + width - r, y + r, r, 1.5*math.pi, 2*math.pi)
cr.move_to(x + width, r)
cr.line_to(x + width, y + height)
cr.line_to(x, y + height)
cr.line_to(x, r)
cr.arc(x + r, y + r, r, math.pi, 1.5*math.pi)
cr.fill()
def draw_indicator(self, cr, child):
text = ("%d" % child["indicator_number"])
layout = self.create_pango_layout(text)
layout.set_font_description(self.font)
textw, texth = layout.get_pixel_size()
# draw the round background area
tab_x = child["x"]
tab_y = child["y"]
dest_w = int(32 * self.tab_w_ratio)
dest_h = int(32 * self.tab_h_ratio)
if dest_h < self.tab_height:
dest_w = dest_h
# x position is offset(tab_width*3/4 - icon_width/2) + start_pos(tab_x)
x = tab_x + self.tab_width * 3/4 - dest_w/2
y = tab_y + self.tab_height/2 - dest_h/2
r = min(dest_w, dest_h)/2
if not child == self.current_child:
color = cr.set_source_color(gtk.gdk.color_parse(HobColors.DEEP_RED))
else:
color = cr.set_source_color(gtk.gdk.color_parse(HobColors.GRAY))
# check whether the round background area can contain the text
back_round_can_contain_width = float(2 * r * 0.707)
if float(textw) > back_round_can_contain_width:
xoff = (textw - int(back_round_can_contain_width)) / 2
cr.move_to(x + r - xoff, y + r + r)
cr.arc((x + r - xoff), (y + r), r, 0.5*math.pi, 1.5*math.pi)
cr.fill() # left half round
cr.rectangle((x + r - xoff), y, 2 * xoff, 2 * r)
cr.fill() # center rectangle
cr.arc((x + r + xoff), (y + r), r, 1.5*math.pi, 0.5*math.pi)
cr.fill() # right half round
else:
cr.arc((x + r), (y + r), r, 0, 2*math.pi)
cr.fill()
# draw the number text
x = x + (dest_w/2)-(textw/2)
y = y + (dest_h/2) - (texth/2)
cr.move_to(x, y)
self.window.draw_layout(self.style.fg_gc[gtk.STATE_NORMAL], int(x), int(y), layout, gtk.gdk.Color(HobColors.WHITE))
def show_indicator_icon(self, child, number):
child["indicator_show"] = True
child["indicator_number"] = number
self.queue_draw()
def hide_indicator_icon(self, child):
child["indicator_show"] = False
self.queue_draw()
def set_blank_size(self, x, y, w, h):
if not self.blank_rectangle or self.blank_rectangle.x != x or self.blank_rectangle.width != w:
self.emit("blank-area-changed", x, y, w, h)
return gtk.gdk.Rectangle(x, y, w, h)
def query_tooltip_cb(self, widget, x, y, keyboardtip, tooltip):
if keyboardtip or (not tooltip):
return False
# check which tab the pointer is over
for child in self.children:
if (child["x"] < x) and (x < child["x"] + self.tab_width) \
and (child["y"] < y) and (y < child["y"] + self.tab_height):
tooltip.set_markup(child["tooltip_markup"])
return True
return False
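Both button_pressed_cb and query_tooltip_cb above perform the same rectangle hit test: an event at (x, y) belongs to the tab whose stored origin plus tab_width/tab_height encloses it. Factored out as a plain function (the child dictionaries below only mimic the ones append_tab_child builds):

    def hit_tab(children, x, y, tab_width, tab_height):
        # Return the tab whose rectangle contains (x, y), or None.
        for child in children:
            if (child["x"] < x < child["x"] + tab_width and
                    child["y"] < y < child["y"] + tab_height):
                return child
        return None

    tabs = [{"title": "Image configuration", "x": 10,  "y": 0},
            {"title": "Build log",           "x": 150, "y": 0}]
    hit = hit_tab(tabs, 170, 20, tab_width=140, tab_height=52)
    print(hit["title"] if hit else "no tab")   # Build log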
class HobNotebook(gtk.VBox):
def __init__(self):
gtk.VBox.__init__(self, False, 0)
self.notebook = gtk.Notebook()
self.notebook.set_property('homogeneous', True)
self.notebook.set_property('show-tabs', False)
self.tabbar = HobTabBar()
self.tabbar.connect("tab-switched", self.tab_switched_cb)
self.notebook.connect("page-added", self.page_added_cb)
self.notebook.connect("page-removed", self.page_removed_cb)
self.search = None
self.search_name = ""
self.connect("switch-page", self.page_changed_cb)
self.tb = gtk.Table(1, 100, False)
self.hbox= gtk.HBox(False, 0)
self.hbox.pack_start(self.tabbar, True, True)
self.tb.attach(self.hbox, 0, 100, 0, 1)
self.pack_start(self.tb, False, False)
self.pack_start(self.notebook)
self.show_all()
def page_changed_cb(self, nb, page, page_num):
for p, lbl in enumerate(self.pages):
if p == page_num:
lbl.set_active()
else:
lbl.set_active(False)
def append_page(self, child, tab_label, tab_tooltip=None):
label = HobTabLabel(tab_label)
if tab_tooltip:
label.set_tooltip_text(tab_tooltip)
label.set_active(False)
self.pages.append(label)
gtk.Notebook.append_page(self, child, label)
def append_page(self, child, tab_label):
self.notebook.set_current_page(self.notebook.append_page(child, tab_label))
def set_entry(self, name="Search:"):
for child in self.tb.get_children():
if child:
self.tb.remove(child)
hbox_entry = gtk.HBox(False, 0)
hbox_entry.show()
self.search = gtk.Entry()
self.search_name = name
style = self.search.get_style()
@@ -475,20 +747,59 @@ class HobNotebook(gtk.Notebook):
self.search.set_icon_from_stock(gtk.ENTRY_ICON_SECONDARY, gtk.STOCK_CLEAR)
self.search.connect("icon-release", self.set_search_entry_clear_cb)
self.search.show()
self.align = gtk.Alignment(xalign=1.0, yalign=0.7)
self.align.add(self.search)
self.align.show()
hbox_entry.pack_end(self.align, False, False)
self.tabbar.resize_blank_rectangle()
self.tb.attach(hbox_entry, 75, 100, 0, 1, xpadding=5)
self.tb.attach(self.hbox, 0, 100, 0, 1)
self.tabbar.connect("blank-area-changed", self.blank_area_resize_cb)
self.search.connect("focus-in-event", self.set_search_entry_editable_cb)
self.search.connect("focus-out-event", self.set_search_entry_reset_cb)
self.set_action_widget(self.search, gtk.PACK_END)
self.tb.show()
def show_indicator_icon(self, title, number):
for child in self.pages:
if child.lbl.get_label() == title:
child.set_count(number)
for child in self.tabbar.children:
if child["toggled_page"] == -1:
continue
if child["title"] == title:
self.tabbar.show_indicator_icon(child, number)
def hide_indicator_icon(self, title):
for child in self.pages:
if child.lbl.get_label() == title:
child.set_count(0)
for child in self.tabbar.children:
if child["toggled_page"] == -1:
continue
if child["title"] == title:
self.tabbar.hide_indicator_icon(child)
def tab_switched_cb(self, widget, page):
self.notebook.set_current_page(page)
def page_added_cb(self, notebook, notebook_child, page):
if not notebook:
return
title = notebook.get_tab_label_text(notebook_child)
label = notebook.get_tab_label(notebook_child)
tooltip_markup = label.get_tooltip_markup()
if not title:
return
for child in self.tabbar.children:
if child["title"] == title:
child["toggled_page"] = page
return
self.tabbar.append_tab_child(title, page, tooltip_markup)
def page_removed_cb(self, notebook, notebook_child, page, title=""):
for child in self.tabbar.children:
if child["title"] == title:
child["toggled_page"] = -1
def blank_area_resize_cb(self, widget, request_x, request_y, request_width, request_height):
self.search.set_size_request(request_width, request_height)
def set_search_entry_editable_cb(self, search, event):
search.set_editable(True)
@@ -508,14 +819,7 @@ class HobNotebook(gtk.Notebook):
self.reset_entry(search)
def set_search_entry_clear_cb(self, search, icon_pos, event):
if search.get_editable() == True:
search.set_text("")
def set_page(self, title):
for child in self.pages:
if child.lbl.get_label() == title:
child.grab_focus()
self.set_current_page(self.page_num(child))
self.reset_entry(search)
class HobWarpCellRendererText(gtk.CellRendererText):
def __init__(self, col_number):
@@ -780,17 +1084,11 @@ class HobCellRendererToggle(gtk.CellRendererToggle):
gtk.CellRendererToggle.__init__(self)
self.ctrl = HobCellRendererController(is_draw_row=True)
self.ctrl.running_mode = self.ctrl.MODE_ONE_SHORT
self.cell_attr = {"fadeout": False, "number_of_children": 0}
self.cell_attr = {"fadeout": False}
def do_render(self, window, widget, background_area, cell_area, expose_area, flags):
if (not self.ctrl) or (not widget):
return
if flags & gtk.CELL_RENDERER_SELECTED:
state = gtk.STATE_SELECTED
else:
state = gtk.STATE_NORMAL
if self.ctrl.is_active():
path = widget.get_path_at_pos(cell_area.x + cell_area.width/2, cell_area.y + cell_area.height/2)
# sometimes the cell_area parameters can be negative, e.g. while the scroll bar is being dragged
@@ -799,23 +1097,14 @@ class HobCellRendererToggle(gtk.CellRendererToggle):
path = path[0]
if path in self.ctrl.running_cell_areas:
cr = window.cairo_create()
color = widget.get_style().base[state]
color = gtk.gdk.Color(HobColors.WHITE)
row_x, _, row_width, _ = widget.get_visible_rect()
border_y = self.get_property("ypad")
self.ctrl.on_draw_fadeinout_cb(cr, color, row_x, cell_area.y - border_y, row_width, \
cell_area.height + border_y * 2, self.cell_attr["fadeout"])
# draw number of a group
if self.cell_attr["number_of_children"]:
text = "%d pkg" % self.cell_attr["number_of_children"]
pangolayout = widget.create_pango_layout(text)
textw, texth = pangolayout.get_pixel_size()
x = cell_area.x + (cell_area.width/2) - (textw/2)
y = cell_area.y + (cell_area.height/2) - (texth/2)
widget.style.paint_layout(window, state, True, cell_area, widget, "checkbox", x, y, pangolayout)
else:
return gtk.CellRendererToggle.do_render(self, window, widget, background_area, cell_area, expose_area, flags)
return gtk.CellRendererToggle.do_render(self, window, widget, background_area, cell_area, expose_area, flags)
'''delay: normally the delay time is 1000ms
cell_list: which cells need to be rendered


@@ -22,7 +22,6 @@
import gtk
import glib
import re
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobImageButton, HobInfoButton, HobAltButton, HobButton
@@ -34,9 +33,6 @@ from bb.ui.crumbs.hobpages import HobPage
#
class ImageConfigurationPage (HobPage):
__dummy_machine__ = "--select a machine--"
__dummy_image__ = "--select a base image--"
def __init__(self, builder):
super(ImageConfigurationPage, self).__init__(builder, "Image configuration")
@@ -135,9 +131,8 @@ class ImageConfigurationPage (HobPage):
self._pack_components(pack_config_build_button = True)
self.set_config_machine_layout(show_progress_bar = False)
self.set_config_baseimg_layout()
self.set_rcppkg_layout()
self.show_all()
if self.builder.recipe_model.get_selected_image() == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
def create_config_machine(self):
self.machine_title = gtk.Label()
@@ -152,6 +147,7 @@ class ImageConfigurationPage (HobPage):
self.machine_title_desc.set_markup(mark)
self.machine_combo = gtk.combo_box_new_text()
self.machine_combo.set_wrap_width(1)
self.machine_combo.connect("changed", self.machine_combo_changed_cb)
icon_file = hic.ICON_LAYERS_DISPLAY_FILE
@@ -167,12 +163,13 @@ class ImageConfigurationPage (HobPage):
markup += "dev-manual.html#understanding-and-using-layers\">reference manual</a>."
self.layer_info_icon = HobInfoButton(markup, self.get_parent())
# self.progress_box = gtk.HBox(False, 6)
self.progress_box = gtk.HBox(False, 6)
self.progress_bar = HobProgressBar()
# self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
self.stop_button = HobAltButton("Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
# self.progress_box.pack_end(stop_button, expand=False, fill=False)
self.progress_box.pack_end(self.stop_button, expand=False, fill=False)
self.machine_separator = gtk.HSeparator()
def set_config_machine_layout(self, show_progress_bar = False):
@@ -182,9 +179,7 @@ class ImageConfigurationPage (HobPage):
self.gtable.attach(self.layer_button, 14, 36, 7, 12)
self.gtable.attach(self.layer_info_icon, 36, 40, 7, 11)
if show_progress_bar:
#self.gtable.attach(self.progress_box, 0, 40, 15, 18)
self.gtable.attach(self.progress_bar, 0, 37, 15, 18)
self.gtable.attach(self.stop_button, 37, 40, 15, 18, 0, 0)
self.gtable.attach(self.progress_box, 0, 40, 15, 19)
self.gtable.attach(self.machine_separator, 0, 40, 13, 14)
def create_config_baseimg(self):
@@ -201,21 +196,28 @@ class ImageConfigurationPage (HobPage):
self.image_title_desc.set_markup(mark)
self.image_combo = gtk.combo_box_new_text()
self.image_combo.set_wrap_width(1)
self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)
self.image_desc = gtk.Label()
self.image_desc.set_alignment(0.0, 0.5)
self.image_desc.set_size_request(256, -1)
self.image_desc.set_justify(gtk.JUSTIFY_LEFT)
self.image_desc.set_line_wrap(True)
# button to view recipes
icon_file = hic.ICON_RCIPE_DISPLAY_FILE
hover_file = hic.ICON_RCIPE_HOVER_FILE
self.view_adv_configuration_button = HobImageButton("Advanced configuration",
"Select image types, package formats, etc",
icon_file, hover_file)
self.view_adv_configuration_button.connect("clicked", self.view_adv_configuration_button_clicked_cb)
self.view_recipes_button = HobImageButton("View recipes",
"Add/remove recipes and tasks",
icon_file, hover_file)
self.view_recipes_button.connect("clicked", self.view_recipes_button_clicked_cb)
# button to view packages
icon_file = hic.ICON_PACKAGES_DISPLAY_FILE
hover_file = hic.ICON_PACKAGES_HOVER_FILE
self.view_packages_button = HobImageButton("View packages",
"Add/remove previously built packages",
icon_file, hover_file)
self.view_packages_button.connect("clicked", self.view_packages_button_clicked_cb)
self.image_separator = gtk.HSeparator()
@@ -223,27 +225,32 @@ class ImageConfigurationPage (HobPage):
self.gtable.attach(self.image_title, 0, 40, 15, 17)
self.gtable.attach(self.image_title_desc, 0, 40, 18, 22)
self.gtable.attach(self.image_combo, 0, 12, 23, 26)
self.gtable.attach(self.image_desc, 0, 12, 27, 33)
self.gtable.attach(self.view_adv_configuration_button, 14, 36, 23, 28)
self.gtable.attach(self.image_desc, 13, 38, 23, 28)
self.gtable.attach(self.image_separator, 0, 40, 35, 36)
def set_rcppkg_layout(self):
self.gtable.attach(self.view_recipes_button, 0, 20, 28, 33)
self.gtable.attach(self.view_packages_button, 20, 40, 28, 33)
def create_config_build_button(self):
# Create the "Build packages" and "Build image" buttons at the bottom
button_box = gtk.HBox(False, 6)
# create button "Build image"
self.just_bake_button = HobButton("Build image")
#self.just_bake_button.set_size_request(205, 49)
self.just_bake_button.set_tooltip_text("Build target image")
self.just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
button_box.pack_end(self.just_bake_button, expand=False, fill=False)
just_bake_button = HobButton("Build image")
just_bake_button.set_size_request(205, 49)
just_bake_button.set_tooltip_text("Build target image")
just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
button_box.pack_end(just_bake_button, expand=False, fill=False)
# create button "Edit Image"
self.edit_image_button = HobAltButton("Edit image")
#self.edit_image_button.set_size_request(205, 49)
self.edit_image_button.set_tooltip_text("Edit target image")
self.edit_image_button.connect("clicked", self.edit_image_button_clicked_cb)
button_box.pack_end(self.edit_image_button, expand=False, fill=False)
label = gtk.Label(" or ")
button_box.pack_end(label, expand=False, fill=False)
# create button "Build Packages"
build_packages_button = HobAltButton("Build packages")
build_packages_button.connect("clicked", self.build_packages_button_clicked_cb)
build_packages_button.set_tooltip_text("Build recipes into packages")
button_box.pack_end(build_packages_button, expand=False, fill=False)
return button_box
@@ -252,15 +259,9 @@ class ImageConfigurationPage (HobPage):
def machine_combo_changed_cb(self, machine_combo):
combo_item = machine_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_machine__:
if not combo_item:
return
# remove __dummy_machine__ item from the store list after first user selection
# because it is no longer valid
combo_store = machine_combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_machine__):
machine_combo.remove_text(0)
self.builder.configuration.curr_mach = combo_item
if self.machine_combo_changed_by_manual:
self.builder.configuration.clear_selection()
@@ -271,13 +272,13 @@ class ImageConfigurationPage (HobPage):
self.builder.populate_recipe_package_info_async()
def update_machine_combo(self):
all_machines = [self.__dummy_machine__] + self.builder.parameters.all_machines
all_machines = self.builder.parameters.all_machines
model = self.machine_combo.get_model()
model.clear()
for machine in all_machines:
self.machine_combo.append_text(machine)
self.machine_combo.set_active(0)
self.machine_combo.set_active(-1)
def switch_machine_combo(self):
self.machine_combo_changed_by_manual = False
@@ -288,15 +289,10 @@ class ImageConfigurationPage (HobPage):
self.machine_combo.set_active(active)
return
active += 1
self.machine_combo.set_active(-1)
if model[0][0] != self.__dummy_machine__:
self.machine_combo.insert_text(0, self.__dummy_machine__)
self.machine_combo.set_active(0)
def update_image_desc(self):
def update_image_desc(self, selected_image):
desc = ""
selected_image = self.image_combo.get_active_text()
if selected_image and selected_image in self.builder.recipe_model.pn_path.keys():
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
@@ -313,15 +309,9 @@ class ImageConfigurationPage (HobPage):
def image_combo_changed_cb(self, combo):
self.builder.window_sensitive(False)
selected_image = self.image_combo.get_active_text()
if not selected_image or (selected_image == self.__dummy_image__):
if not selected_image:
return
# remove __dummy_image__ item from the store list after first user selection
# because it is no longer valid
combo_store = combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_image__):
combo.remove_text(0)
self.builder.customized = False
selected_recipes = []
@@ -329,16 +319,13 @@ class ImageConfigurationPage (HobPage):
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
selected_packages = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_INSTALL).split()
self.update_image_desc()
self.update_image_desc(selected_image)
self.builder.recipe_model.reset()
self.builder.package_model.reset()
self.show_baseimg_selected()
if selected_image == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
glib.idle_add(self.image_combo_changed_idle_cb, selected_image, selected_recipes, selected_packages)
def _image_combo_connect_signal(self):
@@ -355,63 +342,32 @@ class ImageConfigurationPage (HobPage):
# populate image combo
filter = {RecipeListModel.COL_TYPE : ['image']}
image_model = recipe_model.tree_model(filter)
image_model.set_sort_column_id(recipe_model.COL_NAME, gtk.SORT_ASCENDING)
active = 0
cnt = 1
white_pattern = []
if self.builder.parameters.image_white_pattern:
for i in self.builder.parameters.image_white_pattern.split():
white_pattern.append(re.compile(i))
black_pattern = []
if self.builder.parameters.image_black_pattern:
for i in self.builder.parameters.image_black_pattern.split():
black_pattern.append(re.compile(i))
black_pattern.append(re.compile("hob-image"))
active = -1
cnt = 0
it = image_model.get_iter_first()
self._image_combo_disconnect_signal()
model = self.image_combo.get_model()
model.clear()
# Set an indicator text in the combo store when it is first opened
self.image_combo.append_text(self.__dummy_image__)
# append and set active
while it:
path = image_model.get_path(it)
it = image_model.iter_next(it)
image_name = image_model[path][recipe_model.COL_NAME]
if image_name == self.builder.recipe_model.__custom_image__:
if image_name == self.builder.recipe_model.__dummy_image__:
continue
if black_pattern:
allow = True
for pattern in black_pattern:
if pattern.search(image_name):
allow = False
break
elif white_pattern:
allow = False
for pattern in white_pattern:
if pattern.search(image_name):
allow = True
break
else:
allow = True
if allow:
self.image_combo.append_text(image_name)
if image_name == selected_image:
active = cnt
cnt = cnt + 1
self.image_combo.append_text(self.builder.recipe_model.__custom_image__)
if selected_image == self.builder.recipe_model.__custom_image__:
self.image_combo.append_text(image_name)
if image_name == selected_image:
active = cnt
cnt = cnt + 1
self.image_combo.append_text(self.builder.recipe_model.__dummy_image__)
if selected_image == self.builder.recipe_model.__dummy_image__:
active = cnt
self.image_combo.set_active(-1)
self.image_combo.set_active(active)
if active != 0:
if active != -1:
self.show_baseimg_selected()
self._image_combo_connect_signal()
@@ -419,20 +375,18 @@ class ImageConfigurationPage (HobPage):
def layer_button_clicked_cb(self, button):
# Create a layer selection dialog
self.builder.show_layer_selection_dialog()
def view_adv_configuration_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:
self.builder.reparse_post_adv_settings()
def view_recipes_button_clicked_cb(self, button):
self.builder.show_recipes()
def view_packages_button_clicked_cb(self, button):
self.builder.show_packages()
def just_bake_button_clicked_cb(self, button):
self.builder.just_bake()
def edit_image_button_clicked_cb(self, button):
self.builder.show_recipes()
def build_packages_button_clicked_cb(self, button):
self.builder.build_packages()
def template_button_clicked_cb(self, button):
response, path = self.builder.show_load_template_dialog()
@@ -446,7 +400,7 @@ class ImageConfigurationPage (HobPage):
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:


@@ -25,96 +25,34 @@ import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobViewTable, HobAltButton, HobButton
from bb.ui.crumbs.hobpages import HobPage
import subprocess
from bb.ui.crumbs.hig import CrumbsDialog
#
# ImageDetailsPage
#
class ImageDetailsPage (HobPage):
__columns__ = [{
'col_name' : 'Image name',
'col_id' : 0,
'col_style': 'text',
'col_min' : 500,
'col_max' : 500
}, {
'col_name' : 'Image size',
'col_id' : 1,
'col_style': 'text',
'col_min' : 100,
'col_max' : 100
}, {
'col_name' : 'Select',
'col_id' : 2,
'col_style': 'radio toggle',
'col_min' : 100,
'col_max' : 100
}]
class DetailBox (gtk.EventBox):
def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, button2=None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
style = self.get_style().copy()
style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(color, False, False)
self.set_style(style)
self.row = gtk.Table(1, 2, False)
self.row.set_border_width(10)
self.add(self.row)
total_rows = 0
if widget:
total_rows = 10
if varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
self.table.set_size_request(100, -1)
self.row.attach(self.table, 0, 1, 0, 1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if widget:
self.table.attach(widget, colid, 20, 0, 10)
rowid = 10
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
# pack the button on the right
if button:
self.bbox = gtk.VBox()
self.bbox.pack_start(button, expand=True, fill=False)
if button2:
self.bbox.pack_start(button2, expand=True, fill=False)
self.bbox.set_size_request(150,-1)
self.row.attach(self.bbox, 1, 2, 0, 1, xoptions=gtk.FILL, yoptions=gtk.EXPAND)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
if not isinstance(self.line_widgets[variable], gtk.Label):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs
wrap_width_chars = 75
outputs = ""
tmps = inputs
less_chars = len(inputs)
while (less_chars - wrap_width_chars) > 0:
less_chars -= wrap_width_chars
outputs += tmps[:wrap_width_chars] + "\n "
tmps = inputs[less_chars:]
outputs += tmps
return outputs
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
return markup
def text2label(self, variable, value):
# append the name:value to the left box
# such as "Name: hob-core-minimal-variant-2011-12-15-beagleboard"
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup(self.format_line(variable, value))
return label
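wrap_line() above folds long values every 75 characters so they fit in the details box, and format_line() then wraps the variable/value pair in Pango markup. The standard-library textwrap module gives an equivalent fold; this is shown only as a comparison, not as the code Hob uses:

    import textwrap

    value = "x" * 160
    # Fold at 75 characters and indent continuation lines, roughly what
    # wrap_line() achieves with its manual slicing loop.
    folded = "\n   ".join(textwrap.wrap(value, 75))
    print(folded.count("\n"))   # 2 continuation breaks for a 160-char value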
class BuildDetailBox (gtk.EventBox):
def __init__(self, varlist = None, vallist = None, icon = None, color = HobColors.LIGHT_GRAY):
def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
@@ -123,30 +61,34 @@ class ImageDetailsPage (HobPage):
self.set_style(style)
self.hbox = gtk.HBox()
self.hbox.set_border_width(10)
self.hbox.set_border_width(15)
self.add(self.hbox)
total_rows = 0
if varlist and vallist:
if widget:
row = 1
elif varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
row = len(varlist)
self.table = gtk.Table(row, 20, True)
self.table.set_size_request(100, -1)
self.hbox.pack_start(self.table, expand=True, fill=True, padding=15)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
if widget:
self.table.attach(widget, colid, 20, 0, 1)
elif varlist and vallist:
for line in range(0, row):
self.line_widgets[varlist[line]] = self.text2label(varlist[line], vallist[line])
self.table.attach(self.line_widgets[varlist[line]], colid, 20, line, line + 1)
# pack the button on the right
if button:
self.hbox.pack_end(button, expand=False, fill=False)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
@@ -154,23 +96,9 @@ class ImageDetailsPage (HobPage):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs
wrap_width_chars = 75
outputs = ""
tmps = inputs
less_chars = len(inputs)
while (less_chars - wrap_width_chars) > 0:
less_chars -= wrap_width_chars
outputs += tmps[:wrap_width_chars] + "\n "
tmps = inputs[less_chars:]
outputs += tmps
return outputs
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % value
return markup
def text2label(self, variable, value):
@@ -184,7 +112,7 @@ class ImageDetailsPage (HobPage):
def __init__(self, builder):
super(ImageDetailsPage, self).__init__(builder, "Image details")
self.image_store = []
self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.button_ids = {}
self.details_bottom_buttons = gtk.HBox(False, 6)
self.create_visual_elements()
@@ -229,30 +157,27 @@ class ImageDetailsPage (HobPage):
self.details_bottom_buttons.remove(child)
def show_page(self, step):
self.build_succeeded = (step == self.builder.IMAGE_GENERATED)
build_succeeded = (step == self.builder.IMAGE_GENERATED)
image_addr = self.builder.parameters.image_addr
image_names = self.builder.parameters.image_names
if self.build_succeeded:
if build_succeeded:
machine = self.builder.configuration.curr_mach
base_image = self.builder.recipe_model.get_selected_image()
layers = self.builder.configuration.layers
pkg_num = "%s" % len(self.builder.package_model.get_selected_packages())
log_file = self.builder.current_logfile
else:
pkg_num = "N/A"
log_file = None
# remove
for button_id, button in self.button_ids.items():
button.disconnect(button_id)
self._remove_all_widget()
# repack
self.pack_start(self.details_top_buttons, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.build_result = None
if self.build_succeeded and self.builder.current_step == self.builder.IMAGE_GENERATING:
if build_succeeded:
# building is the previous step
icon = gtk.Image()
pixmap_path = hic.ICON_INDI_CONFIRM_FILE
@@ -261,93 +186,49 @@ class ImageDetailsPage (HobPage):
icon.set_from_pixbuf(pix_buffer)
varlist = [""]
vallist = ["Your image is ready"]
self.build_result = self.BuildDetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.build_result = self.DetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
# create the buttons at the bottom first because the buttons are used in apply_button_per_image()
if self.build_succeeded:
if build_succeeded:
self.buttonlist = ["Build new image", "Save as template", "Run image", "Deploy image"]
else: # get to this page from "My images"
self.buttonlist = ["Build new image", "Run image", "Deploy image"]
# Name
self.image_store = []
self.toggled_image = ""
self.image_store.clear()
default_toggled = False
default_image_size = 0
self.num_toggled = 0
i = 0
for image_name in image_names:
image_size = HobPage._size_to_string(os.stat(os.path.join(image_addr, image_name)).st_size)
image_attr = ("run" if (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) else \
("deploy" if self.test_deployable(image_name) else ""))
is_toggled = (image_attr != "")
if not self.toggled_image:
if not default_toggled:
default_toggled = (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) \
or self.test_deployable(image_name)
if i == (len(image_names) - 1):
is_toggled = True
if is_toggled:
default_toggled = True
self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, default_toggled)
if default_toggled:
default_image_size = image_size
self.toggled_image = image_name
split_stuff = image_name.split('.')
if "rootfs" in split_stuff:
image_type = image_name[(len(split_stuff[0]) + len(".rootfs") + 1):]
self.create_bottom_buttons(self.buttonlist, image_name)
else:
image_type = image_name[(len(split_stuff[0]) + 1):]
self.image_store.append({'name': image_name,
'type': image_type,
'size': image_size,
'is_toggled': is_toggled,
'action_attr': image_attr,})
self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, False)
i = i + 1
self.num_toggled += is_toggled
is_runnable = self.create_bottom_buttons(self.buttonlist, self.toggled_image)
# Generated image files info
varlist = ["Name: ", "Files created: ", "Directory: "]
vallist = []
vallist.append(image_name.split('.')[0])
vallist.append(', '.join(fileitem['type'] for fileitem in self.image_store))
vallist.append(image_addr)
image_table = HobViewTable(self.__columns__)
image_table.set_model(self.image_store)
image_table.connect("toggled", self.toggled_cb)
view_files_button = HobAltButton("View files")
view_files_button.connect("clicked", self.view_files_clicked_cb, image_addr)
view_files_button.set_tooltip_text("Open the directory containing the image files")
open_log_button = None
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.image_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=view_files_button, button2=open_log_button)
self.box_group_area.pack_start(self.image_detail, expand=False, fill=True)
# The default kernel box for the qemu images
self.sel_kernel = ""
self.kernel_detail = None
if 'qemu' in image_name:
self.sel_kernel = self.get_kernel_file_name()
# varlist = ["Kernel: "]
# vallist = []
# vallist.append(self.sel_kernel)
# change_kernel_button = HobAltButton("Change")
# change_kernel_button.connect("clicked", self.change_kernel_cb)
# change_kernel_button.set_tooltip_text("Change qemu kernel file")
# self.kernel_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=change_kernel_button)
# self.box_group_area.pack_start(self.kernel_detail, expand=True, fill=True)
self.image_detail = self.DetailBox(widget=image_table, button=view_files_button)
self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)
# Machine, Base image and Layers
layer_num_limit = 15
varlist = ["Machine: ", "Base image: ", "Layers: "]
vallist = []
self.setting_detail = None
if self.build_succeeded:
if build_succeeded:
vallist.append(machine)
vallist.append(base_image)
i = 0
@@ -371,40 +252,29 @@ class ImageDetailsPage (HobPage):
edit_config_button.set_tooltip_text("Edit machine, base image and recipes")
edit_config_button.connect("clicked", self.edit_config_button_clicked_cb)
self.setting_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_config_button)
self.box_group_area.pack_start(self.setting_detail, expand=True, fill=True)
self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)
# Packages included, and Total image size
varlist = ["Packages included: ", "Total image size: "]
vallist = []
vallist.append(pkg_num)
vallist.append(default_image_size)
if self.build_succeeded:
if build_succeeded:
edit_packages_button = HobAltButton("Edit packages")
edit_packages_button.set_tooltip_text("Edit the packages included in your image")
edit_packages_button.connect("clicked", self.edit_packages_button_clicked_cb)
else: # get to this page from "My images"
edit_packages_button = None
self.package_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_packages_button)
self.box_group_area.pack_start(self.package_detail, expand=True, fill=True)
self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)
# pack the buttons at the bottom, at this time they are already created.
if self.build_succeeded:
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
else: # for "My images" page
self.details_separator = gtk.HSeparator()
self.box_group_area.pack_start(self.details_separator, expand=False, fill=False)
self.box_group_area.pack_start(self.details_bottom_buttons, expand=False, fill=False)
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
self.show_all()
if self.kernel_detail and (not is_runnable):
self.kernel_detail.hide()
def view_files_clicked_cb(self, button, image_addr):
subprocess.call("xdg-open /%s" % image_addr, shell=True)
def open_log_clicked_cb(self, button, log_file):
if log_file:
os.system("xdg-open /%s" % log_file)
os.system("xdg-open /%s" % image_addr)
def refresh_package_detail_box(self, image_size):
self.package_detail.update_line_widgets("Total image size: ", image_size)
@@ -426,8 +296,6 @@ class ImageDetailsPage (HobPage):
return mach_runnable
def test_deployable(self, image_name):
if self.builder.configuration.curr_mach.startswith("qemu"):
return False
deployable = False
for t in self.builder.parameters.deployable_image_types:
if image_name.endswith(t):
@@ -435,121 +303,49 @@ class ImageDetailsPage (HobPage):
break
return deployable
def get_kernel_file_name(self, kernel_addr=""):
kernel_name = ""
if not kernel_addr:
kernel_addr = self.builder.parameters.image_addr
files = [f for f in os.listdir(kernel_addr) if f[0] <> '.']
for check_file in files:
if check_file.endswith(".bin"):
name_splits = check_file.split(".")[0]
if self.builder.parameters.kernel_image_type in name_splits.split("-"):
kernel_name = check_file
break
return kernel_name
def show_builded_images_dialog(self, widget, primary_action=""):
title = primary_action if primary_action else "Your builded images"
dialog = CrumbsDialog(title, self.builder,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
dialog.set_border_width(12)
label = gtk.Label()
label.set_use_markup(True)
label.set_alignment(0.0, 0.5)
label.set_padding(12,0)
if primary_action == "Run image":
label.set_markup("<span font_desc='12'>Select the image file you want to run:</span>")
elif primary_action == "Deploy image":
label.set_markup("<span font_desc='12'>Select the image file you want to deploy:</span>")
else:
label.set_markup("<span font_desc='12'>Select the image file you want to %s</span>" % primary_action)
dialog.vbox.pack_start(label, expand=False, fill=False)
# filter created images as action attribution (deploy or run)
action_attr = ""
action_images = []
for fileitem in self.image_store:
action_attr = fileitem['action_attr']
if (action_attr == 'run' and primary_action == "Run image") \
or (action_attr == 'deploy' and primary_action == "Deploy image"):
action_images.append(fileitem)
# pack the corresponding 'runnable' or 'deploy' radio buttons when there is more than one file.
# Assume a single build result never contains both 'deploy' and 'runnable' files;
# that combination is not possible by design.
curr_row = 0
rows = (len(action_images)) if len(action_images) < 10 else 10
table = gtk.Table(rows, 10, True)
table.set_row_spacings(6)
table.set_col_spacing(0, 12)
table.set_col_spacing(5, 12)
sel_parent_btn = None
for fileitem in action_images:
sel_btn = gtk.RadioButton(sel_parent_btn, fileitem['type'])
sel_parent_btn = sel_btn if not sel_parent_btn else sel_parent_btn
sel_btn.set_active(fileitem['is_toggled'])
sel_btn.connect('toggled', self.table_selected_cb, fileitem)
if curr_row < 10:
table.attach(sel_btn, 0, 4, curr_row, curr_row + 1, xpadding=24)
else:
table.attach(sel_btn, 5, 9, curr_row - 10, curr_row - 9, xpadding=24)
curr_row += 1
dialog.vbox.pack_start(table, expand=False, fill=False, padding=6)
button = dialog.add_button("Cancel", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(button)
if primary_action:
button = dialog.add_button(primary_action, gtk.RESPONSE_YES)
HobButton.style_button(button)
dialog.show_all()
response = dialog.run()
dialog.destroy()
if response != gtk.RESPONSE_YES:
def toggled_cb(self, table, cell, path, columnid, tree):
model = tree.get_model()
if not model:
return
iter = model.get_iter_first()
while iter:
rowpath = model.get_path(iter)
model[rowpath][columnid] = False
iter = model.iter_next(iter)
for fileitem in self.image_store:
if fileitem['is_toggled']:
if fileitem['action_attr'] == 'run':
self.builder.runqemu_image(fileitem['name'], self.sel_kernel)
elif fileitem['action_attr'] == 'deploy':
self.builder.deploy_image(fileitem['name'])
model[path][columnid] = True
self.refresh_package_detail_box(model[path][1])
def table_selected_cb(self, tbutton, image):
image['is_toggled'] = tbutton.get_active()
if image['is_toggled']:
self.toggled_image = image['name']
image_name = model[path][0]
def change_kernel_cb(self, widget):
kernel_path = self.builder.show_load_kernel_dialog()
if kernel_path and self.kernel_detail:
import os.path
self.sel_kernel = os.path.basename(kernel_path)
markup = self.kernel_detail.format_line("Kernel: ", self.sel_kernel)
label = ((self.kernel_detail.get_children()[0]).get_children()[0]).get_children()[0]
label.set_markup(markup)
# remove
for button_id, button in self.button_ids.items():
button.disconnect(button_id)
self._remove_all_widget()
# repack
self.pack_start(self.details_top_buttons, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
if self.build_result:
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)
if self.setting_detail:
self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)
self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)
self.create_bottom_buttons(self.buttonlist, image_name)
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
self.show_all()
def create_bottom_buttons(self, buttonlist, image_name):
# Create the buttons at the bottom
created = False
packed = False
self.button_ids = {}
is_runnable = False
# create button "Deploy image"
name = "Deploy image"
if name in buttonlist and self.test_deployable(image_name):
deploy_button = HobButton('Deploy image')
#deploy_button.set_size_request(205, 49)
deploy_button.set_size_request(205, 49)
deploy_button.set_tooltip_text("Burn a live image to a USB drive or flash memory")
deploy_button.set_flags(gtk.CAN_DEFAULT)
button_id = deploy_button.connect("clicked", self.deploy_button_clicked_cb)
@@ -562,15 +358,15 @@ class ImageDetailsPage (HobPage):
if name in buttonlist and self.test_type_runnable(image_name) and self.test_mach_runnable(image_name):
if created == True:
# separator
#label = gtk.Label(" or ")
#self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
label = gtk.Label(" or ")
self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
# create button "Run image"
run_button = HobAltButton("Run image")
else:
# create button "Run image" as the primary button
run_button = HobButton("Run image")
#run_button.set_size_request(205, 49)
run_button.set_size_request(205, 49)
run_button.set_flags(gtk.CAN_DEFAULT)
packed = True
run_button.set_tooltip_text("Start up an image with qemu emulator")
@@ -578,22 +374,25 @@ class ImageDetailsPage (HobPage):
self.button_ids[button_id] = run_button
self.details_bottom_buttons.pack_end(run_button, expand=False, fill=False)
created = True
is_runnable = True
if not packed:
box = gtk.HBox(False, 6)
box.show()
subbox = gtk.HBox(False, 0)
subbox.set_size_request(205, 49)
subbox.show()
box.add(subbox)
self.details_bottom_buttons.pack_end(box, False, False)
name = "Save as template"
if name in buttonlist:
if created == True:
# separator
#label = gtk.Label(" or ")
#self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
label = gtk.Label(" or ")
self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
# create button "Save as template"
save_button = HobAltButton("Save as template")
else:
save_button = HobButton("Save as template")
#save_button.set_size_request(205, 49)
save_button.set_flags(gtk.CAN_DEFAULT)
packed = True
# create button "Save as template"
save_button = HobAltButton("Save as template")
save_button.set_tooltip_text("Save the image configuration for reuse")
button_id = save_button.connect("clicked", self.save_button_clicked_cb)
self.button_ids[button_id] = save_button
@@ -603,39 +402,34 @@ class ImageDetailsPage (HobPage):
name = "Build new image"
if name in buttonlist:
# create button "Build new image"
if packed:
build_new_button = HobAltButton("Build new image")
else:
build_new_button = HobButton("Build new image")
build_new_button.set_flags(gtk.CAN_DEFAULT)
#build_new_button.set_size_request(205, 49)
self.details_bottom_buttons.pack_end(build_new_button, expand=False, fill=False)
build_new_button = HobAltButton("Build new image")
build_new_button.set_tooltip_text("Create a new image from scratch")
button_id = build_new_button.connect("clicked", self.build_new_button_clicked_cb)
self.button_ids[button_id] = build_new_button
self.details_bottom_buttons.pack_start(build_new_button, expand=False, fill=False)
return is_runnable
def _get_selected_image(self):
image_name = ""
iter = self.image_store.get_iter_first()
while iter:
path = self.image_store.get_path(iter)
if self.image_store[path][2]:
image_name = self.image_store[path][0]
break
iter = self.image_store.iter_next(iter)
return image_name
def save_button_clicked_cb(self, button):
self.builder.show_save_template_dialog()
def deploy_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Deploy image")
self.set_sensitive(True)
else:
self.builder.deploy_image(self.toggled_image)
image_name = self._get_selected_image()
self.builder.deploy_image(image_name)
def run_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Run image")
self.set_sensitive(True)
else:
self.builder.runqemu_image(self.toggled_image, self.sel_kernel)
image_name = self._get_selected_image()
self.builder.runqemu_image(image_name)
def build_new_button_clicked_cb(self, button):
self.builder.initiate_new_build_async()
@@ -658,7 +452,7 @@ class ImageDetailsPage (HobPage):
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:


@@ -34,8 +34,7 @@ class PackageSelectionPage (HobPage):
pages = [
{
'name' : 'Included packages',
'tooltip' : 'The packages currently included for your image',
'name' : 'Included',
'filter' : { PackageListModel.COL_INC : [True] },
'columns' : [{
'col_name' : 'Package name',
@@ -44,13 +43,6 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Size',
'col_id' : PackageListModel.COL_SIZE,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Brought in by',
'col_id' : PackageListModel.COL_BINB,
@@ -58,6 +50,13 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 350,
'expand' : 'True'
}, {
'col_name' : 'Size',
'col_id' : PackageListModel.COL_SIZE,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
@@ -67,7 +66,6 @@ class PackageSelectionPage (HobPage):
}]
}, {
'name' : 'All packages',
'tooltip' : 'All packages that have been built',
'filter' : {},
'columns' : [{
'col_name' : 'Package name',
@@ -92,12 +90,9 @@ class PackageSelectionPage (HobPage):
}]
}
]
(INCLUDED,
ALL) = range(2)
def __init__(self, builder):
super(PackageSelectionPage, self).__init__(builder, "Edit packages")
super(PackageSelectionPage, self).__init__(builder, "Packages")
# set invisiable members
self.recipe_model = self.builder.recipe_model
@@ -106,16 +101,13 @@ class PackageSelectionPage (HobPage):
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.label = gtk.Label("Packages included: 0\nSelected packages size: 0 MB")
self.eventbox = self.add_onto_top_bar(self.label, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
# set visiable members
self.ins = HobNotebook()
self.tables = [] # we need to modify table when the dialog is shown
# append the tab
@@ -125,37 +117,35 @@ class PackageSelectionPage (HobPage):
filter = page['filter']
tab.set_model(self.package_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included packages":
if page['name'] == "Included":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
self.ins.append_page(tab, page['name'], page['tooltip'])
label = gtk.Label(page['name'])
self.ins.append_page(tab, label)
self.tables.append(tab)
self.ins.set_entry("Search packages:")
# set the search entry for each table
for tab in self.tables:
search_tip = "Enter a package name to find it"
self.ins.search.set_tooltip_text(search_tip)
self.ins.search.props.has_tooltip = True
tab.set_search_entry(0, self.ins.search)
# add all into the dialog
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
self.button_box = gtk.HBox(False, 6)
self.box_group_area.pack_start(self.button_box, expand=False, fill=False)
button_box = gtk.HBox(False, 6)
self.box_group_area.pack_start(button_box, expand=False, fill=False)
self.build_image_button = HobButton('Build image')
#self.build_image_button.set_size_request(205, 49)
self.build_image_button.set_size_request(205, 49)
self.build_image_button.set_tooltip_text("Build target image")
self.build_image_button.set_flags(gtk.CAN_DEFAULT)
self.build_image_button.grab_default()
self.build_image_button.connect("clicked", self.build_image_clicked_cb)
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
button_box.pack_end(self.build_image_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
button_box.pack_start(self.back_button, expand=False, fill=False)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
@@ -165,34 +155,11 @@ class PackageSelectionPage (HobPage):
if binb:
self.builder.show_binb_dialog(binb)
def open_log_clicked_cb(self, button, log_file):
if log_file:
os.system("xdg-open /%s" % log_file)
def show_page(self, log_file):
children = self.button_box.get_children() or []
for child in children:
self.button_box.remove(child)
# re-packed the buttons as request, add the 'open log' button if build success
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.button_box.pack_end(open_log_button, expand=False, fill=False)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
self.show_all()
def build_image_clicked_cb(self, button):
self.builder.build_image()
def back_button_clicked_cb(self, button):
if self.builder.previous_step == self.builder.IMAGE_GENERATED:
self.builder.restore_initial_selected_packages()
self.refresh_selection()
self.builder.show_image_details()
else:
self.builder.show_configuration()
self.builder.show_configuration()
def _expand_all(self):
for tab in self.tables:
@@ -216,15 +183,15 @@ class PackageSelectionPage (HobPage):
image_total_size += (51200 * 1024)
image_total_size_str = HobPage._size_to_string(image_total_size)
self.label.set_label("Packages included: %s\nSelected packages size: %s\nTotal image size: %s" %
self.label.set_text("Packages included: %s\nSelected packages size: %s\nTotal image size: %s" %
(selected_packages_num, selected_packages_size_str, image_total_size_str))
self.ins.show_indicator_icon("Included packages", selected_packages_num)
self.ins.show_indicator_icon("Included", selected_packages_num)
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.package_model.path_included(path):
self.package_model.include_item(item_path=path, binb="User Selected")
else:
if pagename == "Included packages":
if pagename == "Included":
self.pre_fadeout_checkout_include(view_tree)
self.package_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
@@ -234,7 +201,7 @@ class PackageSelectionPage (HobPage):
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.configuration.selected_image = self.recipe_model.__dummy_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
@@ -280,7 +247,3 @@ class PackageSelectionPage (HobPage):
def after_fadeout_checkin_include(self, table, ctrl, cell, tree):
tree.set_model(self.package_model.tree_model(self.pages[0]['filter']))
tree.expand_all()
def set_packages_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)


@@ -11,9 +11,6 @@ class ProgressBar(gtk.Dialog):
self.vbox.pack_start(self.progress)
self.show_all()
def set_text(self, msg):
self.progress.set_text(msg)
def update(self, x, y):
self.progress.set_fraction(float(x)/float(y))
self.progress.set_text("%2d %%" % (x*100/y))


@@ -33,10 +33,10 @@ from bb.ui.crumbs.hobpages import HobPage
class RecipeSelectionPage (HobPage):
pages = [
{
'name' : 'Included recipes',
'name' : 'Included',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] },
RecipeListModel.COL_TYPE : ['recipe', 'task'] },
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
@@ -44,13 +44,6 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Brought in by',
'col_id' : RecipeListModel.COL_BINB,
@@ -58,6 +51,13 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -67,7 +67,7 @@ class RecipeSelectionPage (HobPage):
}]
}, {
'name' : 'All recipes',
'tooltip' : 'All recipes in your configured layers',
'tooltip' : 'All recipes available in the Yocto Project',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'columns' : [{
'col_name' : 'Recipe name',
@@ -76,13 +76,6 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'License',
'col_id' : RecipeListModel.COL_LIC,
@@ -90,6 +83,13 @@ class RecipeSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -98,16 +98,23 @@ class RecipeSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'Package Groups',
'tooltip' : 'All package groups in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['packagegroup'] },
'name' : 'Tasks',
'tooltip' : 'All tasks available in the Yocto Project',
'filter' : { RecipeListModel.COL_TYPE : ['task'] },
'columns' : [{
'col_name' : 'Package group name',
'col_name' : 'Task name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Description',
'col_id' : RecipeListModel.COL_DESC,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
@@ -117,29 +124,23 @@ class RecipeSelectionPage (HobPage):
}]
}
]
(INCLUDED,
ALL,
TASKS) = range(3)
def __init__(self, builder = None):
super(RecipeSelectionPage, self).__init__(builder, "Step 1 of 2: Edit recipes")
super(RecipeSelectionPage, self).__init__(builder, "Recipes")
# set invisible members
# set invisiable members
self.recipe_model = self.builder.recipe_model
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.eventbox = self.add_onto_top_bar(None, 73)
self.label = gtk.Label()
self.eventbox = self.add_onto_top_bar(self.label, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
# set visiable members
self.ins = HobNotebook()
self.tables = [] # we need modify table when the dialog is shown
# append the tabs in order
@@ -149,10 +150,13 @@ class RecipeSelectionPage (HobPage):
filter = page['filter']
tab.set_model(self.recipe_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included recipes":
if page['name'] == "Included":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
self.ins.append_page(tab, page['name'], page['tooltip'])
label = gtk.Label(page['name'])
label.set_selectable(False)
label.set_tooltip_text(page['tooltip'])
self.ins.append_page(tab, label)
self.tables.append(tab)
self.ins.set_entry("Search recipes:")
@@ -170,16 +174,16 @@ class RecipeSelectionPage (HobPage):
self.box_group_area.pack_end(button_box, expand=False, fill=False)
self.build_packages_button = HobButton('Build packages')
#self.build_packages_button.set_size_request(205, 49)
self.build_packages_button.set_size_request(205, 49)
self.build_packages_button.set_tooltip_text("Build selected recipes into packages")
self.build_packages_button.set_flags(gtk.CAN_DEFAULT)
self.build_packages_button.grab_default()
self.build_packages_button.connect("clicked", self.build_packages_clicked_cb)
button_box.pack_end(self.build_packages_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button = HobAltButton("<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
button_box.pack_end(self.back_button, expand=False, fill=False)
button_box.pack_start(self.back_button, expand=False, fill=False)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
@@ -198,13 +202,14 @@ class RecipeSelectionPage (HobPage):
def refresh_selection(self):
self.builder.configuration.selected_image = self.recipe_model.get_selected_image()
_, self.builder.configuration.selected_recipes = self.recipe_model.get_selected_recipes()
self.ins.show_indicator_icon("Included recipes", len(self.builder.configuration.selected_recipes))
self.label.set_text("Recipes included: %s" % len(self.builder.configuration.selected_recipes))
self.ins.show_indicator_icon("Included", len(self.builder.configuration.selected_recipes))
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.recipe_model.path_included(path):
self.recipe_model.include_item(item_path=path, binb="User Selected", image_contents=False)
else:
if pagename == "Included recipes":
if pagename == "Included":
self.pre_fadeout_checkout_include(view_tree)
self.recipe_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
@@ -214,7 +219,7 @@ class RecipeSelectionPage (HobPage):
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.configuration.selected_image = self.recipe_model.__dummy_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
@@ -236,7 +241,7 @@ class RecipeSelectionPage (HobPage):
# Check out a model based on the column COL_FADE_INC,
# which saves the previous state of column COL_INC before exclude_item is called
filter = { RecipeListModel.COL_FADE_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] }
RecipeListModel.COL_TYPE : ['recipe', 'task'] }
new_model = self.recipe_model.tree_model(filter, excluded_items_ahead=True)
tree.set_model(new_model)
@@ -258,6 +263,3 @@ class RecipeSelectionPage (HobPage):
def after_fadeout_checkin_include(self, table, ctrl, cell, tree):
tree.set_model(self.recipe_model.tree_model(self.pages[0]['filter']))
def set_recipe_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)


@@ -76,9 +76,6 @@ class RunningBuild (gobject.GObject):
'build-complete' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-aborted' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'task-started' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
@@ -88,9 +85,6 @@ class RunningBuild (gobject.GObject):
'no-provider' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
'log' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_PYOBJECT,)),
}
pids_to_task = {}
tasks_to_iter = {}
@@ -99,7 +93,6 @@ class RunningBuild (gobject.GObject):
gobject.GObject.__init__ (self)
self.model = RunningBuildModel()
self.sequential = sequential
self.buildaborted = False
def reset (self):
self.pids_to_task.clear()
@@ -129,8 +122,6 @@ class RunningBuild (gobject.GObject):
parent = self.tasks_to_iter[(package, task)]
if(isinstance(event, logging.LogRecord)):
if event.taskpid == 0 or event.levelno > logging.INFO:
self.emit("log", "handle", event)
# FIXME: this is a hack! More info in Yocto #1433
# http://bugzilla.pokylinux.org/show_bug.cgi?id=1433, temporarily
# mask the error message as it's not informative for the user.
@@ -216,7 +207,6 @@ class RunningBuild (gobject.GObject):
self.tasks_to_iter[(package, task)] = i
elif isinstance(event, bb.build.TaskBase):
self.emit("log", "info", event._message)
current = self.tasks_to_iter[(package, task)]
parent = self.tasks_to_iter[(package, None)]
@@ -284,9 +274,7 @@ class RunningBuild (gobject.GObject):
0))
# Emit the appropriate signal depending on the number of failures
if self.buildaborted:
self.emit ("build-aborted")
elif (failures >= 1):
if (failures >= 1):
self.emit ("build-failed")
else:
self.emit ("build-succeeded")
@@ -298,11 +286,7 @@ class RunningBuild (gobject.GObject):
if pbar:
pbar.set_text(event.msg)
elif isinstance(event, bb.event.DiskFull):
self.buildaborted = True
elif isinstance(event, bb.command.CommandFailed):
self.emit("log", "error", "Command execution failed: %s" % (event.error))
if event.error.startswith("Exited with"):
# If the command fails with an exit code we're done, emit the
# generic signal for the UI to notify the user
@@ -330,24 +314,7 @@ class RunningBuild (gobject.GObject):
elif isinstance(event, bb.event.ParseCompleted) and pbar:
pbar.hide()
#using runqueue events as many as possible to update the progress bar
elif isinstance(event, bb.runqueue.runQueueTaskFailed):
self.emit("log", "error", "Task %s (%s) failed with exit code '%s'" % (event.taskid, event.taskstring, event.exitcode))
elif isinstance(event, bb.runqueue.sceneQueueTaskFailed):
self.emit("log", "warn", "Setscene task %s (%s) failed with exit code '%s' - real task will be run instead" \
% (event.taskid, event.taskstring, event.exitcode))
elif isinstance(event, (bb.runqueue.runQueueTaskStarted, bb.runqueue.sceneQueueTaskStarted)):
if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
self.emit("log", "info", "Running setscene task %d of %d (%s)" % \
(event.stats.completed + event.stats.active + event.stats.failed + 1,
event.stats.total, event.taskstring))
else:
if event.noexec:
tasktype = 'noexec task'
else:
tasktype = 'task'
self.emit("log", "info", "Running %s %s of %s (ID: %s, %s)" % \
(tasktype, event.stats.completed + event.stats.active + event.stats.failed + 1,
event.stats.total, event.taskid, event.taskstring))
message = {}
message["eventname"] = bb.event.getName(event)
num_of_completed = event.stats.completed + event.stats.failed
@@ -356,10 +323,6 @@ class RunningBuild (gobject.GObject):
message["title"] = ""
message["task"] = event.taskstring
self.emit("task-started", message)
elif isinstance(event, bb.event.MultipleProviders):
self.emit("log", "info", "multiple providers are available for %s%s (%s)" \
% (event._is_runtime and "runtime " or "", event._item, ", ".join(event._candidates)))
self.emit("log", "info", "consider defining a PREFERRED_PROVIDER entry to match %s" % (event._item))
elif isinstance(event, bb.event.NoProvider):
msg = ""
if event._runtime:
@@ -374,34 +337,6 @@ class RunningBuild (gobject.GObject):
for reason in event._reasons:
msg += ("%s\n" % reason)
self.emit("no-provider", msg)
self.emit("log", "error", msg)
elif isinstance(event, bb.event.LogExecTTY):
icon = "dialog-warning"
color = HobColors.WARNING
if self.sequential or not parent:
tree_add = self.model.append
else:
tree_add = self.model.prepend
tree_add(parent,
(None,
package,
task,
event.msg,
icon,
color,
0))
else:
if not isinstance(event, (bb.event.BuildBase,
bb.event.StampUpdate,
bb.event.ConfigParsed,
bb.event.RecipeParsed,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait,
bb.event.OperationStarted,
bb.event.OperationCompleted,
bb.event.OperationProgress)):
self.emit("log", "error", "Unknown event: %s" % (event.error if hasattr(event, 'error') else 'error'))
return


@@ -1,85 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Bogdan Marinescu <bogdan.a.marinescu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk, gobject
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic
from bb.ui.crumbs.hobpages import HobPage
#
# SanityCheckPage
#
class SanityCheckPage (HobPage):
def __init__(self, builder):
super(SanityCheckPage, self).__init__(builder)
self.running = False
self.create_visual_elements()
self.show_all()
def make_label(self, text, bold=True):
label = gtk.Label()
label.set_alignment(0.0, 0.5)
mark = "<span %s>%s</span>" % (self.span_tag('x-large', 'bold') if bold else self.span_tag('medium'), text)
label.set_markup(mark)
return label
def start(self):
if not self.running:
self.running = True
gobject.timeout_add(100, self.timer_func)
def stop(self):
self.running = False
def is_running(self):
return self.running
def timer_func(self):
self.progress_bar.pulse()
return self.running
def create_visual_elements(self):
# Table'd layout. 'rows' and 'cols' give the table size
rows, cols = 30, 50
self.table = gtk.Table(rows, cols, True)
self.pack_start(self.table, expand=False, fill=False)
sx, sy = 2, 2
# 'info' icon
image = gtk.Image()
image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
self.table.attach(image, sx, sx + 2, sy, sy + 3 )
image.show()
# 'Checking' message
label = self.make_label('Hob is checking for correct build system setup')
self.table.attach(label, sx + 2, cols, sy, sy + 3, xpadding=5 )
label.show()
# 'Shouldn't take long' message.
label = self.make_label("The check shouldn't take long.", False)
self.table.attach(label, sx + 2, cols, sy + 3, sy + 4, xpadding=5)
label.show()
# Progress bar
self.progress_bar = HobProgressBar()
self.table.attach(self.progress_bar, sx + 2, cols - 3, sy + 5, sy + 7, xpadding=5)
self.progress_bar.show()
# All done
self.table.show()


@@ -101,19 +101,7 @@ class HobTemplateFile(ConfigFile):
return self.dictionary[var]
else:
return ""
def getVersion(self):
contents = ConfigFile.readFile(self)
pattern = "^\s*(\S+)\s*=\s*(\".*?\")"
for line in contents:
match = re.search(pattern, line)
if match:
if match.group(1) == "VERSION":
return match.group(2).strip('"')
return None
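As a quick illustration of the getVersion() regex above: for a template line of the (hypothetical) form VERSION = "1.0", the pattern captures the variable name and the quoted value, and the surrounding quotes are stripped before returning. A minimal sketch:

    import re

    pattern = r"^\s*(\S+)\s*=\s*(\".*?\")"
    line = 'VERSION = "1.0"'            # hypothetical template line, not from a real file
    match = re.search(pattern, line)
    if match and match.group(1) == "VERSION":
        # strip the surrounding quotes, as getVersion() does
        print(match.group(2).strip('"'))   # -> 1.0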
def load(self):
contents = ConfigFile.readFile(self)
self.dictionary.clear()
@@ -137,7 +125,7 @@ class RecipeFile(ConfigFile):
class TemplateMgr(gobject.GObject):
__gLocalVars__ = ["MACHINE", "PACKAGE_CLASSES", "DISTRO", "DL_DIR", "SSTATE_DIR", "SSTATE_MIRRORS", "PARALLEL_MAKE", "BB_NUMBER_THREADS", "CONF_VERSION"]
__gLocalVars__ = ["MACHINE", "PACKAGE_CLASSES", "DISTRO", "DL_DIR", "SSTATE_DIR", "SSTATE_MIRROR", "PARALLEL_MAKE", "BB_NUMBER_THREADS", "CONF_VERSION"]
__gBBLayersVars__ = ["BBLAYERS", "LCONF_VERSION"]
__gRecipeVars__ = ["DEPENDS", "IMAGE_INSTALL"]
@@ -186,9 +174,6 @@ class TemplateMgr(gobject.GObject):
self.image_bb.save()
self.template_hob.save()
def getVersion(self, path):
return HobTemplateFile(path).getVersion()
def load(self, path):
self.template_hob = HobTemplateFile(path)
self.dictionary = self.template_hob.load()


@@ -22,7 +22,6 @@
# bitbake which will allow more flexibility.
import os
import bb
def which_terminal():
term = bb.utils.which(os.environ["PATH"], "xterm")


@@ -24,7 +24,7 @@ import threading
import xmlrpclib
import bb
import bb.event
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.progress import ProgressBar
# Package Model
(COL_PKG_NAME) = (0)
@@ -198,17 +198,23 @@ class gtkthread(threading.Thread):
def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if cmdline and not cmdline['action']:
print(cmdline['msg'])
return
elif not cmdline or (cmdline['action'] and cmdline['action'][0] != "generateDotGraph"):
cmdline, error = server.runCommand(["getCmdLineAction"])
if error:
print("Error getting bitbake commandline: %s" % error)
return 1
elif not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline or cmdline[0] != "generateDotGraph":
print("This UI is only compatible with the -g option")
return
ret = server.runCommand(["generateDepTreeEvent", cmdline['action'][1], cmdline['action'][2]])
if ret != True:
print("Couldn't run command! %s" % ret)
return
return 1
ret, error = server.runCommand(["generateDepTreeEvent", cmdline[1], cmdline[2]])
if error:
print("Error running command '%s': %s" % (cmdline, error))
return 1
elif ret != True:
print("Error running command '%s': returned %s" % (cmdline, ret))
return 1
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
return
@@ -220,13 +226,8 @@ def main(server, eventHandler):
gtk.gdk.threads_enter()
dep = DepExplorer()
bardialog = gtk.Dialog(parent=dep,
flags=gtk.DIALOG_MODAL|gtk.DIALOG_DESTROY_WITH_PARENT)
bardialog.set_default_size(400, 50)
pbar = HobProgressBar()
bardialog.vbox.pack_start(pbar)
bardialog.show_all()
bardialog.connect("delete-event", gtk.main_quit)
pbar = ProgressBar(dep)
pbar.connect("delete-event", gtk.main_quit)
gtk.gdk.threads_leave()
progress_total = 0
@@ -234,7 +235,9 @@ def main(server, eventHandler):
try:
event = eventHandler.waitEvent(0.25)
if gtkthread.quit.isSet():
server.runCommand(["stateStop"])
_, error = server.runCommand(["stateStop"])
if error:
print('Unable to cleanly stop: %s' % error)
break
if event is None:
@@ -243,20 +246,19 @@ def main(server, eventHandler):
if isinstance(event, bb.event.CacheLoadStarted):
progress_total = event.total
gtk.gdk.threads_enter()
bardialog.set_title("Loading Cache")
pbar.update(0)
pbar.set_title("Loading Cache")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.CacheLoadProgress):
x = event.current
gtk.gdk.threads_enter()
pbar.update(x * 1.0 / progress_total)
pbar.set_title('')
pbar.update(x, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.CacheLoadCompleted):
bardialog.hide()
pbar.hide()
continue
if isinstance(event, bb.event.ParseStarted):
@@ -264,21 +266,19 @@ def main(server, eventHandler):
if progress_total == 0:
continue
gtk.gdk.threads_enter()
pbar.update(0)
bardialog.set_title("Processing recipes")
pbar.set_title("Processing recipes")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.ParseProgress):
x = event.current
gtk.gdk.threads_enter()
pbar.update(x * 1.0 / progress_total)
pbar.set_title('')
pbar.update(x, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.ParseCompleted):
bardialog.hide()
pbar.hide()
continue
if isinstance(event, bb.event.DepTreeGenerated):
@@ -310,9 +310,13 @@ def main(server, eventHandler):
break
if shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")
server.runCommand(["stateStop"])
_, error = server.runCommand(["stateStop"])
if error:
print('Unable to cleanly stop: %s' % error)
if shutdown == 0:
print("\nKeyboard Interrupt, closing down...\n")
server.runCommand(["stateShutdown"])
_, error = server.runCommand(["stateShutdown"])
if error:
print('Unable to cleanly shutdown: %s' % error)
shutdown = shutdown + 1
pass


@@ -80,16 +80,19 @@ def main (server, eventHandler):
running_build.connect ("build-failed", running_build_failed_cb)
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
cmdline, error = server.runCommand(["getCmdLineAction"])
if error:
print("Error getting bitbake commandline: %s" % error)
return 1
elif not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
ret, error = server.runCommand(cmdline)
if error:
print("Error running command '%s': %s" % (cmdline, error))
return 1
ret = server.runCommand(cmdline['action'])
if ret != True:
print("Couldn't get default commandline! %s" % ret)
elif ret != True:
print("Error running command '%s': returned %s" % (cmdline, ret))
return 1
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)


@@ -30,7 +30,7 @@ try:
pygtk.require('2.0') # to be certain we don't have gtk+ 1.x !?!
gtkver = gtk.gtk_version
pygtkver = gtk.pygtk_version
if gtkver < (2, 20, 0) or pygtkver < (2, 21, 0):
if gtkver < (2, 18, 0) or pygtkver < (2, 16, 0):
sys.exit("%s,\nYou have Gtk+ %s and PyGtk %s." % (requirements,
".".join(map(str, gtkver)),
".".join(map(str, pygtkver))))

(Two binary image files changed; sizes 4.0 KiB → 4.5 KiB and 4.1 KiB → 4.5 KiB. Contents not shown.)


@@ -25,12 +25,7 @@ import sys
import xmlrpclib
import logging
import progressbar
import signal
import bb.msg
import time
import fcntl
import struct
import copy
from bb.ui import uihelper
logger = logging.getLogger("BitBake")
@@ -42,21 +37,8 @@ class BBProgress(progressbar.ProgressBar):
widgets = [progressbar.Percentage(), ' ', progressbar.Bar(), ' ',
progressbar.ETA()]
try:
self._resize_default = signal.getsignal(signal.SIGWINCH)
except:
self._resize_default = None
progressbar.ProgressBar.__init__(self, maxval, [self.msg + ": "] + widgets)
def _handle_resize(self, signum, frame):
progressbar.ProgressBar._handle_resize(self, signum, frame)
if self._resize_default:
self._resize_default(signum, frame)
def finish(self):
progressbar.ProgressBar.finish(self)
if self._resize_default:
signal.signal(signal.SIGWINCH, self._resize_default)
class NonInteractiveProgress(object):
fobj = sys.stdout
@@ -88,142 +70,55 @@ def pluralise(singular, plural, qty):
else:
return plural % qty
class InteractConsoleLogFilter(logging.Filter):
def __init__(self, tf, format):
self.tf = tf
self.format = format
def filter(self, record):
if record.levelno == self.format.NOTE and (record.msg.startswith("Running") or record.msg.startswith("recipe ")):
return False
self.tf.clearFooter()
return True
class TerminalFilter(object):
columns = 80
def sigwinch_handle(self, signum, frame):
self.columns = self.getTerminalColumns()
if self._sigwinch_default:
self._sigwinch_default(signum, frame)
def getTerminalColumns(self):
def ioctl_GWINSZ(fd):
try:
cr = struct.unpack('hh', fcntl.ioctl(fd, self.termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
cr = ioctl_GWINSZ(sys.stdout.fileno())
if not cr:
try:
fd = os.open(os.ctermid(), os.O_RDONLY)
cr = ioctl_GWINSZ(fd)
os.close(fd)
except:
pass
if not cr:
try:
cr = (env['LINES'], env['COLUMNS'])
except:
cr = (25, 80)
return cr[1]
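The column probe above amounts to the usual TIOCGWINSZ ioctl; a minimal standalone sketch of the same idea (assuming stdout is attached to a real terminal):

    import fcntl, struct, sys, termios

    # Ask the kernel for the window size of stdout; unpack returns (rows, columns).
    raw = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, b'1234')
    rows, cols = struct.unpack('hh', raw)
    print(cols)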
def __init__(self, main, helper, console, format):
self.main = main
self.helper = helper
self.cuu = None
self.stdinbackup = None
self.interactive = sys.stdout.isatty()
self.footer_present = False
self.lastpids = []
if not self.interactive:
return
try:
import curses
except ImportError:
sys.exit("FATAL: The knotty ui could not load the required curses python module.")
import termios
self.curses = curses
self.termios = termios
try:
fd = sys.stdin.fileno()
self.stdinbackup = termios.tcgetattr(fd)
new = copy.deepcopy(self.stdinbackup)
new[3] = new[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSADRAIN, new)
curses.setupterm()
self.ed = curses.tigetstr("ed")
if self.ed:
self.cuu = curses.tigetstr("cuu")
try:
self._sigwinch_default = signal.getsignal(signal.SIGWINCH)
signal.signal(signal.SIGWINCH, self.sigwinch_handle)
except:
pass
self.columns = self.getTerminalColumns()
except:
self.cuu = None
console.addFilter(InteractConsoleLogFilter(self, format))
def clearFooter(self):
if self.footer_present:
lines = self.footer_present
sys.stdout.write(self.curses.tparm(self.cuu, lines))
sys.stdout.write(self.curses.tparm(self.ed))
self.footer_present = False
return
def updateFooter(self):
if not self.cuu:
if not main.shutdown or not self.helper.needUpdate:
return
activetasks = self.helper.running_tasks
failedtasks = self.helper.failed_tasks
runningpids = self.helper.running_pids
if self.footer_present and (self.lastcount == self.helper.tasknumber_current) and (self.lastpids == runningpids):
return
if self.footer_present:
self.clearFooter()
if not self.helper.tasknumber_total or self.helper.tasknumber_current == self.helper.tasknumber_total:
if len(runningpids) == 0:
return
self.helper.getTasks()
tasks = []
for t in runningpids:
tasks.append("%s (pid %s)" % (activetasks[t]["title"], t))
if self.main.shutdown:
content = "Waiting for %s running tasks to finish:" % len(activetasks)
elif not len(activetasks):
content = "No currently running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
if main.shutdown:
print("Waiting for %s running tasks to finish:" % len(activetasks))
else:
content = "Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
print content
lines = 1 + int(len(content) / (self.columns + 1))
print("Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total))
for tasknum, task in enumerate(tasks):
content = "%s: %s" % (tasknum, task)
print content
lines = lines + 1 + int(len(content) / (self.columns + 1))
self.footer_present = lines
self.lastpids = runningpids[:]
self.lastcount = self.helper.tasknumber_current
print("%s: %s" % (tasknum, task))
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
return
def main(server, eventHandler, tf = TerminalFilter):
# Get values of variables which control our output
includelogs = server.runCommand(["getVariable", "BBINCLUDELOGS"])
loglines = server.runCommand(["getVariable", "BBINCLUDELOGS_LINES"])
consolelogfile = server.runCommand(["getVariable", "BB_CONSOLELOG"])
if sys.stdin.isatty() and sys.stdout.isatty():
log_exec_tty = True
else:
log_exec_tty = False
includelogs, error = server.runCommand(["getVariable", "BBINCLUDELOGS"])
if error:
logger.error("Unable to get the value of BBINCLUDELOGS variable: %s" % error)
return 1
loglines, error = server.runCommand(["getVariable", "BBINCLUDELOGS_LINES"])
if error:
logger.error("Unable to get the value of BBINCLUDELOGS_LINES variable: %s" % error)
return 1
consolelogfile, error = server.runCommand(["getVariable", "BB_CONSOLELOG"])
if error:
logger.error("Unable to get the value of BB_CONSOLELOG variable: %s" % error)
return 1
helper = uihelper.BBUIHelper()
@@ -233,26 +128,28 @@ def main(server, eventHandler, tf = TerminalFilter):
console.setFormatter(format)
logger.addHandler(console)
if consolelogfile:
bb.utils.mkdirhier(os.path.dirname(consolelogfile))
consolelog = logging.FileHandler(consolelogfile)
bb.msg.addDefaultlogFilter(consolelog)
consolelog.setFormatter(format)
logger.addHandler(consolelog)
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
cmdline, error = server.runCommand(["getCmdLineAction"])
if error:
logger.error("Unable to get bitbake commandline arguments: %s" % error)
return 1
elif not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
ret, error = server.runCommand(cmdline)
if error:
logger.error("Command '%s' failed: %s" % (cmdline, error))
return 1
ret = server.runCommand(cmdline['action'])
if ret != True:
print("Couldn't get default commandline! %s" % ret)
elif ret != True:
logger.error("Command '%s' failed: returned %s" % (cmdline, ret))
return 1
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
logger.error("XMLRPC Fault getting commandline:\n %s" % x)
return 1
parseprogress = None
@@ -279,20 +176,6 @@ def main(server, eventHandler, tf = TerminalFilter):
if not main.shutdown:
main.shutdown = 1
if isinstance(event, bb.event.LogExecTTY):
if log_exec_tty:
tries = event.retries
while tries:
print "Trying to run: %s" % event.prog
if os.system(event.prog) == 0:
break
time.sleep(event.sleep_delay)
tries -= 1
if tries:
continue
logger.warn(event.msg)
continue
if isinstance(event, logging.LogRecord):
if event.levelno >= format.ERROR:
errors = errors + 1
@@ -447,14 +330,19 @@ def main(server, eventHandler, tf = TerminalFilter):
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
import time
termfilter.clearFooter()
if main.shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")
server.runCommand(["stateStop"])
_, error = server.runCommand(["stateStop"])
if error:
logger.error("Unable to cleanly stop: %s" % error)
if main.shutdown == 0:
interrupted = True
print("\nKeyboard Interrupt, closing down...\n")
server.runCommand(["stateShutdown"])
interrupted = True
_, error = server.runCommand(["stateShutdown"])
if error:
logger.error("Unable to cleanly shutdown: %s" % error)
main.shutdown = main.shutdown + 1
pass

View File

@@ -0,0 +1,109 @@
#
# BitBake (No)TTY UI Implementation (v2)
#
# Handling output to TTYs or files (no TTY)
#
# Copyright (C) 2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb.ui import knotty
import logging
import sys
logger = logging.getLogger("BitBake")
class InteractConsoleLogFilter(logging.Filter):
def __init__(self, tf, format):
self.tf = tf
self.format = format
def filter(self, record):
if record.levelno == self.format.NOTE and (record.msg.startswith("Running") or record.msg.startswith("package ")):
return False
self.tf.clearFooter()
return True
class TerminalFilter2(object):
def __init__(self, main, helper, console, format):
self.main = main
self.helper = helper
self.cuu = None
self.stdinbackup = None
self.interactive = sys.stdout.isatty()
self.footer_present = False
self.lastpids = []
if not self.interactive:
return
import curses
import termios
import copy
self.curses = curses
self.termios = termios
try:
fd = sys.stdin.fileno()
self.stdinbackup = termios.tcgetattr(fd)
new = copy.deepcopy(self.stdinbackup)
new[3] = new[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSADRAIN, new)
curses.setupterm()
self.ed = curses.tigetstr("ed")
if self.ed:
self.cuu = curses.tigetstr("cuu")
except:
self.cuu = None
console.addFilter(InteractConsoleLogFilter(self, format))
def clearFooter(self):
if self.footer_present:
lines = self.footer_present
sys.stdout.write(self.curses.tparm(self.cuu, lines))
sys.stdout.write(self.curses.tparm(self.ed))
self.footer_present = False
def updateFooter(self):
if not self.cuu:
return
activetasks = self.helper.running_tasks
failedtasks = self.helper.failed_tasks
runningpids = self.helper.running_pids
if self.footer_present and (self.lastpids == runningpids):
return
if self.footer_present:
self.clearFooter()
if not activetasks:
return
lines = 1
tasks = []
for t in runningpids:
tasks.append("%s (pid %s)" % (activetasks[t]["title"], t))
if self.main.shutdown:
print("Waiting for %s running tasks to finish:" % len(activetasks))
else:
print("Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total))
for tasknum, task in enumerate(tasks):
print("%s: %s" % (tasknum, task))
lines = lines + 1
self.footer_present = lines
self.lastpids = runningpids[:]
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
def main(server, eventHandler):
bb.ui.knotty.main(server, eventHandler, TerminalFilter2)


@@ -47,13 +47,7 @@
from __future__ import division
import logging
import os, sys, itertools, time, subprocess
try:
import curses
except ImportError:
sys.exit("FATAL: The ncurses ui could not load the required curses python module.")
import os, sys, curses, itertools, time
import bb
import xmlrpclib
from bb import ui
@@ -236,15 +230,18 @@ class NCursesUI:
shutdown = 0
try:
cmdline = server.runCommand(["getCmdLineAction"])
cmdline, error = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return
elif not cmdline['action']:
print(cmdline['msg'])
elif error:
print("Error getting bitbake commandline: %s" % error)
return
ret = server.runCommand(cmdline['action'])
if ret != True:
ret, error = server.runCommand(cmdline)
if error:
print("Error running command '%s': %s" % (cmdline, error))
return
elif ret != True:
print("Couldn't get default commandlind! %s" % ret)
return
except xmlrpclib.Fault as x:
@@ -292,7 +289,7 @@ class NCursesUI:
# bb.error("log data follows (%s)" % logfile)
# number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
# if number_of_lines:
# subprocess.call('tail -n%s %s' % (number_of_lines, logfile), shell=True)
# os.system('tail -n%s %s' % (number_of_lines, logfile))
# else:
# f = open(logfile, "r")
# while True:
@@ -318,8 +315,6 @@ class NCursesUI:
if isinstance(event, bb.cooker.CookerExit):
exitflag = True
if isinstance(event, bb.event.LogExecTTY):
mw.appendText('WARN: ' + event.msg + '\n')
if helper.needUpdate:
activetasks, failedtasks = helper.getTasks()
taw.erase()
@@ -345,10 +340,14 @@ class NCursesUI:
exitflag = True
if shutdown == 1:
mw.appendText("Second Keyboard Interrupt, stopping...\n")
server.runCommand(["stateStop"])
_, error = server.runCommand(["stateStop"])
if error:
print("Unable to cleanly stop: %s" % error)
if shutdown == 0:
mw.appendText("Keyboard Interrupt, closing down...\n")
server.runCommand(["stateShutdown"])
_, error = server.runCommand(["stateShutdown"])
if error:
print("Unable to cleanly shutdown: %s" % error)
shutdown = shutdown + 1
pass


@@ -48,7 +48,7 @@ class BBUIHelper:
self.running_pids.remove(event.pid)
self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
self.needUpdate = True
if isinstance(event, bb.runqueue.runQueueTaskStarted) or isinstance(event, bb.runqueue.sceneQueueTaskStarted):
if isinstance(event, bb.runqueue.runQueueTaskStarted):
self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + 1
self.tasknumber_total = event.stats.total


@@ -26,7 +26,6 @@ import logging
import bb
import bb.msg
import multiprocessing
import fcntl
from commands import getstatusoutput
from contextlib import contextmanager
@@ -109,10 +108,130 @@ def vercmp(ta, tb):
r = vercmp_part(ra, rb)
return r
def vercmp_string(a, b):
ta = split_version(a)
tb = split_version(b)
return vercmp(ta, tb)
_package_weights_ = {"pre":-2, "p":0, "alpha":-4, "beta":-3, "rc":-1} # dicts are unordered
_package_ends_ = ["pre", "p", "alpha", "beta", "rc", "cvs", "bk", "HEAD" ] # so we need ordered list
def relparse(myver):
"""Parses the last elements of a version number into a triplet, that can
later be compared.
"""
number = 0
p1 = 0
p2 = 0
mynewver = myver.split('_')
if len(mynewver) == 2:
# an _package_weights_
number = float(mynewver[0])
match = 0
for x in _package_ends_:
elen = len(x)
if mynewver[1][:elen] == x:
match = 1
p1 = _package_weights_[x]
try:
p2 = float(mynewver[1][elen:])
except:
p2 = 0
break
if not match:
# normal number or number with letter at end
divider = len(myver)-1
if myver[divider:] not in "1234567890":
# letter at end
p1 = ord(myver[divider:])
number = float(myver[0:divider])
else:
number = float(myver)
else:
# normal number or number with letter at end
divider = len(myver)-1
if myver[divider:] not in "1234567890":
#letter at end
p1 = ord(myver[divider:])
number = float(myver[0:divider])
else:
number = float(myver)
return [number, p1, p2]
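To make the triplet format concrete, a few hand-traced relparse() results (read from the code above rather than taken from a running build, so treat them as illustrative):

    from bb.utils import relparse   # assuming bb/utils.py, as this hunk suggests

    relparse("2")       # -> [2.0, 0, 0]     plain number
    relparse("2a")      # -> [2.0, 97, 0]    trailing letter becomes its ordinal value
    relparse("1_rc2")   # -> [1.0, -1, 2.0]  "_rc" suffix weighted via _package_weights_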
__vercmp_cache__ = {}
def vercmp_string(val1, val2):
"""This takes two version strings and returns an integer to tell you whether
the versions are the same, val1>val2 or val2>val1.
"""
# quick short-circuit
if val1 == val2:
return 0
valkey = val1 + " " + val2
# cache lookup
try:
return __vercmp_cache__[valkey]
try:
return - __vercmp_cache__[val2 + " " + val1]
except KeyError:
pass
except KeyError:
pass
# consider 1_p2 vc 1.1
# after expansion will become (1_p2,0) vc (1,1)
# then 1_p2 is compared with 1 before 0 is compared with 1
# to solve the bug we need to convert it to (1,0_p2)
# by splitting _prepart part and adding it back _after_expansion
val1_prepart = val2_prepart = ''
if val1.count('_'):
val1, val1_prepart = val1.split('_', 1)
if val2.count('_'):
val2, val2_prepart = val2.split('_', 1)
# replace '-' by '.'
# FIXME: Is it needed? can val1/2 contain '-'?
val1 = val1.split("-")
if len(val1) == 2:
val1[0] = val1[0] + "." + val1[1]
val2 = val2.split("-")
if len(val2) == 2:
val2[0] = val2[0] + "." + val2[1]
val1 = val1[0].split('.')
val2 = val2[0].split('.')
# add back decimal point so that .03 does not become "3" !
for x in xrange(1, len(val1)):
if val1[x][0] == '0' :
val1[x] = '.' + val1[x]
for x in xrange(1, len(val2)):
if val2[x][0] == '0' :
val2[x] = '.' + val2[x]
# extend version numbers
if len(val2) < len(val1):
val2.extend(["0"]*(len(val1)-len(val2)))
elif len(val1) < len(val2):
val1.extend(["0"]*(len(val2)-len(val1)))
# add back _prepart tails
if val1_prepart:
val1[-1] += '_' + val1_prepart
if val2_prepart:
val2[-1] += '_' + val2_prepart
# The above code will extend version numbers out so they
# have the same number of digits.
for x in xrange(0, len(val1)):
cmp1 = relparse(val1[x])
cmp2 = relparse(val2[x])
for y in xrange(0, 3):
myret = cmp1[y] - cmp2[y]
if myret != 0:
__vercmp_cache__[valkey] = myret
return myret
__vercmp_cache__[valkey] = 0
return 0
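A few hand-traced vercmp_string() comparisons, again derived from reading the code rather than from executing it:

    from bb.utils import vercmp_string   # assuming bb/utils.py, as this hunk suggests

    vercmp_string("1.0", "1.0")    # 0: equal strings short-circuit
    vercmp_string("1.0", "1.1")    # negative: 1.0 sorts before 1.1
    vercmp_string("1_p2", "1.1")   # negative: the "_p2" tail is re-attached after
                                   # expansion, so it compares as (1, 0_p2) vs (1, 1)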
def explode_deps(s):
"""
@@ -138,7 +257,7 @@ def explode_deps(s):
#r[-1] += ' ' + ' '.join(j)
return r
def explode_dep_versions2(s):
def explode_dep_versions(s):
    """
    Take an RDEPENDS style string of format:
    "DEPEND1 (optional version) DEPEND2 (optional version) ..."
@@ -147,70 +266,24 @@ def explode_dep_versions2(s):
    r = {}
    l = s.replace(",", "").split()
    lastdep = None
    lastcmp = ""
    lastver = ""
    incmp = False
    inversion = False
    for i in l:
        if i[0] == '(':
            incmp = True
            i = i[1:].strip()
            if not i:
                continue

        if incmp:
            incmp = False
            inversion = True
            # This list is based on behavior and supported comparisons from deb, opkg and rpm.
            #
            # Even though =<, <<, ==, !=, =>, and >> may not be supported,
            # we list each possibly valid item.
            # The build system is responsible for validation of what it supports.
            if i.startswith(('<=', '=<', '<<', '==', '!=', '>=', '=>', '>>')):
                lastcmp = i[0:2]
                i = i[2:]
            elif i.startswith(('<', '>', '=')):
                lastcmp = i[0:1]
                i = i[1:]
            else:
                # This is an unsupported case!
                lastcmp = (i or "")
                i = ""
            i.strip()
            if not i:
                continue
            lastver = i[1:] or ""
            #j = []
        elif inversion and i.endswith(')'):
            inversion = False
            lastver = lastver + " " + (i[:-1] or "")
            r[lastdep] = lastver
        elif not inversion:
            r[i] = None
            lastdep = i
            lastver = ""
        elif inversion:
            lastver = lastver + " " + i

        if inversion:
            if i.endswith(')'):
                i = i[:-1] or ""
                inversion = False
                if lastver and i:
                    lastver += " "
            if i:
                lastver += i
                if lastdep not in r:
                    r[lastdep] = []
                r[lastdep].append(lastcmp + " " + lastver)
            continue

        #if not inversion:
        lastdep = i
        lastver = ""
        lastcmp = ""
        if not (i in r and r[i]):
            r[lastdep] = []
    return r

def explode_dep_versions(s):
    r = explode_dep_versions2(s)
    for d in r:
        if not r[d]:
            r[d] = None
            continue
        if len(r[d]) > 1:
            bb.warn("explode_dep_versions(): Item %s appeared in dependency string '%s' multiple times with different values. explode_dep_versions cannot cope with this." % (d, s))
        r[d] = r[d][0]
    return r
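A rough sketch of how the two variants differ, assuming the 1.3-side implementation shown above: explode_dep_versions2 returns a list of constraints per dependency, and the wrapper collapses that to a single string or None.

s = "virtual/libc (>= 2.13) bash"
explode_dep_versions2(s)   # {'virtual/libc': ['>= 2.13'], 'bash': []}
explode_dep_versions(s)    # {'virtual/libc': '>= 2.13', 'bash': None}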
def join_deps(deps, commasep=True):
@@ -220,11 +293,7 @@ def join_deps(deps, commasep=True):
    result = []
    for dep in deps:
        if deps[dep]:
            if isinstance(deps[dep], list):
                for v in deps[dep]:
                    result.append(dep + " (" + v + ")")
            else:
                result.append(dep + " (" + deps[dep] + ")")
            result.append(dep + " (" + deps[dep] + ")")
        else:
            result.append(dep)
    if commasep:
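join_deps is roughly the inverse operation. A hedged example (output order follows dict iteration, and the final ", ".join happens in the part of the function not shown in this hunk):

join_deps({'bash': None, 'virtual/libc': '>= 2.13'})
# e.g. 'bash, virtual/libc (>= 2.13)'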
@@ -266,23 +335,20 @@ def better_compile(text, file, realfile, mode = "exec"):
        for line in body:
            logger.error(line)
        e = bb.BBHandledException(e)
        raise e
        raise
def better_exec(code, context, text = None, realfile = "<code>"):
def better_exec(code, context, text, realfile = "<code>"):
    """
    Similar to better_compile, better_exec will
    print the lines that are responsible for the
    error.
    """
    import bb.parse
    if not text:
        text = code
    if not hasattr(code, "co_filename"):
        code = better_compile(code, realfile, realfile)
    try:
        exec(code, _context, context)
    except Exception as e:
    except Exception:
        (t, value, tb) = sys.exc_info()

        if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
@@ -307,32 +373,22 @@ def better_exec(code, context, text = None, realfile = "<code>"):
        logger.error("The code that was being executed was:")
        _print_trace(textarray, linefailed)
        logger.error("[From file: '%s', lineno: %s, function: %s]", tbextract[0][0], tbextract[0][1], tbextract[0][2])
        logger.error("(file: '%s', lineno: %s, function: %s)", tbextract[0][0], tbextract[0][1], tbextract[0][2])

        # See if this is a function we constructed and has calls back into other functions in
        # "text". If so, try and improve the context of the error by diving down the trace
        level = 0
        nexttb = tb.tb_next
        while nexttb is not None and (level+1) < len(tbextract):
        while nexttb is not None:
            if tbextract[level][0] == tbextract[level+1][0] and tbextract[level+1][2] == tbextract[level][0]:
                _print_trace(textarray, tbextract[level+1][1])
                logger.error("[From file: '%s', lineno: %s, function: %s]", tbextract[level+1][0], tbextract[level+1][1], tbextract[level+1][2])
            elif "d" in context and tbextract[level+1][2]:
                d = context["d"]
                functionname = tbextract[level+1][2]
                text = d.getVar(functionname, True)
                if text:
                    _print_trace(text.split('\n'), tbextract[level+1][1])
                    logger.error("[From file: '%s', lineno: %s, function: %s]", tbextract[level+1][0], tbextract[level+1][1], tbextract[level+1][2])
                else:
                    break
                logger.error("(file: '%s', lineno: %s, function: %s)", tbextract[level+1][0], tbextract[level+1][1], tbextract[level+1][2])
            else:
                break
            nexttb = tb.tb_next
            level = level + 1
        e = bb.BBHandledException(e)
        raise e
        raise

def simple_exec(code, context):
    exec(code, _context, context)
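A minimal sketch of how these helpers are driven (assumes it runs where the bb module and the module-level _context are available, i.e. inside bitbake itself; the recipe path is a placeholder):

code_text = "result = 40 + 2\n"
comp = better_compile(code_text, "example_func", "/path/to/recipe.bb")   # placeholder path
ctx = {}
better_exec(comp, ctx, code_text, "/path/to/recipe.bb")
print(ctx["result"])   # 42; on failure, the offending lines of code_text are logged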
@@ -486,6 +542,8 @@ def preserved_envvars():
        'BB_PRESERVE_ENV',
        'BB_ENV_WHITELIST',
        'BB_ENV_EXTRAWHITE',
        'LANG',
        '_',
    ]
    return v + preserved_envvars_exported() + preserved_envvars_exported_interactive()
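A hypothetical consumer (not bitbake code) showing how a whitelist like this is typically applied to the environment:

import os

def keep_whitelisted_env(whitelist):
    # Hypothetical helper: keep only variables named on the whitelist.
    return dict((k, v) for k, v in os.environ.items() if k in whitelist)

clean_env = keep_whitelisted_env(preserved_envvars())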
@@ -783,8 +841,6 @@ def which(path, item, direction = 0):
    for p in paths:
        next = os.path.join(p, item)
        if os.path.exists(next):
            if not os.path.isabs(next):
                next = os.path.abspath(next)
            return next
    return ""
@@ -816,7 +872,3 @@ def contains(variable, checkvalues, truevalue, falsevalue, d):

def cpu_count():
    return multiprocessing.cpu_count()

def nonblockingfd(fd):
    fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
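A small sketch of nonblockingfd in use: after the fcntl call, a read on an empty pipe raises EAGAIN (on Linux) instead of blocking. Imports are shown for completeness; utils.py already imports os and fcntl.

import os, fcntl, errno

r, w = os.pipe()
nonblockingfd(r)
try:
    os.read(r, 1)
except OSError as e:
    assert e.errno == errno.EAGAIN   # nothing to read yet, but we did not block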


@@ -1,15 +1,12 @@
# This is a single Makefile to handle all generated Yocto Project documents.
# The Makefile needs to live in the documents directory and all figures used
# in any manuals must be .PNG files and live in the individual book's figures
# directory as well as in the figures directory for the mega-manual.
# Note that the figures for the Yocto Project Development Manual
# differ depending on the BRANCH being built.
# directory. Note that the figures for the Yocto Project Development Manual
# differ between the 'master' and 'edison' branches.
#
# The Makefile has these targets:
#
# pdf: generates a PDF version of a manual. Not valid for the Quick Start
# or the mega-manual (single, large HTML file comprised of all
# Yocto Project manuals).
# html: generates an HTML version of a manual.
# tarball: creates a tarball for the doc files.
# validate: validates
@@ -17,22 +14,18 @@
# clean: removes files
#
# The Makefile generates an HTML and PDF version of every document except the
# Yocto Project Quick Start and the single, HTML mega-manual, which is comprised
# of all the individual Yocto Project manuals. These two manuals are in HTML
# form only. The variable DOC indicates the folder name for a given manual. The
# variable VER represents the distro version of the Yocto Release for which the
# manuals are being generated. The variable BRANCH is used to indicate the
# branch (edison or denzil) and is used only when DOC=dev-manual or
# DOC=mega-manual. If you do not specify a BRANCH, the default branch used
# will be for the latest Yocto Project release. If you build for either
# edison or denzil, you must use BRANCH. You do not need to use BRANCH for
# any release beyond denzil.
# Yocto Project Quick Start. The Quick Start is in HTML form only. The variable
# DOC is used to indicate the folder name for a given manual. The variable
# VER represents the distro version of the Yocto Release for which the manuals
# are being generated. The variable BRANCH is used to indicate the 'edison'
# branch and is used only when DOC=dev-manual (making the YP Development
# Manual).
#
# To build a manual, you must invoke Makefile with the DOC argument. If you
# are going to publish the manual, then you must invoke Makefile with both the
# DOC and the VER argument. Furthermore, if you are building or publishing
# the edison or denzil versions of the Yocto Project Development Manual or
# the mega-manual, you must also use the BRANCH argument.
# To build the HTML and PDF versions of the manual you must invoke the Makefile
# with the DOC argument. If you are going to publish the manual then you
# must invoke the Makefile with both the DOC and the VER argument.
# If you are building the 'edison' version of the YP Development Manual then
# you must use the DOC and BRANCH arguments.
#
# Examples:
#
@@ -40,43 +33,39 @@
# make DOC=yocto-project-qs
# make pdf DOC=poky-ref-manual
# make DOC=dev-manual BRANCH=edison
# make DOC=mega-manual BRANCH=denzil
#
# The first example generates the HTML and PDF versions of the BSP Guide.
# The second example generates the HTML version only of the Quick Start. Note that
# the Quick Start only has an HTML version available. The third example generates
# both the PDF and HTML versions of the Yocto Project Reference Manual. The
# fourth example generates both the PDF and HTML 'edison' versions of the YP
# Development Manual. The last example generates the HTML version of the
# mega-manual and uses the 'denzil' branch when choosing figures for the
# tarball of figures. Any example that does not use the BRANCH argument
# builds the current version of the manual set.
# last example generates both the PDF and HTML 'edison' versions of the YP
# Development Manual.
#
# Use the publish target to push the generated manuals to the Yocto Project
# website. All files needed for the manual's HTML form are pushed as well as the
# PDF version (if applicable).
# Examples:
#
# make publish DOC=bsp-guide VER=1.3
# make publish DOC=adt-manual VER=1.3
# make publish DOC=bsp-guide VER=1.2
# make publish DOC=adt-manual VER=1.2
# make publish DOC=dev-manual VER=1.1.1 BRANCH=edison
# make publish DOC=dev-manual VER=1.2 BRANCH=denzil
# make publish DOC=dev-manual VER=1.2
#
# The first example publishes the 1.3 version of both the PDF and HTML versions of
# the BSP Guide. The second example publishes the 1.3 version of both the PDF and
# The first example publishes the 1.2 version of both the PDF and HTML versions of
# the BSP Guide. The second example publishes the 1.2 version of both the PDF and
# HTML versions of the ADT Manual. The third example publishes the PDF and HTML
# 'edison' versions of the YP Development Manual. The fourth example publishes
# the PDF and HTML 'denzil' versions of the YP Development Manual.
# 'edison' versions of the YP Development Manual. Finally, the last example publishes
# the PDF and HTML 'master' versions of the YP Development Manual.
#
ifeq ($(DOC),bsp-guide)
XSLTOPTS = --stringparam html.stylesheet bsp-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = bsp-style.css bsp-guide.html bsp-guide.pdf figures/bsp-title.png
TARFILES = style.css bsp-guide.html bsp-guide.pdf figures/bsp-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
@@ -84,7 +73,7 @@ STYLESHEET = $(DOC)/*.css
endif
ifeq ($(DOC),dev-manual)
XSLTOPTS = --stringparam html.stylesheet dev-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
@@ -98,7 +87,7 @@ ALLPREQ = html pdf tarball
#
ifeq ($(BRANCH),edison)
TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
TARFILES = style.css dev-manual.html dev-manual.pdf \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-example-repos-edison.png \
@@ -107,7 +96,7 @@ TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
figures/source-repos.png figures/yp-download.png \
figures/wip.png
else ifeq ($(BRANCH),denzil)
TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
TARFILES = style.css dev-manual.html dev-manual.pdf \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-example-repos-denzil.png \
@@ -116,11 +105,14 @@ TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
figures/source-repos.png figures/yp-download.png \
figures/wip.png
else
TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
TARFILES = style.css dev-manual.html dev-manual.pdf \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
figures/source-repos.png figures/yp-download.png
figures/kernel-example-repos-denzil.png \
figures/kernel-overview-1.png figures/kernel-overview-2.png \
figures/kernel-overview-3-denzil.png \
figures/source-repos.png figures/yp-download.png \
figures/wip.png
endif
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
@@ -130,76 +122,24 @@ STYLESHEET = $(DOC)/*.css
endif
ifeq ($(DOC),yocto-project-qs)
XSLTOPTS = --stringparam html.stylesheet qs-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--xinclude
ALLPREQ = html tarball
TARFILES = yocto-project-qs.html qs-style.css figures/yocto-environment.png figures/building-an-image.png figures/using-a-pre-built-image.png figures/yocto-project-transp.png
TARFILES = yocto-project-qs.html style.css figures/yocto-environment.png figures/building-an-image.png figures/using-a-pre-built-image.png figures/yocto-project-transp.png
MANUALS = $(DOC)/$(DOC).html
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif
ifeq ($(DOC),mega-manual)
XSLTOPTS = --stringparam html.stylesheet mega-style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html tarball
ifeq ($(BRANCH),edison)
TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png figures/building-an-image.png \
figures/using-a-pre-built-image.png \
figures/poky-title.png \
figures/adt-title.png figures/bsp-title.png \
figures/kernel-title.png figures/kernel-architecture-overview.png \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-example-repos-edison.png \
figures/kernel-overview-1.png figures/kernel-overview-2.png \
figures/kernel-overview-3-edison.png \
figures/source-repos.png figures/yp-download.png \
figures/wip.png
else ifeq ($(BRANCH),denzil)
TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png figures/building-an-image.png \
figures/using-a-pre-built-image.png \
figures/poky-title.png \
figures/adt-title.png figures/bsp-title.png \
figures/kernel-title.png figures/kernel-architecture-overview.png \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-example-repos-denzil.png \
figures/kernel-overview-1.png figures/kernel-overview-2.png \
figures/kernel-overview-3-denzil.png \
figures/source-repos.png figures/yp-download.png \
figures/wip.png
else
TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png figures/building-an-image.png \
figures/using-a-pre-built-image.png \
figures/poky-title.png \
figures/adt-title.png figures/bsp-title.png \
figures/kernel-title.png figures/kernel-architecture-overview.png \
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
figures/source-repos.png figures/yp-download.png
endif
MANUALS = $(DOC)/$(DOC).html
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif
ifeq ($(DOC),poky-ref-manual)
XSLTOPTS = --stringparam html.stylesheet ref-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = poky-ref-manual.html ref-style.css figures/poky-title.png
TARFILES = poky-ref-manual.html style.css figures/poky-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
@@ -207,28 +147,28 @@ endif
ifeq ($(DOC),adt-manual)
XSLTOPTS = --stringparam html.stylesheet adt-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = adt-manual.html adt-manual.pdf adt-style.css figures/adt-title.png
TARFILES = adt-manual.html adt-manual.pdf style.css figures/adt-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif
ifeq ($(DOC),kernel-manual)
XSLTOPTS = --stringparam html.stylesheet kernel-style.css \
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = kernel-manual.html kernel-manual.pdf kernel-style.css figures/kernel-title.png figures/kernel-architecture-overview.png
TARFILES = kernel-manual.html kernel-manual.pdf style.css figures/kernel-title.png figures/kernel-architecture-overview.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
@@ -246,47 +186,17 @@ all: $(ALLPREQ)
pdf:
ifeq ($(DOC),yocto-project-qs)
@echo " "
@echo "ERROR: You cannot generate a yocto-project-qs PDF file."
@echo "ERROR: You cannot generate a PDF file for the Yocto Project Quick Start"
@echo " "
else ifeq ($(DOC),mega-manual)
@echo " "
@echo "ERROR: You cannot generate a mega-manual PDF file."
@echo " "
else
cd $(DOC); ../tools/poky-docbook-to-pdf $(DOC).xml ../template; cd ..
endif
html:
ifeq ($(DOC),mega-manual)
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
@echo " "
@echo "******** Building "$(DOC)
@echo " "
cd $(DOC); xsltproc $(XSLTOPTS) -o $(DOC).html $(DOC)-customization.xsl $(DOC).xml; cd ..
@echo " "
@echo "******** Using mega-manual.sed to process external links"
@echo " "
cd $(DOC); sed -f ../tools/mega-manual.sed < mega-manual.html > mega-output.html; cd ..
@echo " "
@echo "******** Cleaning up transient file mega-output.html"
@echo " "
cd $(DOC); rm mega-manual.html; mv mega-output.html mega-manual.html; cd ..
else
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
@echo " "
@echo "******** Building "$(DOC)
@echo " "
cd $(DOC); xsltproc $(XSLTOPTS) -o $(DOC).html $(DOC)-customization.xsl $(DOC).xml; cd ..
endif
tarball: html
@echo " "
@echo "******** Creating Tarball of document files"
@echo " "
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..
validate:
@@ -294,18 +204,8 @@ validate:
publish:
@if test -f $(DOC)/$(DOC).html; \
then \
echo " "; \
echo "******** Publishing "$(DOC)".html"; \
echo " "; \
scp -r $(MANUALS) $(STYLESHEET) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
cd $(DOC); scp -r $(FIGURES) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
else \
echo " "; \
echo $(DOC)".html missing. Generate the file first then try again."; \
echo " "; \
fi
scp -r $(MANUALS) $(STYLESHEET) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)
cd $(DOC); scp -r $(FIGURES) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)
clean:
rm -f $(MANUALS)


@@ -41,22 +41,8 @@ Folders exist for individual manuals as follows:
* kernel-manual - The Yocto Project Kernel Architecture and Use Manual
* poky-ref-manual - The Yocto Project Reference Manual
* yocto-project-qs - The Yocto Project Quick Start
* mega-manual - The aggregated manual comprised of all YP manuals and guides
Each folder is self-contained regarding content and figures. Note that there
is a sed file needed to process the links of the mega-manual. The sed file
is located in the tools directory. Also note that the figures folder in the
mega-manual directory contains duplicates of all the figures found in the individual
directories for all YP manuals and guides.
If you want to find HTML versions of the Yocto Project manuals on the web,
go to http://www.yoctoproject.org and click on the "Documentation" tab. From
there you have access to archived documentation from previous releases, current
documentation for the latest release, and "Docs in Progress" for the release
currently being developed.
In general, the Yocto Project site (http://www.yoctoproject.org) is a great
reference for both information and downloads.
Each folder is self-contained regarding content and figures.
Makefile
========
@@ -85,10 +71,7 @@ Contains various templates, fonts, and some old PNG files.
tools
=====
Contains a tool to convert the DocBook files to PDF format. This folder also
contains the mega-manual.sed file, which is used by Makefile to process
cross-references from within the manual that normally go to an external
manual.
Contains a tool to convert the DocBook files to PDF format.


@@ -8,9 +8,9 @@
<para>
Recall that earlier the manual discussed how to use an existing toolchain
tarball that had been installed into <filename>/opt/poky</filename>,
which is outside of the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
(see the section "<link linkend='using-an-existing-toolchain-tarball'>Using a Cross-Toolchain Tarball</link>").
which is outside of the Yocto Project build tree
(see the section "<link linkend='using-an-existing-toolchain-tarball'>Using an Existing
Toolchain Tarball</link>").
And, that sourcing your architecture-specific environment setup script
initializes a suitable cross-toolchain development environment.
During the setup, locations for the compiler, QEMU scripts, QEMU binary,
@@ -21,7 +21,7 @@
for example, <filename>configure.sh</filename> can find pre-generated
test results for tests that need target hardware on which to run.
These conditions allow you to easily use the toolchain outside of the
OpenEmbedded build environment on both autotools-based projects and
Yocto Project build environment on both autotools-based projects and
Makefile-based projects.
</para>
@@ -32,7 +32,7 @@
For an Autotools-based project, you can use the cross-toolchain by just
passing the appropriate host option to <filename>configure.sh</filename>.
The host option you use is derived from the name of the environment setup
script in <filename>/opt/poky</filename> resulting from installation of the
script in <filename>/opt/poky</filename> resulting from unpacking the
cross-toolchain tarball.
For example, the host option for an ARM-based target that uses the GNU EABI
is <filename>armv5te-poky-linux-gnueabi</filename>.


@@ -0,0 +1,736 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<chapter id='adt-eclipse'>
<title>Working Within Eclipse</title>
<para>
The Eclipse IDE is a popular development environment and it fully supports
development using the Yocto Project.
When you install and configure the Eclipse Yocto Project Plug-in into
the Eclipse IDE, you maximize your Yocto Project design experience.
Installing and configuring the Plug-in results in an environment that
has extensions specifically designed to let you more easily develop software.
These extensions allow for cross-compilation, deployment, and execution of
your output into a QEMU emulation session.
You can also perform cross-debugging and profiling.
The environment also supports a suite of tools that allows you to perform
remote profiling, tracing, collection of power data, collection of
latency data, and collection of performance data.
</para>
<para>
This section describes how to install and configure the Eclipse IDE
Yocto Plug-in and how to use it to develop applications with the Yocto Project.
</para>
<section id='setting-up-the-eclipse-ide'>
<title>Setting Up the Eclipse IDE</title>
<para>
To develop within the Eclipse IDE, you need to do the following:
<orderedlist>
<listitem><para>Install the optimal version of the Eclipse IDE.</para></listitem>
<listitem><para>Configure the Eclipse IDE.</para></listitem>
<listitem><para>Install the Eclipse Yocto Plug-in.</para></listitem>
<listitem><para>Configure the Eclipse Yocto Plug-in.</para></listitem>
</orderedlist>
</para>
<section id='installing-eclipse-ide'>
<title>Installing the Eclipse IDE</title>
<para>
It is recommended that you have the Indigo 3.7.2 version of the
Eclipse IDE installed on your development system.
If you don't have this version, you can find it at
<ulink url='&ECLIPSE_MAIN_URL;'></ulink>.
From that site, choose the Eclipse Classic version particular to your development
host.
This version contains the Eclipse Platform, the Java Development
Tools (JDT), and the Plug-in Development Environment.
</para>
<para>
Once you have downloaded the tarball, extract it into a clean
directory.
For example, the following commands unpack and install the Eclipse IDE
tarball found in the <filename>Downloads</filename> area
into a clean directory using the default name <filename>eclipse</filename>:
<literallayout class='monospaced'>
$ cd ~
$ tar -xzvf ~/Downloads/eclipse-SDK-3.7.1-linux-gtk-x86_64.tar.gz
</literallayout>
</para>
<para>
One issue exists that you need to be aware of regarding the Java
Virtual machine's garbage collection (GC) process.
The GC process does not clean up the permanent generation
space (PermGen).
This space stores metadata descriptions of classes.
The default value is set too small and it could trigger an
out-of-memory error such as the following:
<literallayout class='monospaced'>
Java.lang.OutOfMemoryError: PermGen space
</literallayout>
</para>
<para>
This error causes the application to hang.
</para>
<para>
To fix this issue, you can use the <filename>-vmargs</filename>
option when you start Eclipse to increase the size of the permanent generation space:
<literallayout class='monospaced'>
eclipse -vmargs -XX:PermSize=256M
</literallayout>
</para>
</section>
<section id='configuring-the-eclipse-ide'>
<title>Configuring the Eclipse IDE</title>
<para>
Before installing and configuring the Eclipse Yocto Plug-in, you need to configure
the Eclipse IDE.
Follow these general steps to configure Eclipse:
<orderedlist>
<listitem><para>Start the Eclipse IDE.</para></listitem>
<listitem><para>Make sure you are in your Workbench and select
"Install New Software" from the "Help" pull-down menu.
</para></listitem>
<listitem><para>Select <filename>indigo - &ECLIPSE_INDIGO_URL;</filename>
from the "Work with:" pull-down menu.</para></listitem>
<listitem><para>Expand the box next to <filename>Programming Languages</filename>
and select the <filename>Autotools Support for CDT (incubation)</filename>
and <filename>C/C++ Development Tools</filename> boxes.</para></listitem>
<listitem><para>Expand the box next to "Linux Tools" and select the
"LTTng - Linux Tracing Toolkit(incubation)" boxes.</para></listitem>
<listitem><para>Complete the installation and restart the Eclipse IDE.</para></listitem>
<listitem><para>After the Eclipse IDE restarts and from the Workbench, select
"Install New Software" from the "Help" pull-down menu.</para></listitem>
<listitem><para>Click the
"Available Software Sites" link.</para></listitem>
<listitem><para>Check the box next to
<filename>&ECLIPSE_UPDATES_URL;</filename>
and click "OK".</para></listitem>
<listitem><para>Select <filename>&ECLIPSE_UPDATES_URL;</filename>
from the "Work with:" pull-down menu.</para></listitem>
<listitem><para>Check the box next to <filename>TM and RSE Main Features</filename>.
</para></listitem>
<listitem><para>Expand the box next to <filename>TM and RSE Optional Add-ons</filename>
and select every item except <filename>RSE Unit Tests</filename> and
<filename>RSE WinCE Services (incubation)</filename>.</para></listitem>
<listitem><para>Complete the installation and restart the Eclipse IDE.</para></listitem>
<listitem><para>If necessary, select
"Install New Software" from the "Help" pull-down menu so you can click the
"Available Software Sites" link again.</para></listitem>
<listitem><para>After clicking "Available Software Sites", check the box next to
<filename>http://download.eclipse.org/tools/cdt/releases/indigo</filename>
and click "OK".</para></listitem>
<listitem><para>Select <filename>&ECLIPSE_INDIGO_CDT_URL;</filename>
from the "Work with:" pull-down menu.</para></listitem>
<listitem><para>Check the box next to <filename>CDT Main Features</filename>.
</para></listitem>
<listitem><para>Expand the box next to <filename>CDT Optional Features</filename>
and select <filename>C/C++ Remote Launch</filename> and
<filename>Target Communication Framework (incubation)</filename>.</para></listitem>
<listitem><para>Complete the installation and restart the Eclipse IDE.</para></listitem>
</orderedlist>
</para>
</section>
<section id='installing-the-eclipse-yocto-plug-in'>
<title>Installing or Accessing the Eclipse Yocto Plug-in</title>
<para>
You can install the Eclipse Yocto Plug-in into the Eclipse IDE
one of two ways: use the Yocto Project update site to install the pre-built plug-in,
or build and install the plug-in from the latest source code.
If you don't want to permanently install the plug-in but just want to try it out
within the Eclipse environment, you can import the plug-in project from the
Yocto Project source repositories.
</para>
<section id='new-software'>
<title>Installing the Pre-built Plug-in from the Yocto Project Eclipse Update Site</title>
<para>
To install the Eclipse Yocto Plug-in from the update site,
follow these steps:
<orderedlist>
<listitem><para>Start up the Eclipse IDE.</para></listitem>
<listitem><para>In Eclipse, select "Install New Software" from the "Help" menu.</para></listitem>
<listitem><para>Click "Add..." in the "Work with:" area.</para></listitem>
<listitem><para>Enter
<filename>&ECLIPSE_DL_PLUGIN_URL;</filename>
in the URL field and provide a meaningful name in the "Name" field.</para></listitem>
<listitem><para>Click "OK" to have the entry added to the "Work with:"
drop-down list.</para></listitem>
<listitem><para>Select the entry for the plug-in from the "Work with:" drop-down
list.</para></listitem>
<listitem><para>Check the box next to <filename>Development tools and SDKs for Yocto Linux</filename>.
</para></listitem>
<listitem><para>Complete the remaining software installation steps and
then restart the Eclipse IDE to finish the installation of the plug-in.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='zip-file-method'>
<title>Installing the Plug-in Using the Latest Source Code</title>
<para>
To install the Eclipse Yocto Plug-in from the latest source code, follow these steps:
<orderedlist>
<listitem><para>Open a shell and create a Git repository with:
<literallayout class='monospaced'>
$ git clone git://git.yoctoproject.org/eclipse-poky yocto-eclipse
</literallayout>
For this example, the repository is named
<filename>~/yocto-eclipse</filename>.</para></listitem>
<listitem><para>Locate the <filename>build.sh</filename> script in the
Git repository you created in the previous step.
The script is located in the <filename>scripts</filename> directory.</para></listitem>
<listitem><para>Be sure to set and export the <filename>ECLIPSE_HOME</filename> environment
variable to the top-level directory in which you installed the Indigo
version of Eclipse.
For example, if your Eclipse directory is <filename>$HOME/eclipse</filename>,
use the following:
<literallayout class='monospaced'>
$ export ECLIPSE_HOME=$HOME/eclipse
</literallayout></para></listitem>
<listitem><para>Run the <filename>build.sh</filename> script and provide the
name of the Git branch along with the Yocto Project release you are
using.
Here is an example that uses the <filename>master</filename> Git repository
and the <filename>1.1M4</filename> release:
<literallayout class='monospaced'>
$ scripts/build.sh master 1.1M4
</literallayout>
After running the script, the file
<filename>org.yocto.sdk-&lt;release&gt;-&lt;date&gt;-archive.zip</filename>
is in the current directory.</para></listitem>
<listitem><para>If necessary, start the Eclipse IDE and be sure you are in the
Workbench.</para></listitem>
<listitem><para>Select "Install New Software" from the "Help" pull-down menu.
</para></listitem>
<listitem><para>Click "Add".</para></listitem>
<listitem><para>Provide anything you want in the "Name" field.</para></listitem>
<listitem><para>Click "Archive" and browse to the ZIP file you built
in step four.
This ZIP file should not be "unzipped", and must be the
<filename>*archive.zip</filename> file created by running the
<filename>build.sh</filename> script.</para></listitem>
<listitem><para>Check the box next to the new entry in the installation window and complete
the installation.</para></listitem>
<listitem><para>Restart the Eclipse IDE if necessary.</para></listitem>
</orderedlist>
</para>
<para>
At this point you should be able to configure the Eclipse Yocto Plug-in as described in the
"<link linkend='configuring-the-eclipse-yocto-plug-in'>Configuring the Eclipse Yocto Plug-in</link>"
section.</para>
</section>
<section id='yocto-project-source'>
<title>Importing the Plug-in Project into the Eclipse Environment</title>
<para>
Importing the Eclipse Yocto Plug-in project from the Yocto Project source repositories
is useful when you want to try out the latest plug-in from the tip of the plug-in's
development tree.
It is important to understand when you import the plug-in you are not installing
it into the Eclipse application.
Rather, you are importing the project and just using it.
To import the plug-in project, follow these steps:
<orderedlist>
<listitem><para>Open a shell and create a Git repository with:
<literallayout class='monospaced'>
$ git clone git://git.yoctoproject.org/eclipse-poky yocto-eclipse
</literallayout>
For this example, the repository is named
<filename>~/yocto-eclipse</filename>.</para></listitem>
<listitem><para>In Eclipse, select "Import" from the "File" menu.</para></listitem>
<listitem><para>Expand the "General" box and select "existing projects into workspace"
and then click "Next".</para></listitem>
<listitem><para>Select the root directory and browse to
<filename>~/yocto-eclipse/plugins</filename>.</para></listitem>
<listitem><para>Three plug-ins exist: "org.yocto.bc.ui", "org.yocto.sdk.ide", and
"org.yocto.sdk.remotetools".
Select and import all of them.</para></listitem>
</orderedlist>
</para>
<para>
The left navigation pane in the Eclipse application shows the default projects.
Right-click on one of these projects and run it as an Eclipse application.
This brings up a second instance of Eclipse IDE that has the Yocto Plug-in.
</para>
</section>
</section>
<section id='configuring-the-eclipse-yocto-plug-in'>
<title>Configuring the Eclipse Yocto Plug-in</title>
<para>
Configuring the Eclipse Yocto Plug-in involves setting the Cross
Compiler options and the Target options.
The configurations you choose become the default settings for all projects.
You do have opportunities to change them later when
you configure the project (see the following section).
</para>
<para>
To start, you need to do the following from within the Eclipse IDE:
<itemizedlist>
<listitem><para>Choose <filename>Windows -&gt; Preferences</filename> to display
the <filename>Preferences</filename> Dialog</para></listitem>
<listitem><para>Click <filename>Yocto ADT</filename></para></listitem>
</itemizedlist>
</para>
<section id='configuring-the-cross-compiler-options'>
<title>Configuring the Cross-Compiler Options</title>
<para>
To configure the Cross Compiler Options, you must select the type of toolchain,
point to the toolchain, specify the sysroot location, and select the target architecture.
<itemizedlist>
<listitem><para><emphasis>Selecting the Toolchain Type:</emphasis>
Choose between <filename>Standalone pre-built toolchain</filename>
and <filename>Build system derived toolchain</filename> for Cross
Compiler Options.
<itemizedlist>
<listitem><para><emphasis>
<filename>Standalone Pre-built Toolchain:</filename></emphasis>
Select this mode when you are using a stand-alone cross-toolchain.
For example, suppose you are an application developer and do not
need to build a target image.
Instead, you just want to use an architecture-specific toolchain on an
existing kernel and target root filesystem.
</para></listitem>
<listitem><para><emphasis>
<filename>Build System Derived Toolchain:</filename></emphasis>
Select this mode if the cross-toolchain has been installed and built
as part of the Yocto Project build tree.
When you select <filename>Build system derived toolchain</filename>,
you are using the toolchain bundled
inside the Yocto Project build tree.
</para></listitem>
</itemizedlist>
</para></listitem>
<listitem><para><emphasis>Point to the Toolchain:</emphasis>
If you are using a stand-alone pre-built toolchain, you should be pointing to the
<filename>&YOCTO_ADTPATH_DIR;</filename> directory.
This is the location for toolchains installed by the ADT Installer or by hand.
Sections "<link linkend='configuring-and-running-the-adt-installer-script'>Configuring
and Running the ADT Installer Script</link>" and
"<link linkend='using-an-existing-toolchain-tarball'>Using a Cross-Toolchain
Tarball</link>" describe two ways to install
a stand-alone cross-toolchain in the
<filename>/opt/poky</filename> directory.
<note>It is possible to install a stand-alone cross-toolchain in a directory
other than <filename>/opt/poky</filename>.
However, doing so is discouraged.</note></para>
<para>If you are using a system-derived toolchain, the path you provide
for the <filename>Toolchain Root Location</filename>
field is the Yocto Project's build directory.
See section "<link linkend='using-the-toolchain-from-within-the-build-tree'>Using
BitBake and the Yocto Project Build Tree</link>" for
information on how to install the toolchain into the Yocto
Project build tree.</para></listitem>
<listitem><para><emphasis>Specify the Sysroot Location:</emphasis>
This location is where the root filesystem for the
target hardware is created on the development system by the ADT Installer.
The QEMU user-space tools, the
NFS boot process, and the cross-toolchain all use the sysroot location.
</para></listitem>
<listitem><para><emphasis>Select the Target Architecture:</emphasis>
The target architecture is the type of hardware you are
going to use or emulate.
Use the pull-down <filename>Target Architecture</filename> menu to make
your selection.
The pull-down menu should have the supported architectures.
If the architecture you need is not listed in the menu, you
will need to build the image.
See the "<ulink url='&YOCTO_DOCS_QS_URL;#building-image'>Building an Image</ulink>" section
of The Yocto Project Quick Start for more information.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='configuring-the-target-options'>
<title>Configuring the Target Options</title>
<para>
You can choose to emulate hardware using the QEMU emulator, or you
can choose to run your image on actual hardware.
<itemizedlist>
<listitem><para><emphasis><filename>QEMU:</filename></emphasis> Select this option if
you will be using the QEMU emulator.
If you are using the emulator, you also need to locate the kernel
and specify any custom options.</para>
<para>If you selected <filename>Build system derived toolchain</filename>,
the target kernel you built will be located in the
Yocto Project build tree in <filename>tmp/deploy/images</filename> directory.
If you selected <filename>Standalone pre-built toolchain</filename>, the
pre-built image you downloaded is located
in the directory you specified when you downloaded the image.</para>
<para>Most custom options are for advanced QEMU users to further
customize their QEMU instance.
These options are specified between paired angled brackets.
Some options must be specified outside the brackets.
In particular, the options <filename>serial</filename>,
<filename>nographic</filename>, and <filename>kvm</filename> must all
be outside the brackets.
Use the <filename>man qemu</filename> command to get help on all the options
and their use.
The following is an example:
<literallayout class='monospaced'>
serial &lt;-m 256 -full-screen&gt;
</literallayout></para>
<para>
Regardless of the mode, Sysroot is already defined as part of the
Cross Compiler Options configuration in the
<filename>Sysroot Location:</filename> field.</para></listitem>
<listitem><para><emphasis><filename>External HW:</filename></emphasis> Select this option
if you will be using actual hardware.</para></listitem>
</itemizedlist>
</para>
<para>
Click the <filename>OK</filename> button to save your plug-in configurations.
</para>
</section>
</section>
</section>
<section id='creating-the-project'>
<title>Creating the Project</title>
<para>
You can create two types of projects: Autotools-based, or Makefile-based.
This section describes how to create Autotools-based projects from within
the Eclipse IDE.
For information on creating Makefile-based projects in a terminal window, see the section
"<link linkend='using-the-command-line'>Using the Command Line</link>".
</para>
<para>
To create a project based on a Yocto template and then display the source code,
follow these steps:
<orderedlist>
<listitem><para>Select <filename>File -&gt; New -&gt; Project</filename>.</para></listitem>
<listitem><para>Double click <filename>C/C++</filename>.</para></listitem>
<listitem><para>Double click <filename>C Project</filename> to create the project.</para></listitem>
<listitem><para>Expand <filename>Yocto ADT Project</filename>.</para></listitem>
<listitem><para>Select <filename>Hello World ANSI C Autotools Project</filename>.
This is an Autotools-based project based on a Yocto Project template.</para></listitem>
<listitem><para>Put a name in the <filename>Project name:</filename> field.
Do not use hyphens as part of the name.</para></listitem>
<listitem><para>Click <filename>Next</filename>.</para></listitem>
<listitem><para>Add information in the <filename>Author</filename> and
<filename>Copyright notice</filename> fields.</para></listitem>
<listitem><para>Be sure the <filename>License</filename> field is correct.</para></listitem>
<listitem><para>Click <filename>Finish</filename>.</para></listitem>
<listitem><para>If the "open perspective" prompt appears, click "Yes" so that you
are in the C/C++ perspective.</para></listitem>
<listitem><para>The left-hand navigation pane shows your project.
You can display your source by double clicking the project's source file.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='configuring-the-cross-toolchains'>
<title>Configuring the Cross-Toolchains</title>
<para>
The earlier section, "<link linkend='configuring-the-eclipse-yocto-plug-in'>Configuring
the Eclipse Yocto Plug-in</link>", sets up the default project
configurations.
You can override these settings for a given project by following these steps:
<orderedlist>
<listitem><para>Select <filename>Project -&gt; Change Yocto Project Settings</filename>:
This selection brings up the <filename>Project Yocto Settings</filename> Dialog
and allows you to make changes specific to an individual project.
</para>
<para>By default, the Cross Compiler Options and Target Options for a project
are inherited from settings you provide using the <filename>Preferences</filename>
Dialog as described earlier
in the "<link linkend='configuring-the-eclipse-yocto-plug-in'>Configuring the Eclipse
Yocto Plug-in</link>" section.
The <filename>Project Yocto Settings</filename>
Dialog allows you to override those default settings
for a given project.</para></listitem>
<listitem><para>Make your configurations for the project and click "OK".</para></listitem>
<listitem><para>Select <filename>Project -&gt; Reconfigure Project</filename>:
This selection reconfigures the project by running
<filename>autogen.sh</filename> in the workspace for your project.
The script also runs <filename>libtoolize</filename>, <filename>aclocal</filename>,
<filename>autoconf</filename>, <filename>autoheader</filename>,
<filename>automake --a</filename>, and
<filename>./configure</filename>.
Click on the <filename>Console</filename> tab beneath your source code to
see the results of reconfiguring your project.</para></listitem>
</orderedlist>
</para>
</section>
<section id='building-the-project'>
<title>Building the Project</title>
<para>
To build the project, select <filename>Project -&gt; Build Project</filename>.
The console should update and you can note the cross-compiler you are using.
</para>
</section>
<section id='starting-qemu-in-user-space-nfs-mode'>
<title>Starting QEMU in User Space NFS Mode</title>
<para>
To start the QEMU emulator from within Eclipse, follow these steps:
<orderedlist>
<listitem><para>Expose the <filename>Run -&gt; External Tools</filename> menu.
Your image should appear as a selectable menu item.
</para></listitem>
<listitem><para>Select your image from the menu to launch the
emulator in a new window.</para></listitem>
<listitem><para>If needed, enter your host root password in the shell window at the prompt.
This sets up a <filename>Tap 0</filename> connection needed for running in user-space
NFS mode.</para></listitem>
<listitem><para>Wait for QEMU to launch.</para></listitem>
<listitem><para>Once QEMU launches, you can begin operating within that
environment.
For example, you could determine the IP Address
for the user-space NFS by using the <filename>ifconfig</filename> command.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='deploying-and-debugging-the-application'>
<title>Deploying and Debugging the Application</title>
<para>
Once the QEMU emulator is running the image, using the Eclipse IDE
you can deploy your application and use the emulator to perform debugging.
Follow these steps to deploy the application.
<orderedlist>
<listitem><para>Select <filename>Run -&gt; Debug Configurations...</filename></para></listitem>
<listitem><para>In the left area, expand <filename>C/C++ Remote Application</filename>.</para></listitem>
<listitem><para>Locate your project and select it to bring up a new
tabbed view in the <filename>Debug Configurations</filename> Dialog.</para></listitem>
<listitem><para>Enter the absolute path into which you want to deploy
the application.
Use the <filename>Remote Absolute File Path for C/C++ Application:</filename> field.
For example, enter <filename>/usr/bin/&lt;programname&gt;</filename>.</para></listitem>
<listitem><para>Click on the <filename>Debugger</filename> tab to see the cross-tool debugger
you are using.</para></listitem>
<listitem><para>Click on the <filename>Main</filename> tab.</para></listitem>
<listitem><para>Create a new connection to the QEMU instance
by clicking on <filename>new</filename>.</para></listitem>
<listitem><para>Select <filename>TCF</filename>, which means Target Communication
Framework.</para></listitem>
<listitem><para>Click <filename>Next</filename>.</para></listitem>
<listitem><para>Clear out the <filename>host name</filename> field and enter the IP Address
determined earlier.</para></listitem>
<listitem><para>Click <filename>Finish</filename> to close the
<filename>New Connections</filename> Dialog.</para></listitem>
<listitem><para>Use the drop-down menu now in the <filename>Connection</filename> field and pick
the IP Address you entered.</para></listitem>
<listitem><para>Click <filename>Debug</filename> to bring up a login screen
and login.</para></listitem>
<listitem><para>Accept the debug perspective.</para></listitem>
</orderedlist>
</para>
</section>
<section id='running-user-space-tools'>
<title>Running User-Space Tools</title>
<para>
As mentioned earlier in the manual, several tools exist that enhance
your development experience.
These tools are aids in developing and debugging applications and images.
You can run these user-space tools from within the Eclipse IDE through the
<filename>YoctoTools</filename> menu.
</para>
<para>
Once you pick a tool, you need to configure it for the remote target.
Every tool needs to have the connection configured.
You must select an existing TCF-based RSE connection to the remote target.
If one does not exist, click <filename>New</filename> to create one.
</para>
<para>
Here are some specifics about the remote tools:
<itemizedlist>
<listitem><para><emphasis><filename>OProfile</filename>:</emphasis> Selecting this tool causes
the <filename>oprofile-server</filename> on the remote target to launch on
the local host machine.
The <filename>oprofile-viewer</filename> must be installed on the local host machine and the
<filename>oprofile-server</filename> must be installed on the remote target,
respectively, in order to use this tool.
You must compile and install the <filename>oprofile-viewer</filename> from the source code
on your local host machine.
Furthermore, in order to convert the target's sample format data into a form that the
host can use, you must have <filename>oprofile</filename> version 0.9.4 or
greater installed on the host.</para>
<para>You can locate both the viewer and server from
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/oprofileui/'></ulink>.
<note>The <filename>oprofile-server</filename> is installed by default on
the <filename>core-image-sato-sdk</filename> image.</note></para></listitem>
<listitem><para><emphasis><filename>Lttng-ust</filename>:</emphasis> Selecting this tool runs
<filename>usttrace</filename> on the remote target, transfers the output data back
to the local host machine, and uses the <filename>lttng</filename> Eclipse plug-in to
graphically display the output.
For information on how to use <filename>lttng</filename> to trace an application, see
<ulink url='http://lttng.org/files/ust/manual/ust.html'></ulink>.</para>
<para>For <filename>Application</filename>, you must supply the absolute path name of the
application to be traced by user mode <filename>lttng</filename>.
For example, typing <filename>/path/to/foo</filename> triggers
<filename>usttrace /path/to/foo</filename> on the remote target to trace the
program <filename>/path/to/foo</filename>.</para>
<para><filename>Argument</filename> is passed to <filename>usttrace</filename>
running on the remote target.</para>
<para>Before you use the <filename>lttng-ust</filename> tool, you need to set up
the <filename>lttng</filename> Eclipse plug-in and create a <filename>lttng</filename>
project.
Do the following:
<orderedlist>
<listitem><para>Follow these
<ulink url='http://wiki.eclipse.org/Linux_Tools_Project/LTTng#Downloading_and_installing_the_LTTng_parser_library'>instructions</ulink>
to download and install the <filename>lttng</filename> parser library.
</para></listitem>
<listitem><para>Select <filename>Window -> Open Perspective -> Other</filename>
and then select <filename>LTTng</filename>.</para></listitem>
<listitem><para>Click <filename>OK</filename> to change the Eclipse perspective
into the <filename>LTTng</filename> perspective.</para></listitem>
<listitem><para>Create a new <filename>LTTng</filename> project by selecting
<filename>File -> New -> Project</filename>.</para></listitem>
<listitem><para>Choose <filename>LTTng -> LTTng Project</filename>.</para></listitem>
<listitem><para>Click <filename>YoctoTools -> lttng-ust</filename> to start user mode
<filename>lttng</filename> on the remote target.</para></listitem>
</orderedlist></para>
<para>After the output data has been transferred from the remote target back to the local
host machine, new traces will be imported into the selected <filename>LTTng</filename> project.
Then you can go to the <filename>LTTng</filename> project, right click the imported
trace, and set the trace type as the <filename>LTTng</filename> kernel trace.
Finally, right click the imported trace and select <filename>Open</filename>
to display the data graphically.</para></listitem>
<listitem><para><emphasis><filename>PowerTOP</filename>:</emphasis> Selecting this tool runs
<filename>powertop</filename> on the remote target machine and displays the results in a
new view called <filename>powertop</filename>.</para>
<para><filename>Time to gather data(sec):</filename> is the time passed in seconds before data
is gathered from the remote target for analysis.</para>
<para><filename>show pids in wakeups list:</filename> corresponds to the
<filename>-p</filename> argument
passed to <filename>powertop</filename>.</para></listitem>
<listitem><para><emphasis><filename>LatencyTOP and Perf</filename>:</emphasis>
<filename>latencytop</filename> identifies system latency, while
<filename>perf</filename> monitors the system's
performance counter registers.
Selecting either of these tools causes an RSE terminal view to appear
from which you can run the tools.
Both tools refresh the entire screen to display results while they run.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='customizing-an-image-using-a-bitbake-commander-project-and-hob'>
<title>Customizing an Image Using a BitBake Commander Project and Hob</title>
<para>
Within Eclipse, you can create a Yocto BitBake Commander project,
edit the metadata, and then use the
<ulink url='&YOCTO_HOME_URL;/projects/hob'>Hob</ulink> to build a customized
image all within one IDE.
</para>
<section id='creating-the-yocto-bitbake-commander-project'>
<title>Creating the Yocto BitBake Commander Project</title>
<para>
To create a Yocto BitBake Commander project, follow these steps:
<orderedlist>
<listitem><para>Select <filename>Window -> Open Perspective -> Other</filename>
and then choose <filename>Bitbake Commander</filename>.</para></listitem>
<listitem><para>Click <filename>OK</filename> to change the Eclipse perspective into the
Bitbake Commander perspective.</para></listitem>
<listitem><para>Select <filename>File -> New -> Project</filename> to create a new Yocto
Bitbake Commander project.</para></listitem>
<listitem><para>Choose <filename>Yocto Project Bitbake Commander -> New Yocto Project</filename>
and click <filename>Next</filename>.</para></listitem>
<listitem><para>Enter the Project Name and choose the Project Location.
The Yocto project's metadata files will be put under the directory
<filename>&lt;project_location&gt;/&lt;project_name&gt;</filename>.
If that directory does not exist, you need to check
the "Clone from Yocto Git Repository" box, which would execute a
<filename>git clone</filename> command to get the Yocto project's metadata files.
</para></listitem>
<listitem><para>Select <filename>Finish</filename> to create the project.</para></listitem>
</orderedlist>
</para>
</section>
<section id='editing-the-metadata-files'>
<title>Editing the Metadata Files</title>
<para>
After you create the Yocto Bitbake Commander project, you can modify the metadata files
by opening them in the project.
When editing recipe files (<filename>.bb</filename> files), you can view BitBake
variable values and information by hovering the mouse pointer over the variable name and
waiting a few seconds.
</para>
<para>
To edit the metadata, follow these steps:
<orderedlist>
<listitem><para>Select your Yocto Bitbake Commander project.</para></listitem>
<listitem><para>Select <filename>File -> New -> Yocto BitBake Commander -> BitBake Recipe</filename>
to open a new recipe wizard.</para></listitem>
<listitem><para>Point to your source by filling in the "SRC_URL" field.
For example, to add a recipe such as those described in the
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-source-files'>Yocto Project Source Files</ulink>,
you could fill in the "SRC_URL" field as follows:
<literallayout class='monospaced'>
ftp://ftp.gnu.org/gnu/m4/m4-1.4.9.tar.gz
</literallayout></para></listitem>
<listitem><para>Click "Populate" to calculate the archive md5, sha256,
license checksum values and to auto-generate the recipe filename.</para></listitem>
<listitem><para>Fill in the "Description" field.</para></listitem>
<listitem><para>Be sure values for all required fields exist.</para></listitem>
<listitem><para>Click <filename>Finish</filename>.</para></listitem>
</orderedlist>
</para>
</section>
<section id='buiding-and-customizing-the-image'>
<title>Building and Customizing the Image</title>
<para>
To build and customize the image in Eclipse, follow these steps:
<orderedlist>
<listitem><para>Select your Yocto Bitbake Commander project.</para></listitem>
<listitem><para>Select <filename>Project -> Launch HOB</filename>.</para></listitem>
<listitem><para>Enter the build directory where you want to put your final images.</para></listitem>
<listitem><para>Click <filename>OK</filename> to launch Hob.</para></listitem>
<listitem><para>Use Hob to customize and build your own images.
For information on Hob, see the
<ulink url='&YOCTO_HOME_URL;/projects/hob'>Hob Project Page</ulink> on the
Yocto Project website.</para></listitem>
</orderedlist>
</para>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->


@@ -3,61 +3,54 @@
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<chapter id='adt-intro'>
<title>Introduction</title>
<title>Application Development Toolkit (ADT) User's Guide</title>
<para>
Welcome to the Yocto Project Application Developer's Guide.
This manual provides information that lets you begin developing applications
using the Yocto Project.
Welcome to the Application Development Toolkit User's Guide. This manual provides
information that lets you get going with the ADT to develop projects using the Yocto
Project.
</para>
<para>
The Yocto Project provides an application development environment based on
an Application Development Toolkit (ADT) and the availability of stand-alone
cross-development toolchains and other tools.
This manual describes the ADT and how you can configure and install it,
how to access and use the cross-development toolchains, how to
customize the development packages installation,
how to use command line development for both Autotools-based and Makefile-based projects,
and an introduction to the Eclipse Yocto Plug-in.
</para>
<section id='adt-intro-section'>
<title>The Application Development Toolkit (ADT)</title>
<section id='book-intro'>
<title>Introducing the Application Development Toolkit (ADT)</title>
<para>
Part of the Yocto Project development solution is an Application Development
Toolkit (ADT).
The ADT provides you with a custom-built, cross-development
platform suited for developing a user-targeted product application.
Fundamentally, the ADT consists of an architecture-specific cross-toolchain and
a matching sysroot that are both built by the Yocto Project build system Poky.
The toolchain and sysroot are based on a metadata configuration and extensions,
which allows you to cross-develop on the host machine for the target.
</para>
<para>
Fundamentally, the ADT consists of the following:
<itemizedlist>
<listitem><para>An architecture-specific cross-toolchain and matching
sysroot both built by the OpenEmbedded build system, which uses Poky.
The toolchain and sysroot are based on a metadata configuration and extensions,
which allows you to cross-develop on the host machine for the target hardware.
</para></listitem>
<listitem><para>The Eclipse IDE Yocto Plug-in.</para></listitem>
<listitem><para>The Quick EMUlator (QEMU), which lets you simulate target hardware.
</para></listitem>
<listitem><para>Various user-space tools that greatly enhance your application
development experience.</para></listitem>
</itemizedlist>
Additionally, to provide an effective development platform, the Yocto Project
makes available and suggests other tools you can use with the ADT.
These other tools include the Eclipse IDE Yocto Plug-in, an emulator (QEMU),
and various user-space tools that greatly enhance your development experience.
</para>
<para>
The resulting combination of the architecture-specific cross-toolchain and sysroot
along with these additional tools yields a custom-built, cross-development platform
for a user-targeted product.
</para>
</section>
<section id='adt-components'>
<title>ADT Components</title>
<para>
This section provides a brief description of what comprises the ADT.
</para>
<section id='the-cross-toolchain'>
<title>The Cross-Toolchain</title>
<para>
The cross-toolchain consists of a cross-compiler, cross-linker, and cross-debugger
that are used to develop user-space applications for targeted hardware.
This toolchain is created either by running the ADT Installer script, a toolchain installer
script, or through a
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink> that
is based on your metadata
This toolchain is created either by running the ADT Installer script or
through a Yocto Project build tree that is based on your metadata
configuration or extension for your targeted device.
The cross-toolchain works with a matching target sysroot.
</para>
@@ -70,38 +63,11 @@
The matching target sysroot contains needed headers and libraries for generating
binaries that run on the target architecture.
The sysroot is based on the target root filesystem image that is built by
the OpenEmbedded build system Poky and uses the same metadata configuration
the Yocto Project's build system Poky and uses the same metadata configuration
used to build the cross-toolchain.
</para>
</section>
<section id='eclipse-overview'>
<title>Eclipse Yocto Plug-in</title>
<para>
The Eclipse IDE is a popular development environment and it fully supports
development using the Yocto Project.
When you install and configure the Eclipse Yocto Project Plug-in into
the Eclipse IDE, you maximize your Yocto Project experience.
Installing and configuring the Plug-in results in an environment that
has extensions specifically designed to let you more easily develop software.
These extensions allow for cross-compilation, deployment, and execution of
your output into a QEMU emulation session.
You can also perform cross-debugging and profiling.
The environment also supports a suite of tools that allows you to perform
remote profiling, tracing, collection of power data, collection of
latency data, and collection of performance data.
</para>
<para>
For information about the application development workflow that uses the Eclipse
IDE and for a detailed example of how to install and configure the Eclipse
Yocto Project Plug-in, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#adt-eclipse'>Working Within Eclipse</ulink>" section
of the Yocto Project Development Manual.
</para>
</section>
<section id='the-qemu-emulator'>
<title>The QEMU Emulator</title>
@@ -113,10 +79,8 @@
<listitem><para>If you use the ADT Installer script to install ADT, you can
specify whether or not to install QEMU.</para></listitem>
<listitem><para>If you have downloaded a Yocto Project release and unpacked
it to create a
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink> and
you have sourced
the environment setup script, QEMU is installed and automatically
it to create a Yocto Project file structure and you have sourced
the Yocto Project environment setup script, QEMU is installed and automatically
available.</para></listitem>
<listitem><para>If you have installed the cross-toolchain
tarball and you have sourced the toolchain's setup environment script, QEMU
@@ -143,7 +107,7 @@
<listitem><para><emphasis>PowerTOP:</emphasis> Helps you determine what
software is using the most power.
You can find out more about PowerTOP at
<ulink url='https://01.org/powertop/'></ulink>.</para></listitem>
<ulink url='http://www.linuxpowertop.org/'></ulink>.</para></listitem>
<listitem><para><emphasis>OProfile:</emphasis> A system-wide profiler for Linux
systems that is capable of profiling all running code at low overhead.
You can find out more about OProfile at
@@ -156,7 +120,7 @@
<listitem><para><emphasis>SystemTap:</emphasis> A free software infrastructure
that simplifies information gathering about a running Linux system.
This information helps you diagnose performance or functional problems.
SystemTap is not available as a user-space tool through the Eclipse IDE Yocto Plug-in.
SystemTap is not available as a user-space tool through the Yocto Eclipse IDE Plug-in.
See <ulink url='http://sourceware.org/systemtap'></ulink> for more information
on SystemTap.</para></listitem>
<listitem><para><emphasis>Lttng-ust:</emphasis> A User-space Tracer designed to

View File

@@ -2,7 +2,7 @@
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<book id='adt-manual' lang='en'
<book id='adt-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
@@ -10,10 +10,10 @@
<mediaobject>
<imageobject>
<imagedata fileref='figures/adt-title.png'
format='SVG'
<imagedata fileref='figures/adt-title.png'
format='SVG'
align='left' scalefit='1' width='100%'/>
</imageobject>
</imageobject>
</mediaobject>
<title></title>
@@ -50,9 +50,14 @@
<revremark>Released with the Yocto Project 1.2 Release.</revremark>
</revision>
<revision>
<revnumber>1.3</revnumber>
<date>October 2012</date>
<revremark>Released with the Yocto Project 1.3 Release.</revremark>
<revnumber>1.2.1</revnumber>
<date>July 2012</date>
<revremark>Released with the Yocto Project 1.2.1 Release.</revremark>
</revision>
<revision>
<revnumber>1.2.2</revnumber>
<date>January 2013</date>
<revremark>Released with the Yocto Project 1.2.2 Release.</revremark>
</revision>
</revhistory>
@@ -63,13 +68,15 @@
<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
Permission is granted to copy, distribute and/or modify this document
under the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as
published by Creative Commons.
</para>
<note>
Due to production processes, there could be differences between the Yocto Project
documentation bundled in the release tarball and the
<ulink url='&YOCTO_DOCS_ADT_URL;'>Yocto Project Application Developer's Guide</ulink> on
documentation bundled in the release tarball and the
<ulink url='&YOCTO_DOCS_ADT_URL;'>
Application Developer's Toolkit (ADT) User's Guide</ulink> on
the <ulink url='&YOCTO_HOME_URL;'>Yocto Project</ulink> website.
For the latest version of this manual, see the manual on the website.
</note>
@@ -84,6 +91,8 @@
<xi:include href="adt-package.xml"/>
<xi:include href="adt-eclipse.xml"/>
<xi:include href="adt-command.xml"/>
<!-- <index id='index'>
@@ -92,6 +101,6 @@
-->
</book>
<!--
vim: expandtab tw=80 ts=4
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -17,7 +17,7 @@
<title>Package Management Systems</title>
<para>
The OpenEmbedded build system supports the generation of sysroot files using
The Yocto Project supports the generation of sysroot files using
three different Package Management Systems (PMS):
<itemizedlist>
<listitem><para><emphasis>OPKG:</emphasis> A less well known PMS whose use
@@ -28,7 +28,7 @@
<listitem><para><emphasis>RPM:</emphasis> A more widely known PMS intended for GNU/Linux
distributions.
This PMS works with files packaged in an <filename>.rpm</filename> format.
The build system currently installs through this PMS by default.
The Yocto Project currently installs through this PMS by default.
See <ulink url='http://en.wikipedia.org/wiki/RPM_Package_Manager'></ulink>
for more information about RPM.</para></listitem>
<listitem><para><emphasis>Debian:</emphasis> The PMS for Debian-based systems
@@ -45,8 +45,7 @@
<para>
Whichever PMS you are using, you need to be sure that the
<ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></ulink>
variable in the <filename>conf/local.conf</filename>
<filename>PACKAGE_CLASSES</filename> variable in the <filename>conf/local.conf</filename>
file is set to reflect that system.
The first value you choose for the variable specifies the package file format for the root
filesystem at sysroot.
@@ -56,8 +55,7 @@
<note>
For build performance information related to the PMS, see
<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-package'>Packaging - <filename>package*.bbclass</filename></ulink>
in the Yocto Project Reference Manual.
<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-package'>Packaging - <filename>package*.bbclass</filename></ulink> in The Yocto Project Reference Manual.
</note>
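<para>
For reference, a typical <filename>conf/local.conf</filename> setting that selects RPM
as the primary package format, with ipk packages also generated, looks like the
following (adjust the value to match the PMS you have chosen):
<literallayout class='monospaced'>
PACKAGE_CLASSES ?= "package_rpm package_ipk"
</literallayout>
</para>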
<para>
@@ -77,8 +75,7 @@
</para>
<para>
Next, source the environment setup script found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
Next, source the environment setup script found in the Yocto Project files.
Follow that by setting up the installation destination to point to your
sysroot as <filename>&lt;sysroot_dir&gt;</filename>.
Finally, have an OPKG configuration file <filename>&lt;conf_file&gt;</filename>

View File

@@ -4,40 +4,25 @@
<chapter id='adt-prepare'>
<title>Preparing for Application Development</title>
<title>Preparing to Use the Application Development Toolkit (ADT)</title>
<para>
In order to develop applications, you need to set up your host development system.
Several ways exist that allow you to install cross-development tools, QEMU, the
Eclipse Yocto Plug-in, and other tools.
This chapter describes how to prepare for application development.
In order to use the ADT, you must install it, <filename>source</filename> a script to set up the
environment, and be sure both the kernel and filesystem image specific to the target architecture
exist.
This chapter describes how to be sure you meet the ADT requirements.
</para>
<section id='installing-the-adt'>
<title>Installing the ADT and Toolchains</title>
<title>Installing the ADT</title>
<para>
The following list describes installation methods that set up varying degrees of tool
availability on your system.
Regardless of the installation method you choose,
you must <filename>source</filename> the cross-toolchain
environment setup script before you use a toolchain.
The following list describes how you can install the ADT, which includes the cross-toolchain.
Regardless of the installation you choose, you must <filename>source</filename> the cross-toolchain
environment setup script before you use the toolchain.
See the "<link linkend='setting-up-the-cross-development-environment'>Setting Up the
Cross-Development Environment</link>" section for more information.
</para>
<note>
<para>Avoid mixing installation methods when installing toolchains for different architectures.
For example, avoid using the ADT Installer to install some toolchains and then hand-installing
cross-development toolchains by running the toolchain installer for different architectures.
Mixing installation methods can result in situations where the ADT Installer becomes
unreliable and might not install the toolchain.</para>
<para>If you must mix installation methods, you might avoid problems by deleting
<filename>/var/lib/opkg</filename>, thus purging the <filename>opkg</filename> package
metadata.</para>
</note>
<para>
Cross-Development Environment</link>"
section for more information.
<itemizedlist>
<listitem><para><emphasis>Use the ADT Installer Script:</emphasis>
This method is the recommended way to install the ADT because it
@@ -45,15 +30,14 @@
For example, you can configure the installation to install the QEMU emulator
and the user-space NFS, specify which root filesystem profiles to download,
and define the target sysroot location.</para></listitem>
<listitem><para><emphasis>Use an Existing Toolchain:</emphasis>
<listitem><para><emphasis>Use an Existing Toolchain Tarball:</emphasis>
Using this method, you select and download an architecture-specific
toolchain installer and then run the script to hand-install the toolchain.
toolchain tarball and then hand-install the toolchain.
If you use this method, you just get the cross-toolchain and QEMU - you do not
get any of the other mentioned benefits had you run the ADT Installer script.</para></listitem>
<listitem><para><emphasis>Use the Toolchain from within the Build Directory:</emphasis>
If you already have a
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
you can build the cross-toolchain within the directory.
<listitem><para><emphasis>Use the Toolchain from within a Yocto Project Build Tree:</emphasis>
If you already have a Yocto Project build tree, you can build the cross-toolchain
within the tree.
However, like the previous method mentioned, you only get the cross-toolchain and QEMU - you
do not get any of the other benefits without taking separate steps.</para></listitem>
</itemizedlist>
@@ -76,22 +60,22 @@
<ulink url='&YOCTO_DL_URL;/releases'>Index of Releases</ulink>, specifically
at
<ulink url='&YOCTO_ADTINSTALLER_DL_URL;'></ulink>.
Or, you can use BitBake to generate the tarball inside the existing
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
Or, you can use BitBake to generate the tarball inside the existing Yocto Project
build tree.
</para>
<para>
If you use BitBake to generate the ADT Installer tarball, you must
<filename>source</filename> the environment setup script
(<filename>&OE_INIT_FILE;</filename>) located
in the Source Directory before running the <filename>bitbake</filename>
<filename>source</filename> the Yocto Project environment setup script
(<filename>oe-init-build-env</filename>) located
in the Yocto Project file structure before running the <filename>bitbake</filename>
command that creates the tarball.
</para>
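<para>
Stripped down to its essentials, that sequence looks like the following
(a sketch; it assumes the installer target is named <filename>adt-installer</filename>
and that the Source Directory is already in place):
<literallayout class='monospaced'>
$ cd ~/yocto-project/&lt;source-directory&gt;
$ source &OE_INIT_FILE;
$ bitbake adt-installer
</literallayout>
</para>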
<para>
The following example commands download the Poky tarball, set up the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
set up the environment while also creating the default Build Directory,
The following example commands download the Yocto Project release tarball, set up the Yocto
Project files structure, set up the environment while also creating the
default Yocto Project build tree,
and run the <filename>bitbake</filename> command that results in the tarball
<filename>~/yocto-project/build/tmp/deploy/sdk/adt_installer.tar.bz2</filename>:
<literallayout class='monospaced'>
@@ -152,7 +136,7 @@
or not to install the emulator QEMU.</para></listitem>
<listitem><para><filename>YOCTOADT_NFS_UTIL</filename>: Indicates whether
or not to install user-mode NFS.
If you plan to use the Eclipse IDE Yocto plug-in against QEMU,
If you plan to use the Yocto Eclipse IDE plug-in against QEMU,
you should install NFS.
<note>To boot QEMU images using our userspace NFS server, you need
to be running <filename>portmap</filename> or <filename>rpcbind</filename>.
@@ -182,12 +166,8 @@
<para>
After you have configured the <filename>adt_installer.conf</filename> file,
run the installer using the following command.
Be sure that you are not trying to use cross-compilation tools.
When you run the installer, the environment must use a
host <filename>gcc</filename>:
run the installer using the following command:
<literallayout class='monospaced'>
$ cd ~/adt-installer
$ ./adt_installer
</literallayout>
</para>
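<para>
Because the installer must run with the host's native <filename>gcc</filename> rather
than a cross-compiler, a quick sanity check of the environment beforehand can help
(not a required step):
<literallayout class='monospaced'>
$ which gcc     # should report the host compiler, for example /usr/bin/gcc
$ echo $CC      # should be empty or point to the host compiler
</literallayout>
</para>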
@@ -196,14 +176,11 @@
The ADT Installer requires the <filename>libtool</filename> package to complete.
If you install the recommended packages as described in
"<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Packages</ulink>"
section of the Yocto Project Quick Start, then you will have libtool installed.
section of The Yocto Project Quick Start, then you will have libtool installed.
</note>
<para>
Once the installer begins to run, you are asked to enter the location for
cross-toolchain installation.
The default location is <filename>/opt/poky/&lt;release&gt;</filename>.
After selecting the location, you are prompted to run in
Once the installer begins to run, you are asked whether you want to run in
interactive or silent mode.
If you want to closely monitor the installation, choose “I” for interactive
mode rather than “S” for silent mode.
@@ -226,12 +203,10 @@
<title>Using a Cross-Toolchain Tarball</title>
<para>
If you want to simply install the cross-toolchain by hand, you can do so by running the
toolchain installer.
If you want to simply install the cross-toolchain by hand, you can do so by using an existing
cross-toolchain tarball.
If you use this method to install the cross-toolchain and you still need to install the target
sysroot, you will have to extract and install sysroot separately.
For information on how to do this, see the
"<link linkend='extracting-the-root-filesystem'>Extracting the Root Filesystem</link>" section.
sysroot, you will have to install sysroot separately.
</para>
<para>
@@ -242,43 +217,30 @@
and find the folder that matches your host development system
(i.e. <filename>i686</filename> for 32-bit machines or
<filename>x86-64</filename> for 64-bit machines).</para></listitem>
<listitem><para>Go into that folder and download the toolchain installer whose name
<listitem><para>Go into that folder and download the toolchain tarball whose name
includes the appropriate target architecture.
For example, if your host development system is an Intel-based 64-bit system and
you are going to use your cross-toolchain for an Intel-based 32-bit target, go into the
<filename>x86_64</filename> folder and download the following installer:
<filename>x86_64</filename> folder and download the following tarball:
<literallayout class='monospaced'>
poky-eglibc-x86_64-i586-toolchain-gmae-&DISTRO;.sh
poky-eglibc-x86_64-i586-toolchain-gmae-&DISTRO;.tar.bz2
</literallayout>
<note><para>As an alternative to steps one and two, you can build the toolchain installer
if you have a <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
<note><para>As an alternative to steps one and two, you can build the toolchain tarball
if you have a Yocto Project build tree.
If you need GMAE, you should use the <filename>bitbake meta-toolchain-gmae</filename>
command.
The resulting installation script when run will support such development.
The resulting tarball will support such development.
However, if you are not concerned with GMAE,
you can generate the toolchain installer using
<filename>bitbake meta-toolchain</filename>.</para>
you can generate the tarball using <filename>bitbake meta-toolchain</filename>.</para>
<para>Use the appropriate <filename>bitbake</filename> command only after you have
sourced the <filename>&OE_INIT_PATH;</filename> script located in the Source
Directory.
When the <filename>bitbake</filename> command completes, the toolchain installer will
be in <filename>tmp/deploy/sdk</filename> in the Build Directory.
</para></note>
</para></listitem>
<listitem><para>Once you have the installer, run it to install the toolchain.
You must change the permissions on the toolchain installer
script so that it is executable.</para>
<para>The following command shows how to run the installer given a toolchain tarball
for a 64-bit development host system and a 32-bit target architecture.
The example assumes the toolchain installer is located in <filename>~/Downloads/</filename>.
<literallayout class='monospaced'>
$ ~/Downloads/poky-eglibc-x86_64-i586-toolchain-gmae-&DISTRO;.sh
</literallayout>
<note>
If you do not have write permissions for the directory into which you are installing
the toolchain, the toolchain installer notifies you and exits.
Be sure you have write permissions in the directory and run the installer again.
</note>
sourced the <filename>oe-init-build-env</filename> script located in the Yocto
Project files.
When the <filename>bitbake</filename> command completes, the tarball will
be in <filename>tmp/deploy/sdk</filename> in the Yocto Project build tree.
</para></note></para></listitem>
<listitem><para>Make sure you are in the root directory with root privileges and then expand
the tarball.
The tarball expands into <filename>&YOCTO_ADTPATH_DIR;</filename>.
Once the tarball is expanded, the cross-toolchain is installed.
You will notice environment setup files for the cross-toolchain in the directory.
</para></listitem>
@@ -287,54 +249,47 @@
</section>
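<para>
As a quick illustration of the final installation step in the previous section,
expanding a downloaded toolchain tarball from the root directory might look like the
following (a sketch; it assumes the tarball was saved to <filename>~/Downloads</filename>):
<literallayout class='monospaced'>
$ cd /
$ sudo tar xjf ~/Downloads/poky-eglibc-x86_64-i586-toolchain-gmae-&DISTRO;.tar.bz2
</literallayout>
</para>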
<section id='using-the-toolchain-from-within-the-build-tree'>
<title>Using BitBake and the Build Directory</title>
<title>Using BitBake and the Yocto Project Build Tree</title>
<para>
A final way of making the cross-toolchain available is to use BitBake
to generate the toolchain within an existing
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
This method does not install the toolchain into the
<filename>/opt</filename> directory.
A final way of installing just the cross-toolchain is to use BitBake to build the
toolchain within an existing Yocto Project build tree.
This method does not install the toolchain into the <filename>/opt</filename> directory.
As with the previous method, if you need to install the target sysroot, you must
do that separately as well.
do this separately.
</para>
<para>
Follow these steps to generate the toolchain into the Build Directory:
Follow these steps to build and install the toolchain into the build tree:
<orderedlist>
<listitem><para>Source the environment setup script
<filename>&OE_INIT_FILE;</filename> located in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
</para></listitem>
<filename>oe-init-build-env</filename> located in the Yocto Project
files.</para></listitem>
<listitem><para>At this point, you should be sure that the
<ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink> variable
<filename>MACHINE</filename> variable
in the <filename>local.conf</filename> file found in the
<filename>conf</filename> directory of the Build Directory
<filename>conf</filename> directory of the Yocto Project build directory
is set for the target architecture.
Comments within the <filename>local.conf</filename> file list the values you
can use for the <filename>MACHINE</filename> variable.
<note>You can populate the Build Directory with the cross-toolchains for more
<note>You can populate the build tree with the cross-toolchains for more
than a single architecture.
You just need to edit the <filename>MACHINE</filename> variable in the
<filename>local.conf</filename> file and re-run the BitBake
command.</note></para></listitem>
<listitem><para>Run <filename>bitbake meta-ide-support</filename> to complete the
cross-toolchain generation.
<note>If you change out of your working directory after you
cross-toolchain installation.
<note>If you change out of your working directory after you
<filename>source</filename> the environment setup script and before you run
the <filename>bitbake</filename> command, the command might not work.
Be sure to run the <filename>bitbake</filename> command immediately
after checking or editing the <filename>local.conf</filename> but without
changing out of your working directory.</note>
Once the <filename>bitbake</filename> command finishes,
the cross-toolchain is generated and populated within the Build Directory.
the tarball for the cross-toolchain is generated within the Yocto Project build tree.
You will notice environment setup files for the cross-toolchain in the
Build Directory in the <filename>tmp</filename> directory.
Setup script filenames contain the strings <filename>environment-setup</filename>.</para>
<para>Be aware that when you use this method to install the toolchain you still need
to separately extract and install the sysroot filesystem.
For information on how to do this, see the
"<link linkend='extracting-the-root-filesystem'>Extracting the Root Filesystem</link>" section.
Yocto Project build tree in the <filename>tmp</filename> directory.
Setup script filenames contain the strings <filename>environment-setup</filename>.
</para></listitem>
</orderedlist>
</para>
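<para>
Condensed into commands, the steps above look roughly like the following
(a sketch that assumes <filename>qemux86</filename> as the example target machine):
<literallayout class='monospaced'>
$ source oe-init-build-env
$ vi conf/local.conf          # set MACHINE ?= "qemux86"
$ bitbake meta-ide-support
</literallayout>
</para>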
@@ -347,13 +302,11 @@
<para>
Before you can develop using the cross-toolchain, you need to set up the
cross-development environment by sourcing the toolchain's environment setup script.
If you used the ADT Installer or hand-installed cross-toolchain,
If you used the ADT Installer or used an existing ADT tarball to install the ADT,
then you can find this script in the <filename>&YOCTO_ADTPATH_DIR;</filename>
directory.
If you installed the toolchain in the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
you can find the environment setup
script for the toolchain in the Build Directory's <filename>tmp</filename> directory.
If you installed the toolchain in the build tree, you can find the environment setup
script for the toolchain in the Yocto Project build tree's <filename>tmp</filename> directory.
</para>
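<para>
For example, sourcing the script installed by the ADT Installer might look like the
following (the script name encodes the target architecture, so yours may differ):
<literallayout class='monospaced'>
$ source &YOCTO_ADTPATH_DIR;/environment-setup-i586-poky-linux
</literallayout>
</para>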
<para>
@@ -387,33 +340,24 @@
pre-built versions.
You can find examples for both these situations in the
"<ulink url='&YOCTO_DOCS_QS_URL;#test-run'>A Quick Test Run</ulink>" section of
the Yocto Project Quick Start.
The Yocto Project Quick Start.
</para>
<para>
The Yocto Project ships basic kernel and filesystem images for several
The Yocto Project provides basic kernel and filesystem images for several
architectures (<filename>x86</filename>, <filename>x86-64</filename>,
<filename>mips</filename>, <filename>powerpc</filename>, and <filename>arm</filename>)
that you can use unaltered in the QEMU emulator.
These kernel images reside in the release
These kernel images reside in the Yocto Project release
area - <ulink url='&YOCTO_MACHINES_DL_URL;'></ulink>
and are ideal for experimentation using Yocto Project.
For information on the image types you can build using the OpenEmbedded build system,
see the
"<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>" chapter in
the Yocto Project Reference Manual.
and are ideal for experimentation within Yocto Project.
For information on the image types you can build using the Yocto Project, see the
"<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Reference: Images</ulink>" appendix in
The Yocto Project Reference Manual.
</para>
<para>
If you are planning on developing against your image and you are not
building or using one of the Yocto Project development images
(e.g. core-image-*-dev), you must be sure to include the development
packages as part of your image recipe.
</para>
<para>
Furthermore, if you plan on remotely deploying and debugging your
application from within the
If you plan on remotely deploying and debugging your application from within the
Eclipse IDE, you must have an image that contains the Yocto Target Communication
Framework (TCF) agent (<filename>tcf-agent</filename>).
By default, the Yocto Project provides only one type pre-built image that contains the
@@ -426,10 +370,8 @@
you can do so one of two ways:
<itemizedlist>
<listitem><para>Modify the <filename>conf/local.conf</filename> configuration in
the <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
and then rebuild the image.
With this method, you need to modify the
<ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink>
the Yocto Project build directory and then rebuild the image.
With this method, you need to modify the <filename>EXTRA_IMAGE_FEATURES</filename>
variable to have the value of "tools-debug" before rebuilding the image.
Once the image is rebuilt, the <filename>tcf-agent</filename> will be included
in the image and is launched automatically after the boot.</para></listitem>
@@ -437,7 +379,7 @@
To build the agent, follow these steps:
<orderedlist>
<listitem><para>Be sure the ADT is installed as described in the
"<link linkend='installing-the-adt'>Installing the ADT and Toolchains</link>" section.
"<link linkend='installing-the-adt'>Installing the ADT</link>" section.
</para></listitem>
<listitem><para>Set up the cross-development environment as described in the
"<link linkend='setting-up-the-cross-development-environment'>Setting
@@ -450,8 +392,7 @@
</literallayout></para></listitem>
<listitem><para>Modify the <filename>Makefile.inc</filename> file
for the cross-compilation environment by setting the
<filename>OPSYS</filename> and
<ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
<filename>OPSYS</filename> and <filename>MACHINE</filename>
variables according to your target.</para></listitem>
<listitem><para>Use the cross-development tools to build the
<filename>tcf-agent</filename>.

Binary file not shown (image changed: 13 KiB before, 17 KiB after).

View File

@@ -110,7 +110,7 @@ h5 {
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-size: 80%;
font-weight: bold;
}

View File

@@ -2,7 +2,7 @@
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<book id='bsp-guide' lang='en'
<book id='bsp-guide' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
@@ -10,13 +10,13 @@
<mediaobject>
<imageobject>
<imagedata fileref='figures/bsp-title.png'
format='SVG'
<imagedata fileref='figures/bsp-title.png'
format='SVG'
align='center' scalefit='1' width='100%'/>
</imageobject>
</imageobject>
</mediaobject>
<title></title>
<title></title>
<authorgroup>
<author>
@@ -62,9 +62,14 @@
<revremark>Released with the Yocto Project 1.2 Release.</revremark>
</revision>
<revision>
<revnumber>1.3</revnumber>
<date>October 2012</date>
<revremark>Released with the Yocto Project 1.3 Release.</revremark>
<revnumber>1.2.1</revnumber>
<date>July 2012</date>
<revremark>Released with the Yocto Project 1.2.1 Release.</revremark>
</revision>
<revision>
<revnumber>1.2.2</revnumber>
<date>January 2013</date>
<revremark>Released with the Yocto Project 1.2.2 Release.</revremark>
</revision>
</revhistory>
@@ -75,13 +80,15 @@
<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England &amp; Wales</ulink> as
published by Creative Commons.
</para>
<note>
Due to production processes, there could be differences between the Yocto Project
documentation bundled in the release tarball and the
<ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink> on
documentation bundled in the release tarball and the
<ulink url='&YOCTO_DOCS_BSP_URL;'>
Board Support Package (BSP) Developer's Guide</ulink> on
the <ulink url='&YOCTO_HOME_URL;'>Yocto Project</ulink> website.
For the latest version of this manual, see the manual on the website.
</note>
@@ -97,6 +104,6 @@
-->
</book>
<!--
vim: expandtab tw=80 ts=4
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -19,7 +19,8 @@
</para>
<para>
This guide presents information about BSP Layers, defines a structure for components
This chapter (or document if you are reading the BSP Developer's Guide)
talks about BSP Layers, defines a structure for components
so that BSPs follow a commonly understood layout, discusses how to customize
a recipe for a BSP, addresses BSP licensing, and provides information that
shows you how to create and manage a
@@ -47,15 +48,14 @@
This root is what you add to the
<ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
variable in the <filename>conf/bblayers.conf</filename> file found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
Adding the root allows the OpenEmbedded build system to recognize the BSP
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-build-directory'>Yocto Project Build Directory</ulink>.
Adding the root allows the Yocto Project build system to recognize the BSP
definition and from it build an image.
Here is an example:
<literallayout class='monospaced'>
BBLAYERS = " \
/usr/local/src/yocto/meta \
/usr/local/src/yocto/meta-yocto \
/usr/local/src/yocto/meta-yocto-bsp \
/usr/local/src/yocto/meta-&lt;bsp_name&gt; \
"
</literallayout>
@@ -83,6 +83,8 @@
For more detailed information on layers, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
section of the Yocto Project Development Manual.
You can also see the detailed examples in the appendices of
<ulink url='&YOCTO_DOCS_DEV_URL;'>The Yocto Project Development Manual</ulink>.
</para>
</section>
@@ -97,14 +99,13 @@
</para>
<para>
The proposed form does have elements that are specific to the
OpenEmbedded build system.
The proposed form does have elements that are specific to the Yocto Project and
OpenEmbedded build systems.
It is intended that this information can be
used by other build systems besides the OpenEmbedded build system
and that it will be simple
used by other systems besides Yocto Project and OpenEmbedded and that it will be simple
to extract information and convert it to other formats if required.
The OpenEmbedded build system, through its standard layers mechanism, can directly
accept the format described as a layer.
Yocto Project, through its standard layers mechanism, can directly accept the format
described as a layer.
The BSP captures all
the hardware-specific details in one place in a standard format, which is
useful for any person wishing to use the hardware platform regardless of
@@ -170,6 +171,9 @@
meta-crownbay/recipes-bsp/formfactor/formfactor/crownbay/machconfig
meta-crownbay/recipes-bsp/formfactor/formfactor/crownbay-noemgd/
meta-crownbay/recipes-bsp/formfactor/formfactor/crownbay-noemgd/machconfig
meta-crownbay/recipes-core/
meta-crownbay/recipes-core/tasks/
meta-crownbay/recipes-core/tasks/task-core-tools-profile.bbappend
meta-crownbay/recipes-graphics/
meta-crownbay/recipes-graphics/xorg-xserver/
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-config_0.1.bbappend
@@ -180,10 +184,9 @@
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-config/crownbay-noemgd/xorg.conf
meta-crownbay/recipes-kernel/
meta-crownbay/recipes-kernel/linux/
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.2.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.4.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto_3.2.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto_3.4.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.0.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto_2.6.37.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto_3.0.bbappend
</literallayout>
</para>
@@ -294,10 +297,9 @@
</para>
<para>
The <filename>conf/layer.conf</filename> file identifies the file structure as a
layer, identifies the
contents of the layer, and contains information about how the build
system should use it.
The <filename>conf/layer.conf</filename> file identifies the file structure as a Yocto
Project layer, identifies the
contents of the layer, and contains information about how Yocto Project should use it.
Generally, a standard boilerplate file such as the following works.
In the following example, you would replace "<filename>bsp</filename>" and
"<filename>_bsp</filename>" with the actual name
@@ -310,8 +312,8 @@
BBPATH := "${BBPATH}:${LAYERDIR}"
# We have a recipes directory, add to BBFILES
BBFILES := "${BBFILES} ${LAYERDIR}/recipes-*/*.bb \
${LAYERDIR}/recipes-*/*.bbappend"
BBFILES := "${BBFILES} ${LAYERDIR}/recipes/*/*.bb \
${LAYERDIR}/recipes/*/*.bbappend"
BBFILE_COLLECTIONS += "bsp"
BBFILE_PATTERN_bsp := "^${LAYERDIR}/"
@@ -331,7 +333,7 @@
<para>
This file simply makes BitBake aware of the recipes and configuration directories.
The file must exist so that the OpenEmbedded build system can recognize the BSP.
The file must exist so that the Yocto Project build system can recognize the BSP.
</para>
</section>
@@ -346,7 +348,7 @@
<para>
The machine files bind together all the information contained elsewhere
in the BSP into a format that the build system can understand.
in the BSP into a format that the Yocto Project build system can understand.
If the BSP supports multiple machines, multiple machine configuration files
can be present.
These filenames correspond to the values to which users have set the
@@ -386,8 +388,8 @@
<para>
Tuning files are found in the <filename>meta/conf/machine/include</filename>
directory within the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
directory of the
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto Project Files</ulink>.
Tuning files can also reside in the BSP Layer itself.
For example, the <filename>ia32-base.inc</filename> file resides in the
<filename>meta-intel</filename> BSP Layer in <filename>conf/machine/include</filename>.
@@ -398,8 +400,8 @@
For example, the Crown Bay BSP <filename>crownbay.conf</filename> has the
following statements:
<literallayout class='monospaced'>
require conf/machine/include/tune-atom.inc
require conf/machine/include/ia32-base.inc
include conf/machine/include/tune-atom.inc
include conf/machine/include/ia32-base.inc
</literallayout>
</para>
</section>
@@ -439,10 +441,31 @@
formfactor recipe
<filename>meta/recipes-bsp/formfactor/formfactor_0.0.bb</filename>,
which is found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto Project Files</ulink>.
</para></note>
</section>
<section id='bsp-filelayout-core-recipes'>
<title>Core Recipe Files</title>
<para>
You can find these files in the BSP Layer at:
<literallayout class='monospaced'>
meta-&lt;bsp_name&gt;/recipes-core/*
</literallayout>
</para>
<para>
This directory contains recipe files that are almost always necessary to build a
useful, working Linux image.
Thus, the term "core" is used to group these recipes.
For example, in the Crown Bay BSP there is the
<filename>task-core-tools-profile.bbappend</filename> file, an append file used
to recommend that the
<ulink url='http://sourceware.org/systemtap/wiki'>SystemTap</ulink>
package be included in the image when it is built.
</para>
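<para>
A hypothetical sketch of such an append file, assuming it does nothing more than add
SystemTap to the recommended profiling tools (the real file in
<filename>meta-crownbay</filename> may differ), could look like this:
<literallayout class='monospaced'>
# Hypothetical task-core-tools-profile.bbappend for this BSP:
# recommend that the systemtap package be pulled into the image.
RRECOMMENDS_${PN} += "systemtap"
</literallayout>
</para>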
</section>
<section id='bsp-filelayout-recipes-graphics'>
<title>Display Support Files</title>
<para>
@@ -480,39 +503,33 @@
</para>
<para>
These files append your specific changes to the main kernel recipe you are using.
These files append your specific changes to the kernel you are using.
</para>
<para>
For your BSP, you typically want to use an existing Yocto Project kernel recipe found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
at <filename>meta/recipes-kernel/linux</filename>.
For your BSP, you typically want to use an existing Yocto Project kernel found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto
Project Files</ulink> at <filename>meta/recipes-kernel/linux</filename>.
You can append your specific changes to the kernel recipe by using a
similarly named append file, which is located in the BSP Layer (e.g.
the <filename>meta-&lt;bsp_name&gt;/recipes-kernel/linux</filename> directory).
</para>
<para>
Suppose you are using the <filename>linux-yocto_3.4.bb</filename> recipe to build
the kernel.
Suppose the BSP uses the <filename>linux-yocto_3.0.bb</filename> kernel,
which is the preferred kernel to use for developing a new BSP using the Yocto Project.
In other words, you have selected the kernel in your
<filename>&lt;bsp_name&gt;.conf</filename> file by adding these types
of statements:
<filename>&lt;bsp_name&gt;.conf</filename> file by adding the following statements:
<literallayout class='monospaced'>
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
PREFERRED_VERSION_linux-yocto = "3.4%"
PREFERRED_VERSION_linux-yocto = "3.0%"
</literallayout>
<note>
When the preferred provider is assumed by default, the
<filename>PREFERRED_PROVIDER</filename> statement does not appear in the
<filename>&lt;bsp_name&gt;.conf</filename> file.
</note>
You would use the <filename>linux-yocto_3.4.bbappend</filename> file to append
You would use the <filename>linux-yocto_3.0.bbappend</filename> file to append
specific BSP settings to the kernel, thus configuring the kernel for your particular BSP.
</para>
<para>
As an example, look at the existing Crown Bay BSP.
The append file used is:
<literallayout class='monospaced'>
meta-crownbay/recipes-kernel/linux/linux-yocto_3.4.bbappend
meta-crownbay/recipes-kernel/linux/linux-yocto_3.0.bbappend
</literallayout>
The following listing shows the file.
Be aware that the actual commit ID strings in this example listing might be different
@@ -522,99 +539,82 @@
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
COMPATIBLE_MACHINE_crownbay = "crownbay"
KMACHINE_crownbay = "crownbay"
KBRANCH_crownbay = "standard/crownbay"
KMACHINE_crownbay = "yocto/standard/crownbay"
KERNEL_FEATURES_append_crownbay += " cfg/smp.scc"
COMPATIBLE_MACHINE_crownbay-noemgd = "crownbay-noemgd"
KMACHINE_crownbay-noemgd = "crownbay"
KBRANCH_crownbay-noemgd = "standard/crownbay"
KMACHINE_crownbay-noemgd = "yocto/standard/crownbay"
KERNEL_FEATURES_append_crownbay-noemgd += " cfg/smp.scc"
SRCREV_machine_pn-linux-yocto_crownbay ?= "449f7f520350700858f21a5554b81cc8ad23267d"
SRCREV_meta_pn-linux-yocto_crownbay ?= "9e3bdb7344054264b750e53fbbb6394cc1c942ac"
SRCREV_emgd_pn-linux-yocto_crownbay ?= "86643bdd8cbad616a161ab91f51108cf0da827bc"
SRCREV_machine_pn-linux-yocto_crownbay ?= "63c65842a3a74e4bd3128004ac29b5639f16433f"
SRCREV_meta_pn-linux-yocto_crownbay ?= "59314a3523e360796419d76d78c6f7d8c5ef2593"
SRCREV_machine_pn-linux-yocto_crownbay-noemgd ?= "449f7f520350700858f21a5554b81cc8ad23267d"
SRCREV_meta_pn-linux-yocto_crownbay-noemgd ?= "9e3bdb7344054264b750e53fbbb6394cc1c942ac"
KSRC_linux_yocto_3_4 ?= "git.yoctoproject.org/linux-yocto-3.4.git"
SRC_URI_crownbay = "git://git.yoctoproject.org/linux-yocto-3.4.git;protocol=git;nocheckout=1;branch=${KBRANCH},meta,emgd-1.14;name=machine,meta,emgd"
SRC_URI_crownbay-noemgd = "git://git.yoctoproject.org/linux-yocto-3.4.git;protocol=git;nocheckout=1;branch=${KBRANCH},meta;name=machine,meta"
SRCREV_machine_pn-linux-yocto_crownbay-noemgd ?= "63c65842a3a74e4bd3128004ac29b5639f16433f"
SRCREV_meta_pn-linux-yocto_crownbay-noemgd ?= "59314a3523e360796419d76d78c6f7d8c5ef2593"
</literallayout>
This append file contains statements used to support the Crown Bay BSP for both
<trademark class='registered'>Intel</trademark> EMGD and the VESA graphics.
The build process, in this case, recognizes and uses only the statements that
apply to the defined machine name - <filename>crownbay</filename> in this case.
So, the applicable statements in the <filename>linux-yocto_3.4.bbappend</filename>
So, the applicable statements in the <filename>linux-yocto_3.0.bbappend</filename>
file are follows:
<literallayout class='monospaced'>
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
COMPATIBLE_MACHINE_crownbay = "crownbay"
KMACHINE_crownbay = "crownbay"
KBRANCH_crownbay = "standard/crownbay"
KMACHINE_crownbay = "yocto/standard/crownbay"
KERNEL_FEATURES_append_crownbay += " cfg/smp.scc"
SRCREV_machine_pn-linux-yocto_crownbay ?= "449f7f520350700858f21a5554b81cc8ad23267d"
SRCREV_meta_pn-linux-yocto_crownbay ?= "9e3bdb7344054264b750e53fbbb6394cc1c942ac"
SRCREV_emgd_pn-linux-yocto_crownbay ?= "86643bdd8cbad616a161ab91f51108cf0da827bc"
SRCREV_machine_pn-linux-yocto_crownbay ?= "63c65842a3a74e4bd3128004ac29b5639f16433f"
SRCREV_meta_pn-linux-yocto_crownbay ?= "59314a3523e360796419d76d78c6f7d8c5ef2593"
</literallayout>
The append file defines <filename>crownbay</filename> as the
<ulink url='&YOCTO_DOCS_REF_URL;#var-COMPATIBLE_MACHINE'><filename>COMPATIBLE_MACHINE</filename></ulink>
and uses the
<ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink> variable to
ensure the machine name used by the OpenEmbedded build system maps to the
machine name used by the Linux Yocto kernel.
The file also uses the optional
<ulink url='&YOCTO_DOCS_REF_URL;#var-KBRANCH'><filename>KBRANCH</filename></ulink> variable
to ensure the build process uses the <filename>standard/default/crownbay</filename>
kernel branch.
Finally, the append file points to specific commits in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink> Git
The append file defines <filename>crownbay</filename> as the compatible machine and
defines the <filename>KMACHINE</filename>.
The file also points to some configuration fragments to use by setting the
<filename>KERNEL_FEATURES</filename> variable.
The location for the configuration fragments is the kernel tree itself in the
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-build-directory'>Yocto Project Build
Directory</ulink> under <filename>linux/meta</filename>.
Finally, the append file points to the specific commits in the
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto Project Files</ulink> Git
repository and the <filename>meta</filename> Git repository branches to identify the
exact kernel needed to build the Crown Bay BSP.
<note>
For <filename>crownbay</filename>, a specific commit is also needed to point
to the branch that supports EMGD graphics.
At a minimum, every BSP points to the
<filename>machine</filename> and <filename>meta</filename> commits.
</note>
</para>
<para>
One thing missing in this particular BSP, which you will typically need when
developing a BSP, is the kernel configuration file (<filename>.config</filename>) for your BSP.
When developing a BSP, you probably have a kernel configuration file or a set of kernel
configuration files that, when taken together, define the kernel configuration for your BSP.
You can accomplish this definition by putting the configurations in a file or a set of files
inside a directory located at the same level as your kernel's append file and having the same
name as the kernel's main recipe file.
With all these conditions met, simply reference those files in a
inside a directory located at the same level as your append file and having the same name
as the kernel.
With all these conditions met simply reference those files in a
<filename>SRC_URI</filename> statement in the append file.
</para>
<para>
For example, suppose you had some configuration options in a file called
<filename>network_configs.cfg</filename>.
You can place that file inside a directory named <filename>/linux-yocto</filename> and then add
a <filename>SRC_URI</filename> statement such as the following to the append file.
When the OpenEmbedded build system builds the kernel, the configuration options are
picked up and applied.
For example, suppose you had a set of configuration options in a file called
<filename>myconfig</filename>.
If you put that file inside a directory named
<filename>/linux-yocto</filename> and then added
a <filename>SRC_URI</filename> statement such as the following to the append file,
those configuration
options will be picked up and applied when the kernel is built.
<literallayout class='monospaced'>
SRC_URI += "file://network_configs.cfg"
SRC_URI += "file://myconfig"
</literallayout>
</para>
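<para>
The fragment itself is just a list of kernel configuration options.
For instance, a hypothetical <filename>network_configs.cfg</filename> that enables a
couple of network drivers could be as small as the following:
<literallayout class='monospaced'>
# Hypothetical kernel configuration fragment
CONFIG_NETDEVICES=y
CONFIG_E1000E=y
</literallayout>
</para>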
<para>
To group related configurations into multiple files, you perform a similar procedure.
Here is an example that groups separate configurations specifically for Ethernet and graphics
into their own files and adds the configurations
by using a <filename>SRC_URI</filename> statement like the following in your append file:
As mentioned earlier, you can group related configurations into multiple files and
name them all in the <filename>SRC_URI</filename> statement as well.
For example, you could group separate configurations specifically for Ethernet and graphics
into their own files and add those by using a <filename>SRC_URI</filename> statement like the
following in your append file:
<literallayout class='monospaced'>
SRC_URI += "file://myconfig.cfg \
SRC_URI += "file://myconfig \
file://eth.cfg \
file://gfx.cfg"
</literallayout>
</para>
<para>
The <filename>FILESEXTRAPATHS</filename> variable is in boilerplate form in the
previous example in order to make it easy to do that.
@@ -623,29 +623,32 @@
The <filename>FILESEXTRAPATHS</filename> variable enables the build process to
find those configuration files.
</para>
<note>
<para>
Other methods exist to accomplish grouping and defining configuration options.
For example, if you are working with a local clone of the kernel repository,
you could checkout the kernel's <filename>meta</filename> branch, make your changes,
and then push the changes to the local bare clone of the kernel.
The result is that you directly add configuration options to the
<filename>meta</filename> branch for your BSP.
The configuration options will likely end up in that location anyway if the BSP gets
added to the Yocto Project.
</para>
Other methods exist to accomplish grouping and defining configuration options.
For example, if you are working with a local clone of the kernel repository,
you could checkout the kernel's <filename>meta</filename> branch, make your changes,
and then push the changes to the local bare clone of the kernel.
The result is that you directly add configuration options to the Yocto kernel
<filename>meta</filename> branch for your BSP.
The configuration options will likely end up in that location anyway if the BSP gets
added to the Yocto Project.
For an example showing how to change the BSP configuration, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#changing-the-bsp-configuration'>Changing the BSP Configuration</ulink>"
section in the Yocto Project Development Manual.
For a better understanding of working with a local clone of the kernel repository
and a local bare clone of the kernel, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#modifying-the-kernel-source-code'>Modifying the Kernel
Source Code</ulink>" section also in the Yocto Project Development Manual.</para>
<para>
In general, however, the Yocto Project maintainers take care of moving the
<filename>SRC_URI</filename>-specified
configuration options to the kernel's <filename>meta</filename> branch.
Not only is it easier for BSP developers to not have to worry about putting those
configurations in the branch, but having the maintainers do it allows them to apply
'global' knowledge about the kinds of common configuration options multiple BSPs in
the tree are typically using.
This allows for promotion of common configurations into common features.
</para>
In general, however, the Yocto Project maintainers take care of moving the
<filename>SRC_URI</filename>-specified
configuration options to the kernel's <filename>meta</filename> branch.
Not only is it easier for BSP developers to not have to worry about putting those
configurations in the branch, but having the maintainers do it allows them to apply
'global' knowledge about the kinds of common configuration options multiple BSPs in
the tree are typically using.
This allows for promotion of common configurations into common features.</para>
</note>
</section>
</section>
@@ -669,7 +672,7 @@
<itemizedlist>
<listitem><para>The requirements here assume the BSP layer is a well-formed, "legal"
layer that can be added to the Yocto Project.
For guidelines on creating a layer that meets these base requirements, see the
For guidelines on creating a Yocto Project layer that meets these base requirements, see the
"<link linkend='bsp-layers'>BSP Layers</link>" and the
"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding
and Creating Layers"</ulink> in the Yocto Project Development Manual.</para></listitem>
@@ -716,15 +719,15 @@
<filename>recipe-*</filename> subdirectory.
You can find <filename>recipes.txt</filename> in the
<filename>meta</filename> directory of the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
or in the OpenEmbedded Core Layer
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto
Project Files</ulink>, or in the OpenEmbedded Core Layer
(<filename>openembedded-core</filename>) found at
<ulink url='http://git.openembedded.org/openembedded-core/tree/meta'></ulink>.
</para>
<para>Within any particular <filename>recipes-*</filename> category, the layout
should match what is found in the OpenEmbedded Core
Git repository (<filename>openembedded-core</filename>)
or the Source Directory (<filename>poky</filename>).
or the Yocto Project Files (<filename>poky</filename>).
In other words, make sure you place related files in appropriately
related <filename>recipes-*</filename> subdirectories specific to the
recipe's function, or within a subdirectory containing a set of closely-related
@@ -740,22 +743,22 @@
You must specify which license to use since there is no
default license if one is not specified.
See the
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fri2/COPYING.MIT'><filename>COPYING.MIT</filename></ulink>
file for the Fish River Island 2 BSP in the <filename>meta-fri2</filename> BSP layer
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fishriver/COPYING.MIT'><filename>COPYING.MIT</filename></ulink>
file for the Fish River BSP in the <filename>meta-fishriver</filename> BSP layer
as an example.</para></listitem>
<listitem><para><emphasis>README File:</emphasis>
You must include a <filename>README</filename> file in the
<filename>meta-&lt;bsp_name&gt;</filename> directory.
See the
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fri2/README'><filename>README</filename></ulink>
file for the Fish River Island 2 BSP in the <filename>meta-fri2</filename> BSP layer
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fishriver/README'><filename>README</filename></ulink>
file for the Fish River BSP in the <filename>meta-fishriver</filename> BSP layer
as an example.</para>
<para>At a minimum, the <filename>README</filename> file should
contain the following:
<itemizedlist>
<listitem><para>A brief description about the hardware the BSP
targets.</para></listitem>
<listitem><para>A list of all the dependencies
<listitem><para>A list of all the dependencies
on which a BSP layer depends.
These dependencies are typically a list of required layers needed
to build the BSP.
@@ -788,8 +791,8 @@
generate the binary images contained in the
<filename>/binary</filename> directory, if present.
See the
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fri2/README.sources'><filename>README.sources</filename></ulink>
file for the Fish River Island 2 BSP in the <filename>meta-fri2</filename> BSP layer
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/meta-intel/tree/meta-fishriver/README.sources'><filename>README.sources</filename></ulink>
file for the Fish River BSP in the <filename>meta-fishriver</filename> BSP layer
as an example.</para></listitem>
<listitem><para><emphasis>Layer Configuration File:</emphasis>
You must include a <filename>conf/layer.conf</filename> in the
@@ -803,13 +806,14 @@
using the BSP layer.
Multiple machine configuration files define variations of machine
configurations that are supported by the BSP.
If a BSP supports multiple machine variations, you need to
If a BSP supports multiple machine variations, you need to
adequately describe each variation in the BSP
<filename>README</filename> file.
Do not use multiple machine configuration files to describe disparate
hardware.
If you do have very different targets, you should create separate
BSP layers for each target.
Multiple machine configuration files should describe very similar targets.
If you do have very different targets, you should create a separate
BSP.
<note>It is completely possible for a developer to structure the
working repository as a conglomeration of unrelated BSP
files, and to possibly generate specifically targeted 'release' BSPs
@@ -855,7 +859,7 @@
Basing your recipes on these kernels reduces the costs for maintaining
the BSP and increases its scalability.
See the <filename>Yocto Linux Kernel</filename> category in the
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'>Source Repositories</ulink>
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'><filename>Yocto Source Repositories</filename></ulink>
for these kernels.</para></listitem>
</itemizedlist>
</para>
@@ -880,7 +884,7 @@
<para>
To better understand this, consider an example that customizes a recipe by adding
a BSP-specific configuration file named <filename>interfaces</filename> to the
<filename>netbase_5.0.bb</filename> recipe for machine "xyz".
<filename>netbase_4.47.bb</filename> recipe for machine "xyz".
Do the following:
<orderedlist>
<listitem><para>Edit the <filename>netbase_4.47.bbappend</filename> file so that it
@@ -906,7 +910,7 @@
for a component or components.
For these cases, you are required to accept the terms of a commercial or other
type of license that requires some kind of explicit End User License Agreement (EULA).
Once the license is accepted, the OpenEmbedded build system can then build and
Once the license is accepted, the Yocto Project build system can then build and
include the corresponding component in the final BSP image.
If the BSP is available as a pre-built image, you can download the image after
agreeing to the license or EULA.
@@ -949,12 +953,13 @@
</para>
<para>
A couple different methods exist within the OpenEmbedded build system to
satisfy the licensing requirements for an encumbered BSP.
A couple different methods exist within the Yocto
Project build system to satisfy the licensing
requirements for an encumbered BSP.
The following list describes them in order of preference:
<orderedlist>
<listitem><para><emphasis>Use the <filename>LICENSE_FLAGS</filename> variable
to define the recipes that have commercial or other types of
to define the Yocto Project recipes that have commercial or other types of
specially-licensed packages:</emphasis>
For each of those recipes, you can
specify a matching license string in a
@@ -1019,15 +1024,15 @@
The Yocto Project includes a couple of tools that enable
you to create a <link linkend='bsp-layers'>BSP layer</link>
from scratch and do basic configuration and maintenance
of the kernel without ever looking at a metadata file.
of the kernel without ever looking at a Yocto Project metadata file.
These tools are <filename>yocto-bsp</filename> and <filename>yocto-kernel</filename>,
respectively.
</para>
<para>
The following sections describe the common location and help features as well
as provide details for the
<filename>yocto-bsp</filename> and <filename>yocto-kernel</filename> tools.
as details for the <filename>yocto-bsp</filename> and <filename>yocto-kernel</filename>
tools.
</para>
<section id='common-features'>
@@ -1046,7 +1051,8 @@
<para>
Both tools reside in the <filename>scripts/</filename> subdirectory
of the <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
of the <ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-files'>Yocto Project
Files</ulink>.
Consequently, to use the scripts, you must <filename>source</filename> the
environment just as you would when invoking a build:
<literallayout class='monospaced'>
@@ -1058,27 +1064,30 @@
The most immediately useful function is to get help on both tools.
The built-in help system makes it easy to drill down at
any time and view the syntax required for any specific command.
Simply enter the name of the command with the <filename>help</filename>
switch:
Simply enter the name of the command, or the command along with
<filename>help</filename> to display a list of the available sub-commands.
Here is an example:
<literallayout class='monospaced'>
$ yocto-bsp
$ yocto-bsp help
Usage:
Create a customized Yocto BSP layer.
Usage:
usage: yocto-bsp [--version] [--help] COMMAND [ARGS]
Create a customized Yocto BSP layer.
Current 'yocto-bsp' commands are:
create Create a new Yocto BSP
list List available values for options and BSP properties
usage: yocto-bsp [--version] [--help] COMMAND [ARGS]
See 'yocto-bsp help COMMAND' for more information on a specific command.
The most commonly used 'yocto-bsp' commands are:
create Create a new Yocto BSP
list List available values for options and BSP properties
See 'yocto-bsp help COMMAND' for more information on a specific command.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-D, --debug output debug information
--version show program's version number and exit
-h, --help show this help message and exit
-D, --debug output debug information
</literallayout>
</para>
@@ -1088,20 +1097,19 @@
<literallayout class='monospaced'>
$ yocto-bsp create
Usage:
Usage:
Create a new Yocto BSP
usage: yocto-bsp create &lt;bsp-name&gt; &lt;karch&gt; [-o &lt;DIRNAME&gt; | --outdir &lt;DIRNAME&gt;]
Create a new Yocto BSP
usage: yocto-bsp create &lt;bsp-name&gt; &lt;karch&gt; [-o &lt;DIRNAME&gt; | --outdir &lt;DIRNAME&gt;]
[-i &lt;JSON PROPERTY FILE&gt; | --infile &lt;JSON PROPERTY_FILE&gt;]
This command creates a Yocto BSP based on the specified parameters.
The new BSP will be a new Yocto BSP layer contained by default within
the top-level directory specified as 'meta-bsp-name'. The -o option
can be used to place the BSP layer in a directory with a different
name and location.
This command creates a Yocto BSP based on the specified parameters.
The new BSP will be a new Yocto BSP layer contained by default within
the top-level directory specified as 'meta-bsp-name'. The -o option
can be used to place the BSP layer in a directory with a different
name and location.
...
...
</literallayout>
</para>
@@ -1112,26 +1120,33 @@
$ yocto-bsp help create
NAME
yocto-bsp create - Create a new Yocto BSP
yocto-bsp create - Create a new Yocto BSP
SYNOPSIS
yocto-bsp create &lt;bsp-name&gt; &lt;karch&gt; [-o &lt;DIRNAME&gt; | --outdir &lt;DIRNAME&gt;]
yocto-bsp create &lt;bsp-name&gt; &lt;karch&gt; [-o &lt;DIRNAME&gt; | --outdir &lt;DIRNAME&gt;]
[-i &lt;JSON PROPERTY FILE&gt; | --infile &lt;JSON PROPERTY_FILE&gt;]
DESCRIPTION
This command creates a Yocto BSP based on the specified
parameters. The new BSP will be a new Yocto BSP layer contained
by default within the top-level directory specified as
'meta-bsp-name'. The -o option can be used to place the BSP layer
in a directory with a different name and location.
The value of the 'karch' parameter determines the set of files
that will be generated for the BSP, along with the specific set of
'properties' that will be used to fill out the BSP-specific
portions of the BSP. The possible values for the 'karch' parameter
can be listed via 'yocto-bsp list karch'.
...
This command creates a Yocto BSP based on the specified
parameters. The new BSP will be a new Yocto BSP layer contained
by default within the top-level directory specified as
'meta-bsp-name'. The -o option can be used to place the BSP layer
in a directory with a different name and location.
The value of the 'karch' parameter determines the set of files
that will be generated for the BSP, along with the specific set of
'properties' that will be used to fill out the BSP-specific
portions of the BSP.
...
NOTE: Once created, you should add your new layer to your
bblayers.conf file in order for it to be subsequently seen and
modified by the yocto-kernel tool.
NOTE for x86- and x86_64-based BSPs: The generated BSP assumes the
presence of the meta-intel layer, so you should also have a
meta-intel layer present and added to your bblayers.conf as well.
</literallayout>
</para>
@@ -1158,33 +1173,33 @@
For the current set of BSPs, the script prompts you for various important
parameters such as:
<itemizedlist>
<listitem><para>The kernel to use</para></listitem>
<listitem><para>The branch of that kernel to use (or re-use)</para></listitem>
<listitem><para>Whether or not to use X, and if so, which drivers to use</para></listitem>
<listitem><para>Whether to turn on SMP</para></listitem>
<listitem><para>Whether the BSP has a keyboard</para></listitem>
<listitem><para>Whether the BSP has a touchscreen</para></listitem>
<listitem><para>Remaining configurable items associated with the BSP</para></listitem>
<listitem><para>which kernel to use</para></listitem>
<listitem><para>which branch of that kernel to use (or re-use)</para></listitem>
<listitem><para>whether or not to use X, and if so, which drivers to use</para></listitem>
<listitem><para>whether to turn on SMP</para></listitem>
<listitem><para>whether the BSP has a keyboard</para></listitem>
<listitem><para>whether the BSP has a touchscreen</para></listitem>
<listitem><para>any remaining configurable items associated with the BSP</para></listitem>
</itemizedlist>
</para>
<para>
You use the <filename>yocto-bsp create</filename> sub-command to create
a new BSP layer.
This command requires you to specify a particular kernel architecture
(<filename>karch</filename>) on which to base the BSP.
This command requires you to specify a particular architecture on which to
base the BSP.
Assuming you have sourced the environment, you can use the
<filename>yocto-bsp list karch</filename> sub-command to list the
architectures available for BSP creation as follows:
<literallayout class='monospaced'>
$ yocto-bsp list karch
Architectures available:
qemu
x86_64
i386
powerpc
arm
powerpc
i386
mips
x86_64
qemu
</literallayout>
</para>
@@ -1205,46 +1220,53 @@
the prompts appear in brackets.
Pressing enter without supplying anything on the command line or pressing enter
and providing an invalid response causes the script to accept the default value.
Once the script completes, the new <filename>meta-myarm</filename> BSP layer
is created in the current working directory.
This example assumes you have sourced the &OE_INIT_FILE; and are currently
in the top-level folder of the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
</para>
<para>
Following is the complete example:
<literallayout class='monospaced'>
$ yocto-bsp create myarm qemu
Which qemu architecture would you like to use? [default: i386]
1) i386 (32-bit)
2) x86_64 (64-bit)
3) ARM (32-bit)
4) PowerPC (32-bit)
5) MIPS (32-bit)
Which qemu architecture would you like to use? [default: x86]
1) common 32-bit x86
2) common 64-bit x86
3) common 32-bit ARM
4) common 32-bit PowerPC
5) common 32-bit MIPS
3
Would you like to use the default (3.4) kernel? (y/n) [default: y]
Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n] [default: y]
Getting branches from remote repo git://git.yoctoproject.org/linux-yocto-3.4.git...
Please choose a machine branch to base your new BSP branch on: [default: standard/base]
1) standard/arm-versatile-926ejs
Would you like to use the default (3.2) kernel? (Y/n)
Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [Y/n]
Getting branches from remote repo git://git.yoctoproject.org/linux-yocto-3.2...
Please choose a machine branch to base this BSP on => [default: standard/default/common-pc]
1) base
2) standard/base
3) standard/beagleboard
4) standard/cedartrail
5) standard/crownbay
6) standard/emenlow
7) standard/fishriver
8) standard/fri2
9) standard/fsl-mpc8315e-rdb
10) standard/mti-malta32
11) standard/mti-malta64
12) standard/qemuppc
13) standard/routerstationpro
14) standard/sys940x
1
Would you like SMP support? (y/n) [default: y]
Does your BSP have a touchscreen? (y/n) [default: n]
Does your BSP have a keyboard? (y/n) [default: y]
3) standard/default/arm-versatile-926ejs
4) standard/default/base
5) standard/default/beagleboard
6) standard/default/cedartrailbsp (copy).xml
7) standard/default/common-pc-64/base
8) standard/default/common-pc-64/jasperforest
9) standard/default/common-pc-64/romley
10) standard/default/common-pc-64/sugarbay
11) standard/default/common-pc/atom-pc
12) standard/default/common-pc/base
13) standard/default/crownbay
14) standard/default/emenlow
15) standard/default/fishriver
16) standard/default/fri2
17) standard/default/fsl-mpc8315e-rdb
18) standard/default/mti-malta32-be
19) standard/default/mti-malta32-le
20) standard/default/preempt-rt
21) standard/default/qemu-ppc32
22) standard/default/routerstationpro
23) standard/preempt-rt/base
24) standard/preempt-rt/qemu-ppc32
25) standard/preempt-rt/routerstationpro
26) standard/tiny
3
Do you need SMP support? (Y/n)
Does your BSP have a touchscreen? (y/N)
Does your BSP have a keyboard? (Y/n)
New qemu BSP created in meta-myarm
</literallayout>
Let's take a closer look at the example now:
@@ -1254,10 +1276,10 @@
In the example, we use the <filename>arm</filename> architecture.
</para></listitem>
<listitem><para>The script then prompts you for the kernel.
The default 3.4 kernel is acceptable.
The default kernel is 3.2 and is acceptable.
So, the example accepts the default.
If you enter 'n', the script prompts you to further enter the kernel
you do want to use (e.g. 3.0, 3.2_preempt-rt, and so forth.).</para></listitem>
you do want to use (e.g. 3.0, 3.2_preempt-rt, etc.).</para></listitem>
<listitem><para>Next, the script asks whether you would like to have a new
branch created especially for your BSP in the local
<ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Linux Yocto Kernel</ulink>
@@ -1270,21 +1292,26 @@
The reason a new branch is the default is that typically
new BSPs do require BSP-specific patches.
The tool thus assumes that most of the time a new branch is required.
</para></listitem>
<listitem><para>Regardless of which choice you make in the previous step,
<note>In the current implementation, creation or re-use of a branch does
not actually matter.
The reason is that the generated BSPs assume that patches and
configurations live in recipe-space, which is something that can be done
with or without a dedicated branch.
Generated BSPs, however, are different.
This difference becomes significant once the tool's 'publish' functionality
is implemented.</note></para></listitem>
<listitem><para>Regardless of which choice is made in the previous step,
you are now given the opportunity to select a particular machine branch on
which to base your new BSP-specific machine branch
which to base your new BSP-specific machine branch
(or to re-use if you had elected to not create a new branch).
Because this example is generating an <filename>arm</filename> BSP, the example
uses <filename>#1</filename> at the prompt, which selects the arm-versatile branch.
uses <filename>#3</filename> at the prompt, which selects the arm-versatile branch.
</para></listitem>
<listitem><para>The remainder of the prompts are routine.
Defaults are accepted for each.</para></listitem>
<listitem><para>By default, the script creates the new BSP Layer in the
current working directory of the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
which is <filename>poky</filename> in this case.
</para></listitem>
<ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-build-directory'>Yocto Project
Build Directory</ulink>.</para></listitem>
</orderedlist>
</para>
@@ -1296,7 +1323,6 @@
BBLAYERS = " \
/usr/local/src/yocto/meta \
/usr/local/src/yocto/meta-yocto \
/usr/local/src/yocto/meta-yocto-bsp \
/usr/local/src/yocto/meta-myarm \
"
</literallayout>
@@ -1310,7 +1336,8 @@
<title>Managing Kernel Patches and Config Items with yocto-kernel</title>
<para>
Assuming you have created a <link linkend='bsp-layers'>BSP Layer</link> using
Assuming you have created a Yocto Project
<link linkend='bsp-layers'>BSP Layer</link> using
<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>
<filename>yocto-bsp</filename></link> and you added it to your
<ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
@@ -1321,35 +1348,28 @@
<para>
The <filename>yocto-kernel</filename> script allows you to add, remove, and list patches
and kernel config settings to a BSP's kernel
and kernel config settings to a Yocto Project BSP's kernel
<filename>.bbappend</filename> file.
All you need to do is use the appropriate sub-command.
Recall that the easiest way to see exactly what sub-commands are available
is to use the <filename>yocto-kernel</filename> built-in help as follows:
<literallayout class='monospaced'>
$ yocto-kernel
Usage:
Usage:
Modify and list Yocto BSP kernel config items and patches.
Modify and list Yocto BSP kernel config items and patches.
usage: yocto-kernel [--version] [--help] COMMAND [ARGS]
usage: yocto-kernel [--version] [--help] COMMAND [ARGS]
Current 'yocto-kernel' commands are:
config list List the modifiable set of bare kernel config options for a BSP
config add Add or modify bare kernel config options for a BSP
config rm Remove bare kernel config options from a BSP
patch list List the patches associated with a BSP
patch add Patch the Yocto kernel for a BSP
patch rm Remove patches from a BSP
The most commonly used 'yocto-kernel' commands are:
config list List the modifiable set of bare kernel config options for a BSP
config add Add or modify bare kernel config options for a BSP
config rm Remove bare kernel config options from a BSP
patch list List the patches associated with a BSP
patch add Patch the Yocto kernel for a BSP
patch rm Remove patches from a BSP
See 'yocto-kernel help COMMAND' for more information on a specific command.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-D, --debug output debug information
See 'yocto-kernel help COMMAND' for more information on a specific command.
</literallayout>
</para>
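<para>
For orientation only, here is a sketch of how a few of these sub-commands
might be invoked.
The BSP name <filename>myarm</filename>, the config option, and the patch path are
illustrative assumptions, so consult <filename>yocto-kernel help COMMAND</filename>
for the authoritative syntax:
<literallayout class='monospaced'>
$ yocto-kernel config add myarm CONFIG_SMP=y
$ yocto-kernel config list myarm
$ yocto-kernel patch add myarm ~/test.patch
$ yocto-kernel patch list myarm
</literallayout>
Each of these commands records its change in the BSP's kernel
<filename>.bbappend</filename> file, so the next build picks it up.
</para>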


View File

@@ -110,7 +110,7 @@ h5 {
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-size: 80%;
font-weight: bold;
}

View File

@@ -0,0 +1,716 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<appendix id='dev-manual-bsp-appendix'>
<title>BSP Development Example</title>
<para>
This appendix provides a complete BSP development example.
The example assumes the following:
<itemizedlist>
<listitem><para>No previous preparation or use of the Yocto Project.</para></listitem>
<listitem><para>Use of the Crown Bay Board Support Package (BSP) as a "base" BSP from
which to work.
The example begins with the Crown Bay BSP as the starting point
but ends by building a new 'atom-pc' BSP, which was based on the Crown Bay BSP.
</para></listitem>
<listitem><para>Shell commands assume <filename>bash</filename>.</para></listitem>
<listitem><para>The example was developed on an Intel-based Core i7 platform running
Ubuntu 10.04 LTS, released in April of 2010.</para></listitem>
</itemizedlist>
</para>
<section id='getting-local-yocto-project-files-and-bsp-files'>
<title>Getting Local Yocto Project Files and BSP Files</title>
<para>
You need to have the Yocto Project files available on your host system.
You can get files through tarball extraction or by cloning the <filename>poky</filename>
Git repository.
The following paragraphs describe both methods.
For additional information, see the bulleted item
"<link linkend='local-yp-release'>Yocto Project Release</link>".
</para>
<para>
As mentioned, one way to get the Yocto Project files is to use Git to clone the
<filename>poky</filename> repository.
These commands create a local copy of the Git repository.
By default, the top-level directory of the repository is named <filename>poky</filename>:
<literallayout class='monospaced'>
$ git clone git://git.yoctoproject.org/poky
$ cd poky
</literallayout>
Alternatively, you can start with the downloaded Poky "&DISTRO_NAME;" tarball.
These commands unpack the tarball into a Yocto Project Files directory structure.
By default, the top-level directory of the file structure is named
<filename>&YOCTO_POKY;</filename>:
<literallayout class='monospaced'>
$ tar xfj &YOCTO_POKY_TARBALL;
$ cd &YOCTO_POKY;
</literallayout>
<note><para>If you're using the tarball method, you can ignore all the following steps that
ask you to carry out Git operations.
You already have the results of those operations
in the form of the &DISTRO_NAME; release tarballs.
Consequently, there is nothing left to do other than extract those tarballs into the
proper locations.</para>
<para>Once you expand the released tarball, you have a snapshot of the Git repository
that represents a specific release.
Fundamentally, this is different than having a local copy of the Yocto Project
Git repository.
Given the tarball method, changes you make are building on top of a release.
With the Git repository method you have the ability to track development
and keep changes in revision control.
See the
"<link linkend='repositories-tags-and-branches'>Repositories, Tags, and Branches</link>" section
for more discussion around these differences.</para></note>
</para>
<para>
With the local <filename>poky</filename> Git repository set up,
you have all the development branches available to you from which you can work.
Next, you need to be sure that your local repository reflects the exact
release in which you are interested.
From inside the repository you can see the development branches that represent
areas of development that have diverged from the main (master) branch
at some point, such as a branch to track a maintenance release's development.
You can also see the tag names used to mark snapshots of stable releases or
points in the repository.
Use the following commands to list out the branches and the tags in the repository,
respectively.
<literallayout class='monospaced'>
$ git branch -a
$ git tag -l
</literallayout>
For this example, we are going to use the Yocto Project &DISTRO; Release, which is code
named "&DISTRO_NAME;".
To make sure we have a local area (branch in Git terms) on our machine that
reflects the &DISTRO; release, we can use the following commands:
<literallayout class='monospaced'>
$ cd ~/poky
$ git fetch --tags
$ git checkout &DISTRO_NAME;-&POKYVERSION; -b &DISTRO_NAME;
Switched to a new branch '&DISTRO_NAME;'
</literallayout>
The <filename>git fetch --tags</filename> command is somewhat redundant since you just set
up the repository and should have all the tags.
The <filename>fetch</filename> command makes sure all the tags are available in your
local repository.
The Git <filename>checkout</filename> command with the <filename>-b</filename> option
creates a local branch for you named <filename>&DISTRO_NAME;</filename>.
Your local branch begins in the same state as the Yocto Project &DISTRO; released tarball
marked with the <filename>&DISTRO_NAME;-&POKYVERSION;</filename> tag in the source repositories.
</para>
</section>
<section id='choosing-a-base-bsp-app'>
<title>Choosing a Base BSP</title>
<para>
For this example, the base BSP is the <trademark class='registered'>Intel</trademark>
<trademark class='trade'>Atom</trademark> Processor E660 with Intel Platform
Controller Hub EG20T Development Kit, which is otherwise referred to as "Crown Bay."
The BSP layer is <filename>meta-crownbay</filename>.
The base BSP is simply the BSP
we will be using as a starting point, so don't worry if you don't actually have Crown Bay
hardware.
The remainder of the example transforms the base BSP into a BSP that should be
able to boot on generic atom-pc (netbook) hardware.
</para>
<para>
For information on how to choose a base BSP, see
"<link linkend='developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</link>".
</para>
</section>
<section id='getting-your-base-bsp-app'>
<title>Getting Your Base BSP</title>
<para>
You need to have the base BSP layer on your development system.
Similar to the local <link linkend='yocto-project-files'>Yocto Project Files</link>,
you can get the BSP
layer in a couple of different ways:
download the BSP tarball and extract it, or set up a local Git repository that
has the Yocto Project BSP layers.
You should use the same method that you used to get the local Yocto Project files earlier.
See "<link linkend='getting-setup'>Getting Setup</link>" for information on how to get
the BSP files.
</para>
<para>
This example assumes the BSP layer will be located within a directory named
<filename>meta-intel</filename> contained within the <filename>poky</filename>
parent directory.
The following steps will automatically create the
<filename>meta-intel</filename> directory and the contained
<filename>meta-crownbay</filename> starting point in both the Git and the tarball cases.
</para>
<para>
If you're using the Git method, you could do the following to create
the starting layout after you have made sure you are in the <filename>poky</filename>
directory created in the previous steps:
<literallayout class='monospaced'>
$ git clone git://git.yoctoproject.org/meta-intel.git
$ cd meta-intel
</literallayout>
Alternatively, you can start with the downloaded Crown Bay tarball.
You can download the &DISTRO_NAME; version of the BSP tarball from the
<ulink url='&YOCTO_HOME_URL;/download'>Download</ulink> page of the
Yocto Project website.
Here is the specific link for the tarball needed for this example:
<ulink url='&YOCTO_MACHINES_DL_URL;/crownbay-noemgd/crownbay-noemgd-&DISTRO_NAME;-&POKYVERSION;.tar.bz2'></ulink>.
Again, be sure that you are already in the <filename>poky</filename> directory
as described previously before installing the tarball:
<literallayout class='monospaced'>
$ tar xfj crownbay-noemgd-&DISTRO_NAME;-&POKYVERSION;.tar.bz2
$ cd meta-intel
</literallayout>
</para>
<para>
The <filename>meta-intel</filename> directory contains all the metadata
that supports BSP creation.
If you're using the Git method, the following
step will switch to the &DISTRO_NAME; metadata.
If you're using the tarball method, you already have the correct metadata and can
skip to the next step.
Because <filename>meta-intel</filename> is its own Git repository, you will want
to be sure you are in the appropriate branch for your work.
For this example we are going to use the <filename>&DISTRO_NAME;</filename> branch.
<literallayout class='monospaced'>
$ git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;
Branch &DISTRO_NAME; set up to track remote branch &DISTRO_NAME; from origin.
Switched to a new branch '&DISTRO_NAME;'
</literallayout>
</para>
</section>
<section id='making-a-copy-of-the-base bsp-to-create-your-new-bsp-layer-app'>
<title>Making a Copy of the Base BSP to Create Your New BSP Layer</title>
<para>
Now that you have the local Yocto Project files and the base BSP files, you need to create a
new layer for your BSP.
To create your BSP layer, you simply copy the <filename>meta-crownbay</filename>
layer to a new layer.
</para>
<para>
For this example, the new layer will be named <filename>meta-mymachine</filename>.
The name should follow the BSP layer naming convention, which is
<filename>meta-&lt;name&gt;</filename>.
The following assumes your working directory is <filename>meta-intel</filename>
inside the local Yocto Project files.
To start your new layer, copy the <filename>meta-crownbay</filename> layer to the new layer alongside the existing
BSP layers in the <filename>meta-intel</filename> directory:
<literallayout class='monospaced'>
$ cp -a meta-crownbay/ meta-mymachine
</literallayout>
</para>
</section>
<section id='making-changes-to-your-bsp-app'>
<title>Making Changes to Your BSP</title>
<para>
Right now you have two identical BSP layers with different names:
<filename>meta-crownbay</filename> and <filename>meta-mymachine</filename>.
You need to change your configurations so that they work for your new BSP and
your particular hardware.
The following sections look at each of these areas of the BSP.
</para>
<section id='changing-the-bsp-configuration'>
<title>Changing the BSP Configuration</title>
<para>
We will look first at the configurations, which are all done in the layer's
<filename>conf</filename> directory.
</para>
<para>
First, since in this example the new BSP will not support EMGD, we will get rid of the
<filename>crownbay.conf</filename> file and then rename the
<filename>crownbay-noemgd.conf</filename> file to <filename>mymachine.conf</filename>.
Much of what we do in the configuration directory is designed to help the Yocto Project
build system work with the new layer and find and use the right software.
The following two commands result in a single machine configuration file named
<filename>mymachine.conf</filename>.
<literallayout class='monospaced'>
$ rm meta-mymachine/conf/machine/crownbay.conf
$ mv meta-mymachine/conf/machine/crownbay-noemgd.conf \
meta-mymachine/conf/machine/mymachine.conf
</literallayout>
</para>
<para>
Next, we need to make changes to the <filename>mymachine.conf</filename> itself.
The only changes we want to make for this example are to the comment lines.
Changing comments, of course, is never strictly necessary, but it's always good form to make
them reflect reality as much as possible.
Here, simply substitute the Crown Bay name with an appropriate name for the BSP
(<filename>mymachine</filename> in this case) and change the description to
something that describes your hardware.
</para>
<para>
Note that inside the <filename>mymachine.conf</filename> is the
<filename>PREFERRED_VERSION_linux-yocto</filename> statement.
This statement identifies the kernel that the BSP is going to use.
In this case, the BSP is using <filename>linux-yocto</filename>, which is the
current Linux Yocto kernel based on the Linux 3.2 release.
</para>
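<para>
For illustration, a minimal sketch of how the top of
<filename>mymachine.conf</filename> might read after these edits follows.
The exact comment fields and the "3.2%" version string are assumptions drawn
from the discussion above, so keep whatever your copy of
<filename>crownbay-noemgd.conf</filename> actually contains and simply adjust
the name and description:
<literallayout class='monospaced'>
#@TYPE: Machine
#@NAME: mymachine
#@DESCRIPTION: Machine configuration for generic atom-pc (netbook) hardware

PREFERRED_VERSION_linux-yocto ?= "3.2%"
</literallayout>
</para>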
<para>
The next configuration file in the new BSP layer we need to edit is
<filename>meta-mymachine/conf/layer.conf</filename>.
This file identifies build information needed for the new layer.
You can see the
"<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-filelayout-layer'>Layer Configuration File</ulink>" section
in The Board Support Packages (BSP) Development Guide for more information on this configuration file.
Basically, we are changing the existing statements to work with our BSP.
</para>
<para>
The file contains these statements that reference the Crown Bay BSP:
<literallayout class='monospaced'>
BBFILE_COLLECTIONS += "crownbay"
BBFILE_PATTERN_crownbay := "^${LAYERDIR}/"
BBFILE_PRIORITY_crownbay = "6"
LAYERDEPENDS_crownbay = "intel"
</literallayout>
</para>
<para>
Simply substitute the machine string name <filename>crownbay</filename>
with the new machine name <filename>mymachine</filename> to get the following:
<literallayout class='monospaced'>
BBFILE_COLLECTIONS += "mymachine"
BBFILE_PATTERN_mymachine := "^${LAYERDIR}/"
BBFILE_PRIORITY_mymachine = "6"
LAYERDEPENDS_mymachine = "intel"
</literallayout>
</para>
</section>
<section id='changing-the-recipes-in-your-bsp'>
<title>Changing the Recipes in Your BSP</title>
<para>
Now we will take a look at the recipes in your new layer.
The standard BSP structure has areas for BSP, graphics, core, and kernel recipes.
When you create a BSP, you use these areas for appropriate recipes and append files.
Recipes take the form of <filename>.bb</filename> files, while append files take
the form of <filename>.bbappend</filename> files.
If you want to leverage the existing recipes the Yocto Project build system uses
but change those recipes, you can use <filename>.bbappend</filename> files.
All new recipes and append files for your layer must go in the layer's
<filename>recipes-bsp</filename>, <filename>recipes-kernel</filename>,
<filename>recipes-core</filename>, and
<filename>recipes-graphics</filename> directories.
</para>
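<para>
To keep the following sections oriented, here is a rough sketch of the parts
of the copied layer that this example touches.
The listing is abbreviated and the exact file set in your copy of
<filename>meta-crownbay</filename> might differ slightly:
<literallayout class='monospaced'>
meta-mymachine/
    conf/layer.conf
    conf/machine/mymachine.conf
    recipes-bsp/formfactor/
    recipes-core/tasks/task-core-tools.bbappend
    recipes-graphics/xorg-xserver/xserver-xf86-config/
    recipes-kernel/linux/linux-yocto_3.2.bbappend
</literallayout>
</para>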
<section id='changing-recipes-bsp'>
<title>Changing&nbsp;&nbsp;<filename>recipes-bsp</filename></title>
<para>
First, let's look at <filename>recipes-bsp</filename>.
For this example we are not adding any new BSP recipes.
We only need to remove the formfactor we do not want and change the name of
the remaining one that doesn't support EMGD.
These commands take care of the <filename>recipes-bsp</filename> recipes:
<literallayout class='monospaced'>
$ rm -rf meta-mymachine/recipes-bsp/formfactor/formfactor/crownbay
$ mv meta-mymachine/recipes-bsp/formfactor/formfactor/crownbay-noemgd/ \
meta-mymachine/recipes-bsp/formfactor/formfactor/mymachine
</literallayout>
</para>
</section>
<section id='changing-recipes-graphics'>
<title>Changing&nbsp;&nbsp;<filename>recipes-graphics</filename></title>
<para>
Now let's look at <filename>recipes-graphics</filename>.
For this example we want to remove anything that supports EMGD and
be sure to rename remaining directories appropriately.
The following commands clean up the <filename>recipes-graphics</filename> directory:
<literallayout class='monospaced'>
$ rm -rf meta-mymachine/recipes-graphics/xorg-xserver/xserver-xf86-config/crownbay
$ mv meta-mymachine/recipes-graphics/xorg-xserver/xserver-xf86-config/crownbay-noemgd \
meta-mymachine/recipes-graphics/xorg-xserver/xserver-xf86-config/mymachine
</literallayout>
</para>
<para>
At this point the <filename>recipes-graphics</filename> directory just has files that
support Video Electronics Standards Association (VESA) graphics modes and not EMGD.
</para>
</section>
<section id='changing-recipes-core'>
<title>Changing&nbsp;&nbsp;<filename>recipes-core</filename></title>
<para>
Now let's look at changes in <filename>recipes-core</filename>.
The file <filename>task-core-tools.bbappend</filename> in
<filename>recipes-core/tasks</filename> appends the similarly named recipe
located in the local <link linkend='yocto-project-files'>Yocto Project Files</link> at
<filename>meta/recipes-core/tasks</filename>.
The append file in our layer right now is Crown Bay-specific and supports
EMGD and non-EMGD.
Here are the contents of the file:
<literallayout class='monospaced'>
RRECOMMENDS_task-core-tools-profile_append_crownbay = " systemtap"
RRECOMMENDS_task-core-tools-profile_append_crownbay-noemgd = " systemtap"
</literallayout>
</para>
<para>
The <filename>RRECOMMENDS</filename> statements list packages that
extend usability.
The first <filename>RRECOMMENDS</filename> statement can be removed, while the
second one can be changed to reflect <filename>meta-mymachine</filename>:
<literallayout class='monospaced'>
RRECOMMENDS_task-core-tools-profile_append_mymachine = " systemtap"
</literallayout>
</para>
</section>
<section id='changing-recipes-kernel'>
<title>Changing&nbsp;&nbsp;<filename>recipes-kernel</filename></title>
<para>
Finally, let's look at <filename>recipes-kernel</filename> changes.
Recall that the BSP uses the <filename>linux-yocto</filename> kernel as determined
earlier in the <filename>mymachine.conf</filename>.
The recipe for that kernel is not located in the
BSP layer but rather in the local Yocto Project files at
<filename>meta/recipes-kernel/linux</filename> and is
named <filename>linux-yocto_3.2.bb</filename>.
The <filename>SRCREV_machine</filename> and <filename>SRCREV_meta</filename>
statements point to the exact commits used by the Yocto Project development team
in their source repositories that identify the right kernel for our hardware.
In other words, the <filename>SRCREV</filename> values are simply Git commit
IDs that identify which commit on each
of the kernel branches (machine and meta) will be checked out and used to build
the kernel.
</para>
<para>
However, in the <filename>meta-mymachine</filename> layer in
<filename>recipes-kernel/linux</filename> resides a <filename>.bbappend</filename>
file named <filename>linux-yocto_3.2.bbappend</filename> that
appends information to the recipe of the same name in <filename>meta/recipes-kernel/linux</filename>.
Thus, the <filename>SRCREV</filename> statements in the append file override
the more general statements found in <filename>meta</filename>.
</para>
<para>
The <filename>SRCREV</filename> statements in the append file currently identify
the kernel that supports the Crown Bay BSP with and without EMGD support.
Here are the statements:
<note>The commit ID strings used in this manual might not match the actual commit
ID strings found in the <filename>linux-yocto_3.2.bbappend</filename> file.
For the example, this difference does not matter.</note>
<literallayout class='monospaced'>
SRCREV_machine_pn-linux-yocto_crownbay ?= \
"211fc7f4d10ec2b82b424286aabbaff9254b7cbd"
SRCREV_meta_pn-linux-yocto_crownbay ?= \
"514847185c78c07f52e02750fbe0a03ca3a31d8f"
SRCREV_machine_pn-linux-yocto_crownbay-noemgd ?= \
"211fc7f4d10ec2b82b424286aabbaff9254b7cbd"
SRCREV_meta_pn-linux-yocto_crownbay-noemgd ?= \
"514847185c78c07f52e02750fbe0a03ca3a31d8f"
</literallayout>
</para>
<para>
You will notice that there are two pairs of <filename>SRCREV</filename> statements.
The top pair identifies the kernel that supports
EMGD, which we don't care about in this example.
The bottom pair identifies the kernel that we will use:
<filename>linux-yocto</filename>.
At this point, though, the unique commit strings are all still associated with
Crown Bay and not <filename>meta-mymachine</filename>.
</para>
<para>
To fix this situation in <filename>linux-yocto_3.2.bbappend</filename>,
we delete the two <filename>SRCREV</filename> statements that support
EMGD (the top pair).
We also change the remaining pair to specify <filename>mymachine</filename>
and insert the commit identifiers to identify the kernel in which we
are interested, which will be based on the <filename>atom-pc-standard</filename>
kernel.
In this case, because we're working with the &DISTRO_NAME; branch of everything, we
need to use the <filename>SRCREV</filename> values for the atom-pc branch
that are associated with the &DISTRO_NAME; release.
To find those values, we need to find the <filename>SRCREV</filename>
values that &DISTRO_NAME; uses for the atom-pc branch, which we find in the
<filename>poky/meta-yocto/recipes-kernel/linux/linux-yocto_3.2.bbappend</filename>
file.
</para>
<para>
The machine <filename>SRCREV</filename> we want is in the
<filename>SRCREV_machine_atom-pc</filename> variable.
The meta <filename>SRCREV</filename> isn't specified in this file, so it must be
specified in the base kernel recipe in the
<filename>poky/meta/recipes-kernel/linux/linux-yocto_3.2.bb</filename>
file, in the <filename>SRCREV_meta</filename> variable found there.
Here are the final <filename>SRCREV</filename> statements:
<literallayout class='monospaced'>
SRCREV_machine_pn-linux-yocto_mymachine ?= \
"f29531a41df15d74be5ad47d958e4117ca9e489e"
SRCREV_meta_pn-linux-yocto_mymachine ?= \
"b14a08f5c7b469a5077c10942f4e1aec171faa9d"
</literallayout>
</para>
<para>
In this example, we're using the <filename>SRCREV</filename> values we
found already captured in the &DISTRO_NAME; release because we're creating a BSP based on
&DISTRO_NAME;.
If, instead, we had based our BSP on the master branches, we would want to use
the most recent <filename>SRCREV</filename> values taken directly from the kernel repo.
We will not be doing that for this example.
However, if you do base a future BSP on master and
if you are familiar with Git repositories, you probably won't have trouble locating the
exact commit strings in the Yocto Project source repositories you need to change
the <filename>SRCREV</filename> statements.
You can find all the <filename>machine</filename> and <filename>meta</filename>
branch points (commits) for the <filename>linux-yocto-3.2</filename> kernel at
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/linux-yocto-3.2'></ulink>.
</para>
<para>
If you need a little more assistance after going to the link then do the following:
<orderedlist>
<listitem><para>Expand the list of branches by clicking <filename>[…]</filename></para></listitem>
<listitem><para>Click on the <filename>standard/default/common-pc/atom-pc</filename>
branch</para></listitem>
<listitem><para>Click on the commit column header to view the top commit</para></listitem>
<listitem><para>Copy the commit string for use in the
<filename>linux-yocto_3.2.bbappend</filename> file</para></listitem>
</orderedlist>
</para>
<para>
For the <filename>SRCREV</filename> statement that points to the <filename>meta</filename>
branch use the same procedure except expand the <filename>meta</filename>
branch in step 2 above.
</para>
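<para>
If you prefer the command line to the web interface, <filename>git ls-remote</filename>
reads the same commit IDs directly.
This is only a sketch; it assumes the <filename>linux-yocto-3.2</filename>
repository is reachable at <filename>git://git.yoctoproject.org/linux-yocto-3.2</filename>
and queries the machine and meta branches discussed above:
<literallayout class='monospaced'>
$ git ls-remote git://git.yoctoproject.org/linux-yocto-3.2 \
      refs/heads/standard/default/common-pc/atom-pc refs/heads/meta
</literallayout>
The first column of each line of output is the commit ID you would use in the
corresponding <filename>SRCREV</filename> statement.
</para>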
<para>
Also in the <filename>linux-yocto_3.2.bbappend</filename> file are
<filename>COMPATIBLE_MACHINE</filename>, <filename>KMACHINE</filename>,
and <filename>KBRANCH</filename> statements.
Two sets of these exist: one set supports EMGD and one set does not.
Because we are not interested in supporting EMGD those three can be deleted.
The remaining three must be changed so that <filename>mymachine</filename> replaces
<filename>crownbay-noemgd</filename> and <filename>crownbay</filename>.
Because we are using the <filename>atom-pc</filename> branch for this new BSP, we can also find
the exact branch we need for the <filename>KMACHINE</filename>
and <filename>KBRANCH</filename> variables in our new BSP from the value
we find in the
<filename>poky/meta-yocto/recipes-kernel/linux/linux-yocto_3.2.bbappend</filename>
file we looked at in a previous step.
In this case, the values we want are in the <filename>KMACHINE_atom-pc</filename>
and <filename>KBRANCH_atom-pc</filename> variables in that file.
Here is the final <filename>linux-yocto_3.2.bbappend</filename> file after all
the edits:
<literallayout class='monospaced'>
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
COMPATIBLE_MACHINE_mymachine = "mymachine"
KMACHINE_mymachine = "atom-pc"
KBRANCH_mymachine = "standard/default/common-pc/atom-pc"
SRCREV_machine_pn-linux-yocto_mymachine ?= \
"f29531a41df15d74be5ad47d958e4117ca9e489e"
SRCREV_meta_pn-linux-yocto_mymachine ?= \
"b14a08f5c7b469a5077c10942f4e1aec171faa9d"
</literallayout>
</para>
</section>
</section>
<section id='bsp-recipe-change-summary'>
<title>BSP Recipe Change Summary</title>
<para>
In summary, the edits to the layer's recipe files result in removal of any files and
statements that do not support your targeted hardware in addition to the inclusion
of any new recipes you might need.
In this example, it was simply a matter of ridding the new layer
<filename>meta-mymachine</filename> of any code that supported the EMGD features
and making sure we were identifying the kernel that supports our example, which
is the <filename>atom-pc-standard</filename> kernel.
We did not introduce any new recipes to the layer.
</para>
<para>
Finally, it is also important to update the layer's <filename>README</filename>
file so that the information in it reflects your BSP.
</para>
</section>
</section>
<section id='preparing-for-the-build-app'>
<title>Preparing for the Build</title>
<para>
To get ready to build your image that uses the new layer you need to do the following:
<orderedlist>
<listitem><para>Get the environment ready for the build by sourcing the environment
script.
The environment script is in the top-level of the local Yocto Project files
directory structure.
The script has the string
<filename>init-build-env</filename> in the file's name.
For this example, the following command gets the build environment ready:
<literallayout class='monospaced'>
$ source oe-init-build-env yocto-build
</literallayout>
When you source the script, a build directory is created in the current
working directory.
In our example we were in the <filename>poky</filename> directory.
Thus, entering the previous command created the <filename>yocto-build</filename> directory.
If you do not provide a name for the build directory it defaults to
<filename>build</filename>.
The <filename>yocto-build</filename> directory contains a
<filename>conf</filename> directory that has
two configuration files you will need to check: <filename>bblayers.conf</filename>
and <filename>local.conf</filename>.</para></listitem>
<listitem><para>Check and edit the resulting <filename>local.conf</filename> file.
This file minimally identifies the machine for which to build the image by
configuring the <filename>MACHINE</filename> variable.
For this example, you must set the variable to <filename>mymachine</filename> as follows:
<literallayout class='monospaced'>
MACHINE ??= "mymachine"
</literallayout>
You should also be sure any other variables in which you are interested are set.
Some variables to consider are <filename>BB_NUMBER_THREADS</filename>
and <filename>PARALLEL_MAKE</filename>, both of which can greatly reduce your build time
if your development system supports multiple cores.
For development systems that support multiple cores, a good rule of thumb is to set
both the <filename>BB_NUMBER_THREADS</filename> and <filename>PARALLEL_MAKE</filename>
variables to twice the number of cores your system supports.</para></listitem>
<listitem><para>Update the <filename>bblayers.conf</filename> file so that it includes
both the path to your new BSP layer and the path to the
<filename>meta-intel</filename> layer.
In this example, you need to include both these paths as part of the
<filename>BBLAYERS</filename> variable (a sketch of the resulting configuration
follows this list):
<literallayout class='monospaced'>
$HOME/poky/meta-intel
$HOME/poky/meta-intel/meta-mymachine
</literallayout></para></listitem>
</orderedlist>
</para>
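<para>
As a point of reference, here is a sketch of the relevant lines of the two
configuration files for this example.
The values assume a hypothetical four-core development machine, and the layer
paths follow the <filename>$HOME/poky</filename> notation used in the list
above; adjust both to match your system:
<literallayout class='monospaced'>
# conf/local.conf (relevant lines only)
MACHINE ??= "mymachine"
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"

# conf/bblayers.conf (relevant lines only)
BBLAYERS ?= " \
    $HOME/poky/meta \
    $HOME/poky/meta-yocto \
    $HOME/poky/meta-intel \
    $HOME/poky/meta-intel/meta-mymachine \
    "
</literallayout>
</para>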
<para>
The appendix
<ulink url='&YOCTO_DOCS_REF_URL;#ref-variables-glos'>
Reference: Variables Glossary</ulink> in the Yocto Project Reference Manual has more information
on configuration variables.
</para>
</section>
<section id='building-the-image-app'>
<title>Building and Booting the Image</title>
<para>
To build the image for our <filename>meta-mymachine</filename> BSP enter the following command
from the same shell from which you ran the setup script.
You should run the <filename>bitbake</filename> command without any intervening shell commands.
For example, moving your working directory around could cause problems.
Here is the command for this example:
<literallayout class='monospaced'>
$ bitbake -k core-image-sato
</literallayout>
</para>
<para>
This command specifies an image that has Sato support and that can be run from a USB device or
from a CD without having to first install anything.
The build process takes significant time and includes thousands of tasks, which are reported
at the console.
If the build results in any type of error you should check for misspellings in the
files you changed or problems with your host development environment such as missing packages.
</para>
<para>
Finally, once you have an image, you can try booting it from a device
(e.g. a USB device).
To prepare a bootable USB device, insert a USB flash drive into your build system and
copy the <filename>.hddimg</filename> file, located in the
<filename>poky/yocto-build/tmp/deploy/images</filename>
directory after a successful build, to the flash drive.
Assuming the USB flash drive takes device <filename>/dev/sdf</filename>,
use <filename>dd</filename> to copy the live image to it.
For example:
<literallayout class='monospaced'>
# dd if=core-image-sato-mymachine-20111101223904.hddimg of=/dev/sdf
# sync
# eject /dev/sdf
</literallayout>
You should now have a bootable USB flash device.
</para>
<para>
Insert the device
into a bootable USB socket on the target, and power it on.
The system should boot to the Sato graphical desktop.
<footnote><para>Because
this new image is not in any way tailored to the system you're
booting it on, which is assumed to be some sort of atom-pc (netbook) system for this
example, it might not be completely functional though it should at least boot to a text
prompt.
Specifically, it might fail to boot into graphics without some tweaking.
If this ends up being the case, a possible next step would be to replace the
<filename>mymachine.conf</filename>
contents with the contents of <filename>atom-pc.conf</filename> and replace
<filename>xorg.conf</filename> with the atom-pc <filename>xorg.conf</filename>
found in <filename>meta-yocto</filename>, and see if it fares any better.
In any case, following the previous steps will give you a buildable image that
will probably boot on most systems.
Getting things working like you want
them to for your hardware will normally require some amount of experimentation with
configuration settings.</para></footnote>
</para>
<para>
For reference, the sato image produced by the previous steps for &DISTRO_NAME;
should look like the following in terms of size.
If your sato image is much different from this,
you probably made a mistake in one of the above steps:
<literallayout class='monospaced'>
260538368 2012-04-27 01:44 core-image-sato-mymachine-20120427025051.hddimg
</literallayout>
<note>The previous instructions are also present in the README that was copied
from meta-crownbay, which should also be updated to reflect the specifics of your
new BSP.
That file and the <filename>README.hardware</filename> file in the top-level
<filename>poky</filename> directory
also provide some suggestions for things to try if booting fails and produces
strange error messages.</note>
</para>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

File diff suppressed because it is too large.

View File

@@ -18,14 +18,13 @@
sources where you can find more detail.
For example, detailed information on Git, repositories and open source in general
can be found in many places.
Another example is how to get set up to use the Yocto Project, which our Yocto Project
Quick Start covers.
Another example is how to get set up to use the Yocto Project, which our Yocto Project Quick Start covers.
</para>
<para>
The Yocto Project Development Manual, however, does provide detailed examples
on how to change the kernel source code, reconfigure the kernel, and develop
an application using the popular <trademark class='trade'>Eclipse</trademark> IDE.
The Yocto Project Development Manual, however, does provide detailed examples on how to create a
Board Support Package (BSP), change the kernel source code, and re-configure the kernel.
You can find this information in the appendices of the manual.
</para>
</section>
@@ -44,8 +43,14 @@
<listitem><para>Development case overviews for both system development and user-space
applications.</para></listitem>
<listitem><para>An overview and understanding of the emulation environment used with
the Yocto Project - the Quick EMUlator (QEMU).</para></listitem>
<listitem><para>An understanding of basic kernel architecture and concepts.</para></listitem>
the Yocto Project (QEMU).</para></listitem>
<!-- <listitem><para>A discussion of target-level analysis techniques, tools, tips,
and tricks.</para></listitem>
<listitem><para>Considerations for deploying your final product.</para></listitem> -->
<listitem><para>An understanding of basic kernel architecture and
concepts.</para></listitem>
<!-- <listitem><para>Information that will help you migrate an existing project to the
Yocto Project development environment.</para></listitem> -->
<listitem><para>Many references to other sources of related information.</para></listitem>
</itemizedlist>
</para>
@@ -59,14 +64,14 @@
<itemizedlist>
<listitem><para>Step-by-step instructions if those instructions exist in other Yocto
Project documentation.
For example, the Yocto Project Application Developer's Guide contains detailed
instructions in the
"<ulink url='&YOCTO_DOCS_ADT_URL;#installing-the-adt'>Installing the ADT and Toolchains</ulink>"
section, which explains how to set up a cross-development environment.</para></listitem>
For example, the Application Development Toolkit (ADT) User's Guide contains detailed
instructions on how to obtain and configure the
<trademark class='trade'>Eclipse</trademark> Yocto Plug-in.</para></listitem>
<listitem><para>Reference material.
This type of material resides in an appropriate reference manual.
For example, system variables are documented in the
<ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>.</para></listitem>
<ulink url='&YOCTO_DOCS_REF_URL;'>
Yocto Project Reference Manual</ulink>.</para></listitem>
<listitem><para>Detailed public information that is not specific to the Yocto Project.
For example, exhaustive information on how to use Git is covered better through the
Internet than in this manual.</para></listitem>
@@ -86,36 +91,40 @@
</emphasis> The home page for the Yocto Project provides lots of information on the project
as well as links to software and documentation.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>:</emphasis> This short document lets you get started
<ulink url='&YOCTO_DOCS_QS_URL;'>
The Yocto Project Quick Start</ulink>:</emphasis> This short document lets you get started
with the Yocto Project quickly and start building an image.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>:</emphasis> This manual is a reference
guide to the OpenEmbedded build system known as "Poky."
<ulink url='&YOCTO_DOCS_REF_URL;'>
The Yocto Project Reference Manual</ulink>:</emphasis> This manual is a reference
guide to the Yocto Project build component known as "Poky."
The manual also contains a reference chapter on Board Support Package (BSP)
layout.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_DOCS_ADT_URL;'>Yocto Project Application Developer's Guide</ulink>:</emphasis>
This guide provides information that lets you get going with the Application
Development Toolkit (ADT) and stand-alone cross-development toolchains to
<ulink url='&YOCTO_DOCS_ADT_URL;'>
The Yocto Project Application Development Toolkit (ADT) User's Guide</ulink>:</emphasis>
This guide provides information that lets you get going with the ADT to
develop projects using the Yocto Project.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>:</emphasis>
<ulink url='&YOCTO_DOCS_BSP_URL;'>
The Yocto Project Board Support Package (BSP) Developer's Guide</ulink>:</emphasis>
This guide defines the structure for BSP components.
Having a commonly understood structure encourages standardization.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_DOCS_KERNEL_URL;'>Yocto Project Kernel Architecture and Use Manual</ulink>:</emphasis>
<ulink url='&YOCTO_DOCS_KERNEL_URL;'>
The Yocto Project Kernel Architecture and Use Manual</ulink>:</emphasis>
This manual describes the architecture of the Yocto Project kernel and provides
some work flow examples.</para></listitem>
<listitem><para><emphasis>
<ulink url='http://www.youtube.com/watch?v=3ZlOu-gLsh0'>
Eclipse IDE Yocto Plug-in</ulink>:</emphasis> A step-by-step instructional video that
Yocto Eclipse Plug-in</ulink>:</emphasis> A step-by-step instructional video that
demonstrates how an application developer uses Yocto Plug-in features within
the Eclipse IDE.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_WIKI_URL;/wiki/FAQ'>FAQ</ulink>:</emphasis>
A list of commonly asked questions and their answers.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_HOME_URL;/download/yocto/yocto-project-&DISTRO;-release-notes-poky-&POKYVERSION;'>
<ulink url='&YOCTO_HOME_URL;/download/yocto/yocto-project-1.1-release-notes-poky-&POKYVERSION;'>
Release Notes</ulink>:</emphasis> Features, updates and known issues for the current
release of the Yocto Project.</para></listitem>
<listitem><para><emphasis>
@@ -124,11 +133,8 @@
Hob's primary goal is to enable a user to perform common tasks more easily.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_HOME_URL;/documentation/build-appliance'>
Build Appliance</ulink>:</emphasis> A bootable custom embedded Linux image you can
either build using a non-Linux development system (VMware applications) or download
from the Yocto Project website.
See the <ulink url='&YOCTO_HOME_URL;/documentation/build-appliance'>Build Appliance</ulink>
page for more information.</para></listitem>
<listitem><para><emphasis>
<ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>:</emphasis>
The bug tracking application the Yocto Project uses.
@@ -143,40 +149,38 @@
<listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/poky'></ulink> for a
Yocto Project Discussions mailing list about the Poky build system.</para></listitem>
<listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto-announce'></ulink>
for a mailing list to receive official Yocto Project announcements about developments
as well as Yocto Project milestones.</para></listitem>
</itemizedlist></para></listitem>
<listitem><para><emphasis>Internet Relay Chat (IRC):</emphasis>
Two IRC channels on freenode are available
for Yocto Project and Poky discussions: <filename>#yocto</filename> and
<filename>#poky</filename>, respectively.</para></listitem>
<listitem><para><emphasis>
<ulink url='&OH_HOME_URL;'>OpenedHand</ulink>:</emphasis>
The company that initially developed the Poky project, which is the basis
for the OpenEmbedded build system used by the Yocto Project.
OpenedHand was acquired by Intel Corporation in 2008.</para></listitem>
<listitem><para><emphasis>
<ulink url='http://www.intel.com/'>Intel Corporation</ulink>:</emphasis>
A multinational semiconductor chip manufacturer whose Software and
Services Group created and supports the Yocto Project.
Intel acquired OpenedHand in 2008.</para></listitem>
<listitem><para><emphasis>
<ulink url='&OE_HOME_URL;'>OpenEmbedded</ulink>:</emphasis>
The build system used by the Yocto Project.
This project is the upstream, generic, embedded distribution from which the Yocto
Project derives its build system (Poky) and to which it contributes.</para></listitem>
<listitem><para><emphasis>
<ulink url='http://developer.berlios.de/projects/bitbake/'>
BitBake</ulink>:</emphasis> The tool used by the OpenEmbedded build system
to process project metadata.</para></listitem>
<listitem><para><emphasis>
BitBake User Manual:</emphasis>
A comprehensive guide to the BitBake tool.
If you want information on BitBake, see the user manual included in the
<filename>bitbake/doc/manual</filename> directory of the
<link linkend='source-directory'>Source Directory</link>.</para></listitem>
<listitem><para><emphasis>
<ulink url='http://pimlico-project.org/'>Pimlico</ulink>:</emphasis>
A suite of lightweight Personal Information Management (PIM) applications designed
primarily for handheld and mobile devices.</para></listitem>
<listitem><para><emphasis>
<ulink url='http://wiki.qemu.org/Index.html'>Quick EMUlator (QEMU)</ulink>:
</emphasis> An open-source machine emulator and virtualizer.</para></listitem>
</itemizedlist>
</para>


@@ -1,553 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<appendix id='dev-manual-kernel-appendix'>
<title>Kernel Modification Example</title>
<para>
Kernel modification involves changing or adding configurations to an existing kernel,
changing or adding recipes to the kernel that are needed to support specific hardware features,
or even altering the source code itself.
This appendix presents simple examples that modify the kernel source code,
change the kernel configuration, and add a kernel source recipe.
<note>
You can use the <filename>yocto-kernel</filename> script
found in the <link linkend='source-directory'>Source Directory</link>
under <filename>scripts</filename> to manage kernel patches and configuration.
See the "<ulink url='&YOCTO_DOCS_BSP_URL;#managing-kernel-patches-and-config-items-with-yocto-kernel'>Managing kernel Patches and Config Items with yocto-kernel</ulink>"
section in the Yocto Project Board Support Package (BSP) Developer's Guide for
more information.</note>
</para>
<section id='modifying-the-kernel-source-code'>
<title>Modifying the Kernel Source Code</title>
<para>
This example adds some simple QEMU emulator console output at boot time by
adding <filename>printk</filename> statements to the kernel's
<filename>calibrate.c</filename> source code file.
Booting the modified image causes the added messages to appear on the emulator's
console.
</para>
<section id='understanding-the-files-you-need'>
<title>Understanding the Files You Need</title>
<para>
Before you modify the kernel, you need to know what Git repositories and file
structures you need.
Briefly, you need the following:
<itemizedlist>
<listitem><para>A local
<link linkend='source-directory'>Source Directory</link> for the
poky Git repository</para></listitem>
<listitem><para>Local copies of the
<link linkend='poky-extras-repo'><filename>poky-extras</filename></link>
Git repository placed within the Source Directory.</para></listitem>
<listitem><para>A bare clone of the
<link linkend='local-kernel-files'>Yocto Project Kernel</link> upstream Git
repository to which you want to push your modifications.
</para></listitem>
<listitem><para>A copy of that bare clone in which you make your source
modifications</para></listitem>
</itemizedlist>
</para>
<para>
The following figure summarizes these four areas.
Within each rectangle that represents a data structure, a
host development directory pathname appears at the
lower left-hand corner of the box.
These pathnames are the locations used in this example.
The figure also provides key statements and commands used during the kernel
modification process:
</para>
<para>
<imagedata fileref="figures/kernel-example-repos-generic.png" width="7in" depth="5in"
align="center" scale="100" />
</para>
<para>
Here is a brief description of the four areas:
<itemizedlist>
<listitem><para><emphasis>Local Source Directory:</emphasis>
This area contains all the metadata that supports building images
using the OpenEmbedded build system.
In this example, the
<link linkend='source-directory'>Source Directory</link> also
contains the
<link linkend='build-directory'>Build Directory</link>,
which contains the configuration directory
that lets you control the build.
Also in this example, the Source Directory contains local copies of the
<filename>poky-extras</filename> Git repository.</para>
<para>See the bulleted item
"<link linkend='local-yp-release'>Yocto Project Release</link>"
for information on how to get these files on your local system.</para></listitem>
<listitem><para><emphasis>Local copies of the&nbsp;<filename>poky-extras</filename>&nbsp;Git Repository:</emphasis>
This area contains the <filename>meta-kernel-dev</filename> layer,
which is where you make changes that append the kernel build recipes.
You edit <filename>.bbappend</filename> files to locate your
local kernel source files and to identify the kernel being built.
This Git repository is a gathering place for extensions to the Yocto Project
(or really any) kernel recipes that facilitate the creation and development
of kernel features, BSPs or configurations.</para>
<para>See the bulleted item
"<link linkend='poky-extras-repo'>The
<filename>poky-extras</filename> Git Repository</link>"
for information on how to get these files.</para></listitem>
<listitem><para><emphasis>Bare Clone of the Yocto Project kernel:</emphasis>
This bare Git repository tracks the upstream Git repository of the Linux
Yocto kernel source code you are changing.
When you modify the kernel you must work through a bare clone.
All source code changes you make to the kernel must be committed and
pushed to the bare clone using Git commands.
As mentioned, the <filename>.bbappend</filename> file in the
<filename>poky-extras</filename> repository points to the bare clone
so that the build process can locate the locally changed source files.</para>
<para>See the bulleted item
"<link linkend='local-kernel-files'>Yocto Project Kernel</link>"
for information on how to set up the bare clone.
</para></listitem>
<listitem><para><emphasis>Copy of the Yocto Project Kernel Bare Clone:</emphasis>
This Git repository contains the actual source files that you modify.
Any changes you make to files in this location need to ultimately be pushed
to the bare clone using the <filename>git push</filename> command.</para>
<para>See the bulleted item
"<link linkend='local-kernel-files'>Yocto Project Kernel</link>"
for information on how to set up the bare clone.
<note>Typically, Git workflows follow a scheme where changes made to a local area
are pulled into a Git repository.
However, because the <filename>git pull</filename> command does not work
with bare clones, this workflow pushes changes to the
repository even though you could use other more complicated methods to
get changes into the bare clone.</note>
</para></listitem>
</itemizedlist>
</para>
</section>
<section id='setting-up-the-local-yocto-project-files-git-repository'>
<title>Setting Up the Local Source Directory</title>
<para>
You can set up the
<link linkend='source-directory'>Source Directory</link>
through tarball extraction or by
cloning the <filename>poky</filename> Git repository.
This example uses <filename>poky</filename> as the root directory of the
local Source Directory.
See the bulleted item
"<link linkend='local-yp-release'>Yocto Project Release</link>"
for information on how to get these files.
</para>
<para>
Once you have the Source Directory set up,
you have many development branches from which you can work.
From inside the local repository you can see the branch names and the tag names used
in the upstream Git repository by using either of the following commands:
<literallayout class='monospaced'>
$ cd poky
$ git branch -a
$ git tag -l
</literallayout>
This example uses the Yocto Project &DISTRO; Release code named "&DISTRO_NAME;",
which maps to the <filename>&DISTRO_NAME;</filename> branch in the repository.
The following commands create and checkout the local <filename>&DISTRO_NAME;</filename>
branch:
<literallayout class='monospaced'>
$ git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;
Branch &DISTRO_NAME; set up to track remote branch &DISTRO_NAME; from origin.
Switched to a new branch '&DISTRO_NAME;'
</literallayout>
</para>
</section>
<section id='setting-up-the-poky-extras-git-repository'>
<title>Setting Up the Local poky-extras Git Repository</title>
<para>
This example creates a local copy of the <filename>poky-extras</filename> Git
repository inside the <filename>poky</filename> Source Directory.
See the bulleted item "<link linkend='poky-extras-repo'>The
<filename>poky-extras</filename> Git Repository</link>"
for information on how to set up a local copy of the
<filename>poky-extras</filename> repository.
</para>
<para>
Because this example uses the Yocto Project &DISTRO; Release code
named "&DISTRO_NAME;", which maps to the <filename>&DISTRO_NAME;</filename>
branch in the repository, you need to be sure you are using that
branch for <filename>poky-extras</filename>.
The following commands create and checkout the local
branch you are using for the <filename>&DISTRO_NAME;</filename>
branch:
<literallayout class='monospaced'>
$ cd ~/poky/poky-extras
$ git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;
Branch &DISTRO_NAME; set up to track remote branch &DISTRO_NAME; from origin.
Switched to a new branch '&DISTRO_NAME;'
</literallayout>
</para>
</section>
<section id='setting-up-the-bare-clone-and-its-copy'>
<title>Setting Up the Bare Clone and its Copy</title>
<para>
This example modifies the <filename>linux-yocto-3.4</filename> kernel.
Thus, you need to create a bare clone of that kernel and then make a copy of the
bare clone.
See the bulleted item
"<link linkend='local-kernel-files'>Yocto Project Kernel</link>"
for information on how to do that.
</para>
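<para>
As a rough sketch only, the two repositories could be created with standard Git commands
similar to the following.
The upstream URL shown here is an assumption; use the URL given in the
"<link linkend='local-kernel-files'>Yocto Project Kernel</link>" bulleted item:
<literallayout class='monospaced'>
$ cd ~
$ git clone --bare git://git.yoctoproject.org/linux-yocto-3.4 linux-yocto-3.4.git
$ git clone linux-yocto-3.4.git my-linux-yocto-3.4-work
</literallayout>
</para>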
<para>
The bare clone exists for the kernel build tools and simply as the receiving end
of <filename>git push</filename>
commands after you make edits and commits inside the copy of the clone.
The copy (<filename>my-linux-yocto-3.4-work</filename> in this example) has to have
a local branch created and checked out for your work.
This example uses <filename>standard-common-pc-base</filename> as the local branch.
The following commands create and checkout the branch:
<literallayout class='monospaced'>
$ cd ~/my-linux-yocto-3.4-work
$ git checkout -b standard-common-pc-base origin/standard/common-pc/base
Branch standard-common-pc-base set up to track remote branch
standard/common-pc/base from origin.
Switched to a new branch 'standard-common-pc-base'
</literallayout>
</para>
</section>
<section id='building-and-booting-the-default-qemu-kernel-image'>
<title>Building and Booting the Default QEMU Kernel Image</title>
<para>
Before we make changes to the kernel source files, this example first builds the
default image and then boots it inside the QEMU emulator.
<note>
Because a full build can take hours, you should check two variables in the
<filename>build</filename> directory that is created after you source the
<filename>&OE_INIT_FILE;</filename> script.
You can find these variables
<filename>BB_NUMBER_THREADS</filename> and <filename>PARALLEL_MAKE</filename>
in the <filename>build/conf</filename> directory in the
<filename>local.conf</filename> configuration file.
By default, these variables are commented out.
If your host development system supports multi-core and multi-thread capabilities,
you can uncomment these statements and set the variables to significantly shorten
the full build time.
As a guideline, set both <filename>BB_NUMBER_THREADS</filename> and
<filename>PARALLEL_MAKE</filename> to twice the number
of cores your machine supports.
</note>
The following two commands <filename>source</filename> the build environment setup script
and build the default <filename>qemux86</filename> image.
If necessary, the script creates the build directory:
<literallayout class='monospaced'>
$ cd ~/poky
$ source &OE_INIT_FILE;
You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to use a
different MACHINE (target hardware) or enable parallel build options to take
advantage of multiple cores for example. See the file for more information as
common configuration options are commented.
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
http://yoctoproject.org/documentation
For more information about OpenEmbedded see their website:
http://www.openembedded.org/
You had no conf/bblayers.conf file. The configuration file has been created for
you with some default values. To add additional metadata layers into your
configuration please add entries to this file.
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
http://yoctoproject.org/documentation
For more information about OpenEmbedded see their website:
http://www.openembedded.org/
### Shell environment set up for builds. ###
You can now run 'bitbake &lt;target&gt;'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-toolchain-sdk
adt-installer
meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'
</literallayout>
</para>
<para>
The following <filename>bitbake</filename> command starts the build:
<literallayout class='monospaced'>
$ bitbake -k core-image-minimal
</literallayout>
<note>Be sure to check the settings in the <filename>local.conf</filename>
before starting the build.</note>
</para>
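<para>
As an illustrative sketch only, on a host with four cores the relevant lines in
<filename>conf/local.conf</filename> might be uncommented and set as follows.
Adjust the values to match your own hardware:
<literallayout class='monospaced'>
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"
</literallayout>
</para>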
<para>
After the build completes, you can start the QEMU emulator using the resulting image
<filename>qemux86</filename> as follows:
<literallayout class='monospaced'>
$ runqemu qemux86
</literallayout>
</para>
<para>
As the image boots in the emulator, console messages and status output appear
across the terminal window.
Because the output scrolls by quickly, it is difficult to read.
To examine the output, you log into the system using the
login <filename>root</filename> with no password.
Once you are logged in, issue the following command to scroll through the
console output:
<literallayout class='monospaced'>
# dmesg | less
</literallayout>
</para>
<para>
Take note of the output as you will want to look for your inserted print command output
later in the example.
</para>
</section>
<section id='changing-the-source-code-and-pushing-it-to-the-bare-clone'>
<title>Changing the Source Code and Pushing it to the Bare Clone</title>
<para>
The file you change in this example is named <filename>calibrate.c</filename>
and is located in the <filename>my-linux-yocto-3.4-work</filename> Git repository
(the copy of the bare clone) in <filename>init</filename>.
This example simply inserts several <filename>printk</filename> statements
at the beginning of the <filename>calibrate_delay</filename> function.
</para>
<para>
Here is the unaltered code at the start of this function:
<literallayout class='monospaced'>
void __cpuinit calibrate_delay(void)
{
unsigned long lpj;
static bool printed;
int this_cpu = smp_processor_id();
if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
.
.
.
</literallayout>
</para>
<para>
Here is the altered code showing five new <filename>printk</filename> statements
near the top of the function:
<literallayout class='monospaced'>
void __cpuinit calibrate_delay(void)
{
unsigned long lpj;
static bool printed;
int this_cpu = smp_processor_id();
printk("*************************************\n");
printk("* *\n");
printk("* HELLO YOCTO KERNEL *\n");
printk("* *\n");
printk("*************************************\n");
if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
.
.
.
</literallayout>
</para>
<para>
After making and saving your changes, you need to stage them for the push.
The following Git commands are one method of staging and committing your changes:
<literallayout class='monospaced'>
$ git add calibrate.c
$ git commit --signoff
</literallayout>
</para>
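<para>
If you want to confirm what has been committed before pushing, standard Git commands
such as the following (not part of the original example) can be used:
<literallayout class='monospaced'>
$ git status
$ git log --oneline -1
</literallayout>
</para>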
<para>
Once the source code has been modified, you need to use Git to push the changes to
the bare clone.
If you do not push the changes, then the OpenEmbedded build system will not pick
up the changed source files.
</para>
<para>
The following command pushes the changes to the bare clone:
<literallayout class='monospaced'>
$ git push origin standard-common-pc-base:standard/default/common-pc/base
</literallayout>
</para>
</section>
<section id='changing-build-parameters-for-your-build'>
<title>Changing Build Parameters for Your Build</title>
<para>
At this point, the source has been changed and pushed.
The example now defines some variables used by the OpenEmbedded build system
to locate your kernel source.
You essentially need to identify where to find the kernel recipe and the changed source code.
You also need to be sure some basic configurations are in place that identify the
type of machine you are building and to help speed up the build should your host support
multiple-core and thread capabilities.
</para>
<para>
Do the following to make sure the build parameters are set up for the example.
Once you set up these build parameters, they do not have to change unless you
change the target architecture of the machine you are building or you move
the bare clone, copy of the clone, or the <filename>poky-extras</filename> repository:
<itemizedlist>
<listitem><para><emphasis>Build for the Correct Target Architecture:</emphasis> The
<filename>local.conf</filename> file in the build directory defines the build's
target architecture.
By default, <filename>MACHINE</filename> is set to
<filename>qemux86</filename>, which specifies a 32-bit
<trademark class='registered'>Intel</trademark> Architecture
target machine suitable for the QEMU emulator.
In this example, <filename>MACHINE</filename> is correctly configured.
</para></listitem>
<listitem><para><emphasis>Optimize Build Time:</emphasis> Also in the
<filename>local.conf</filename> file are two variables that can speed your
build time if your host supports multi-core and multi-thread capabilities:
<filename>BB_NUMBER_THREADS</filename> and <filename>PARALLEL_MAKE</filename>.
If the host system has multiple cores then you can optimize build time
by setting both these variables to twice the number of
cores.</para></listitem>
<listitem><para><emphasis>Identify Your <filename>meta-kernel-dev</filename>
Layer:</emphasis> The <filename>BBLAYERS</filename> variable in the
<filename>bblayers.conf</filename> file found in the
<filename>poky/build/conf</filename> directory needs to have the path to your local
<filename>meta-kernel-dev</filename> layer.
By default, the <filename>BBLAYERS</filename> variable contains paths to
<filename>meta</filename> and <filename>meta-yocto</filename> in the
<filename>poky</filename> Git repository.
Add the path to your <filename>meta-kernel-dev</filename> location.
Be sure to substitute your user information in the statement.
Here is an example:
<literallayout class='monospaced'>
BBLAYERS = " \
/home/scottrif/poky/meta \
/home/scottrif/poky/meta-yocto \
/home/scottrif/poky/meta-yocto-bsp \
/home/scottrif/poky/poky-extras/meta-kernel-dev \
"
</literallayout></para></listitem>
<listitem><para><emphasis>Identify Your Source Files:</emphasis> In the
<filename>linux-yocto_3.4.bbappend</filename> file located in the
<filename>poky-extras/meta-kernel-dev/recipes-kernel/linux</filename>
directory, you need to identify the location of the
local source code, which in this example is the bare clone named
<filename>linux-yocto-3.4.git</filename>.
To do this, set the <filename>KSRC_linux_yocto_3_4</filename> variable to point to your
local <filename>linux-yocto-3.4.git</filename> Git repository by adding the
following statement.
Also, be sure the <filename>SRC_URI</filename> variable is pointing to
your kernel source files by removing the comment.
Finally, be sure to substitute your user information in the statement:
<literallayout class='monospaced'>
KSRC_linux_yocto_3_4 ?= "/home/scottrif/linux-yocto-3.4.git"
SRC_URI = "git://${KSRC_linux_yocto_3_4};protocol=file;nocheckout=1;branch=${KBRANCH},meta;name=machine,meta"
</literallayout></para></listitem>
</itemizedlist>
</para>
<note>
<para>Before attempting to build the modified kernel, there is one more set of changes you
need to make in the <filename>meta-kernel-dev</filename> layer.
Because all the kernel <filename>.bbappend</filename> files are parsed during the
build process regardless of whether you are using them or not, you should either
comment out the <filename>COMPATIBLE_MACHINE</filename> statements in all
unused <filename>.bbappend</filename> files, or simply remove (or rename) all the files
except the one you are using for the build
(i.e. <filename>linux-yocto_3.4.bbappend</filename> in this example).</para>
<para>If you do not make one of these two adjustments, your machine will be compatible
with all the kernel recipes in the <filename>meta-kernel-dev</filename> layer.
When your machine is compatible with all the kernel recipes, the build attempts
to build all kernels in the layer.
You could end up with build errors blocking your work.</para>
</note>
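<para>
For illustration, commenting out such a statement in an unused append file might look
like the following.
The exact variable value differs from recipe to recipe, so treat this purely as a sketch:
<literallayout class='monospaced'>
#COMPATIBLE_MACHINE = "qemux86"
</literallayout>
</para>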
</section>
<section id='building-and-booting-the-modified-qemu-kernel-image'>
<title>Building and Booting the Modified QEMU Kernel Image</title>
<para>
Next, you need to build the modified image.
Do the following:
<orderedlist>
<listitem><para>Your environment should be set up since you previously sourced
the <filename>&OE_INIT_FILE;</filename> script.
If it isn't, source the script again from <filename>poky</filename>.
<literallayout class='monospaced'>
$ cd ~/poky
$ source &OE_INIT_FILE;
</literallayout>
</para></listitem>
<listitem><para>Be sure old images are cleaned out by running the
<filename>cleanall</filename> BitBake task as follows from your build directory:
<literallayout class='monospaced'>
$ bitbake -c cleanall linux-yocto
</literallayout></para>
<para><note>Never remove any files by hand from the <filename>tmp/deploy</filename>
directory inside the build directory.
Always use the BitBake <filename>cleanall</filename> task to clear
out previous builds.</note></para></listitem>
<listitem><para>Next, build the kernel image using this command:
<literallayout class='monospaced'>
$ bitbake -k core-image-minimal
</literallayout></para></listitem>
<listitem><para>Finally, boot the modified image in the QEMU emulator
using this command:
<literallayout class='monospaced'>
$ runqemu qemux86
</literallayout></para></listitem>
</orderedlist>
</para>
<para>
Log into the machine using <filename>root</filename> with no password and then
use the following shell command to scroll through the console's boot output.
<literallayout class='monospaced'>
# dmesg | less
</literallayout>
</para>
<para>
You should see the results of your <filename>printk</filename> statements
as part of the output.
</para>
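<para>
If the new messages scroll past too quickly, a simple filter (not part of the original
example) can isolate the banner:
<literallayout class='monospaced'>
# dmesg | grep -A 2 "HELLO YOCTO KERNEL"
</literallayout>
</para>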
</section>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->


@@ -23,7 +23,7 @@
Open source philosophy is characterized by software development directed by peer production
and collaboration through an active community of developers.
Contrast this to the more standard centralized development models used by commercial software
companies where a finite set of developers produces a product for sale using a defined set
of procedures that ultimately result in an end product whose architecture and source material
are closed to the public.
</para>
@@ -55,7 +55,7 @@
</section>
<section id="usingpoky-changes-collaborate">
<title>Using the Yocto Project in a Team Environment</title>
<para>
It might not be immediately clear how you can use the Yocto Project in a team environment,
@@ -75,12 +75,12 @@
Experience shows that buildbot is a good fit for this role.
What works well is to configure buildbot to make two types of builds:
incremental and full (from scratch).
See <ulink url='http://autobuilder.yoctoproject.org:8010/'>the buildbot for the
Yocto Project</ulink> for an example implementation that uses buildbot.
</para>
<para>
You can tie an incremental build to a commit hook that triggers the build
each time a commit is made to the metadata.
This practice results in useful acid tests that determine whether a given commit
breaks the build in some serious way.
@@ -97,20 +97,19 @@
<para>
Most teams have many pieces of software undergoing active development at any given time.
You can derive large benefits by putting these pieces under the control of a source
control system that is compatible (i.e. Git or Subversion (SVN)) with the OpenEmbedded
build system that the Yocto Project uses.
You can then set the autobuilder to pull the latest revisions of the packages
and test the latest commits by the builds.
This practice quickly highlights issues.
The build system easily supports testing configurations that use both a
stable known good revision and a floating revision.
The build system can also take just the changes from specific source control branches.
This capability allows you to track and test specific changes.
</para>
<para>
Perhaps the hardest part of setting this up is defining the software project or
the metadata policies that surround the different source control systems.
Of course circumstances will be different in each case.
However, this situation reveals one of the Yocto Project's advantages -
the system itself does not
@@ -130,7 +129,7 @@
From the interface, you can click on any particular item in the "Name" column and
see the URL at the bottom of the page that you need to set up a Git repository for
that particular item.
Having a local Git repository of the Source Directory (poky) allows you to
make changes, contribute to the history, and ultimately enhance the Yocto Project's
tools, Board Support Packages, and so forth.
</para>
@@ -148,8 +147,8 @@
<ulink url='&YOCTO_HOME_URL;/download'>download page</ulink> and get a
tarball of the release.
You can also go to this site to download any supported BSP tarballs.
Unpacking the tarball gives you a hierarchical Source Directory that lets you develop
using the Yocto Project.
</para>
<para>
@@ -158,22 +157,22 @@
</para>
<para>
In summary, here is where you can get the project files needed for development:
<itemizedlist>
<listitem><para id='source-repositories'><emphasis><ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'>Source Repositories:</ulink></emphasis>
This area contains IDE Plugins, Matchbox, Poky, Poky Support, Tools, Yocto Linux Kernel, and Yocto
Metadata Layers.
You can create local copies of Git repositories for each of these areas.</para>
<para>
<imagedata fileref="figures/source-repos.png" align="center" width="6in" depth="4in" />
</para></listitem>
<listitem><para><anchor id='index-downloads' /><emphasis><ulink url='&YOCTO_DL_URL;/releases/'>Index of /releases:</ulink></emphasis>
This area contains index releases such as
the <trademark class='trade'>Eclipse</trademark>
Yocto Plug-in, miscellaneous support, poky, pseudo, installers for cross-development toolchains,
and all released versions of Yocto Project in the form of images or tarballs.
Downloading and extracting these files does not produce a local copy of the
Git repository but rather a snapshot of a particular release or image.</para>
<para>
<imagedata fileref="figures/index-downloads.png" align="center" width="6in" depth="4in" />
</para></listitem>
@@ -200,62 +199,21 @@
<listitem><para><emphasis>Append Files:</emphasis> Files that append build information to
a recipe file.
Append files are known as BitBake append files and <filename>.bbappend</filename> files.
The OpenEmbedded build system expects every append file to have a corresponding and
underlying recipe (<filename>.bb</filename>) file.
Furthermore, the append file and the underlying recipe must have the same root filename.
The filenames can differ only in the file type suffix used (e.g.
<filename>formfactor_0.0.bb</filename> and <filename>formfactor_0.0.bbappend</filename>).
</para>
<para>Information in append files overrides the information in the similarly-named recipe file.
For an example of an append file in use, see the
"<link linkend='using-bbappend-files'>Using .bbappend Files</link>" section
and the sketch of a minimal append file that follows this list.
</para></listitem>
<listitem><para id='bitbake-term'><emphasis>BitBake:</emphasis>
The task executor and scheduler used by
the OpenEmbedded build system to build images.
For more information on BitBake, see the BitBake documentation
in the <filename>bitbake/doc/manual</filename> directory of the
<link linkend='source-directory'>Source Directory</link>.</para></listitem>
<listitem>
<para id='build-directory'><emphasis>Build Directory:</emphasis>
This term refers to the area used by the OpenEmbedded build system for builds.
The area is created when you <filename>source</filename> the setup
environment script that is found in the Source Directory
(i.e. <filename>&OE_INIT_FILE;</filename>).
The <ulink url='&YOCTO_DOCS_REF_URL;#var-TOPDIR'><filename>TOPDIR</filename></ulink>
variable points to the Build Directory.</para>
<para>You have a lot of flexibility when creating the Build Directory.
Following are some examples that show how to create the directory:
<itemizedlist>
<listitem><para>Create the Build Directory in your current working directory
and name it <filename>build</filename>.
This is the default behavior.
<literallayout class='monospaced'>
$ source &OE_INIT_PATH;
</literallayout></para></listitem>
<listitem><para>Provide a directory path and specifically name the build
directory.
This next example creates a Build Directory named <filename>YP-&POKYVERSION;</filename>
in your home directory within the directory <filename>mybuilds</filename>.
If <filename>mybuilds</filename> does not exist, the directory is created for you:
<literallayout class='monospaced'>
$ source &OE_INIT_PATH; $HOME/mybuilds/YP-&POKYVERSION;
</literallayout></para></listitem>
<listitem><para>Provide an existing directory to use as the Build Directory.
This example uses the existing <filename>mybuilds</filename> directory
as the Build Directory.
<literallayout class='monospaced'>
$ source &OE_INIT_PATH; $HOME/mybuilds/
</literallayout></para></listitem>
</itemizedlist>
</para></listitem>
<listitem><para><emphasis>Build System:</emphasis> In the context of the Yocto Project
this term refers to the OpenEmbedded build system used by the project.
This build system is based on the project known as "Poky."
For some historical information about Poky, see the
<link linkend='poky'>Poky</link> term further along in this section.
</para></listitem>
<listitem><para><emphasis>Classes:</emphasis> Files that provide for logic encapsulation
and inheritance allowing commonly used patterns to be defined once and easily used
in multiple recipes.
@@ -264,14 +222,13 @@
<listitem><para><emphasis>Configuration File:</emphasis> Configuration information in various
<filename>.conf</filename> files provides global definitions of variables.
The <filename>conf/local.conf</filename> configuration file in the
<link linkend='build-directory'>Build Directory</link>
contains user-defined variables that affect each build.
The <filename>meta-yocto/conf/distro/poky.conf</filename> configuration file
defines Yocto distro configuration
variables used only when building with this policy.
Machine configuration files, which
are located throughout the
<link linkend='source-directory'>Source Directory</link>, define
variables for specific hardware and are only used when building for that target
(e.g. the <filename>machine/beagleboard.conf</filename> configuration file defines
variables for the Texas Instruments ARM Cortex-A8 development board).
@@ -282,19 +239,18 @@
tools and utilities that allow you to develop software for targeted architectures.
This toolchain contains cross-compilers, linkers, and debuggers that are specific to
an architecture.
You can use the OpenEmbedded build system to build a cross-development toolchain
installer that, when run, installs the toolchain containing the development tools you
need to cross-compile and test your software.
The Yocto Project also ships with images that contain toolchain installers for
supported architectures.
Sometimes this toolchain is referred to as the meta-toolchain.</para></listitem>
<listitem><para><emphasis>Image:</emphasis> An image is the result produced when
BitBake processes a given collection of recipes and related metadata.
Images are the binary output that runs on specific hardware or QEMU
and for specific use cases.
For a list of the supported image types that the Yocto Project provides, see the
"<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
chapter in the Yocto Project Reference Manual.</para></listitem>
<listitem><para id='layer'><emphasis>Layer:</emphasis> A collection of recipes representing the core,
a BSP, or an application stack.
For a discussion on BSP Layers, see the
@@ -303,90 +259,21 @@
<listitem><para id='metadata'><emphasis>Metadata:</emphasis> The files that BitBake parses when
building an image.
Metadata includes recipes, classes, and configuration files.</para></listitem>
<listitem><para id='oe-core'><emphasis>OE-Core:</emphasis> A core set of metadata originating
with OpenEmbedded (OE) that is shared between OE and the Yocto Project.
This metadata is found in the <filename>meta</filename> directory of the source
directory.</para></listitem>
<listitem><para><emphasis>Package:</emphasis> In the context of the Yocto Project,
this term refers to the packaged output from a baked recipe.
A package is generally the compiled binaries produced from the recipe's sources.
You bake something by running it through BitBake.</para>
<para>It is worth noting that the term "package" can, in general, have subtle
meanings. For example, the packages referred to in the
"<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Packages</ulink>" section are
compiled binaries that when installed add functionality to your Linux
distribution.</para>
<para>Another point worth noting is that historically within the Yocto Project,
recipes were referred to as packages - thus, the existence of several BitBake
variables that are seemingly mis-named,
(e.g. <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>,
<ulink url='&YOCTO_DOCS_REF_URL;#var-PRINC'><filename>PRINC</filename></ulink>,
<ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>, and
<ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>).
</para></listitem>
<listitem><para id='poky'><emphasis>Poky:</emphasis> The term "poky" can mean several things.
In its most general sense, it is an open-source project that was initially developed
by OpenedHand. With OpenedHand, poky was developed from the existing OpenEmbedded
build system, becoming a build system for embedded images.
After Intel Corporation acquired OpenedHand, the project poky became the basis for
the Yocto Project's build system.
Within the Yocto Project source repositories, poky exists as a separate Git repository
that can be cloned to yield a local copy on the host system.
Thus, "poky" can refer to the local copy of the Source Directory used to develop within
the Yocto Project.</para></listitem>
<listitem><para><emphasis>Recipe:</emphasis> A set of instructions for building packages.
A recipe describes where you get source code and which patches to apply.
Recipes describe dependencies for libraries or for other recipes, and they
also contain configuration and compilation options.
Recipes contain the logical unit of execution, the software/images to build, and
use the <filename>.bb</filename> file extension.</para></listitem>
<listitem>
<para id='source-directory'><emphasis>Source Directory:</emphasis>
This term refers to the directory structure created as a result of either downloading
and unpacking a Yocto Project release tarball or creating a local copy of
the <filename>poky</filename> Git repository
<filename>git://git.yoctoproject.org/poky</filename>.
Sometimes you might hear the term "poky directory" used to refer to this
directory structure.</para>
<para>The Source Directory contains BitBake, Documentation, metadata and
other files that all support the Yocto Project.
Consequently, you must have the Source Directory in place on your development
system in order to do any development using the Yocto Project.</para>
<para>For tarball expansion, the name of the top-level directory of the Source Directory
is derived from the Yocto Project release tarball.
For example, downloading and unpacking <filename>&YOCTO_POKY_TARBALL;</filename>
results in a Source Directory whose top-level folder is named
<filename>&YOCTO_POKY;</filename>.
If you create a local copy of the Git repository, then you can name the repository
anything you like.
Throughout much of the documentation, <filename>poky</filename> is used as the name of
the top-level folder of the local copy of the poky Git repository.
So, for example, cloning the <filename>poky</filename> Git repository results in a
local Git repository whose top-level folder is also named <filename>poky</filename>.</para>
<para>It is important to understand the differences between the Source Directory created
by unpacking a released tarball as compared to cloning
<filename>git://git.yoctoproject.org/poky</filename>.
When you unpack a tarball, you have an exact copy of the files based on the time of
release - a fixed release point.
Any changes you make to your local files in the Source Directory are on top of the release.
On the other hand, when you clone the <filename>poky</filename> Git repository, you have an
active development repository.
In this case, any local changes you make to the Source Directory can be later applied
to active development branches of the upstream <filename>poky</filename> Git
repository.</para>
<para>Finally, if you want to track a set of local changes while starting from the same point
as a release tarball, you can create a local Git branch that
reflects the exact copy of the files at the time of their release.
You do this by using Git tags that are part of the repository.</para>
<para>For more information on concepts around Git repositories, branches, and tags,
see the
"<link linkend='repositories-tags-and-branches'>Repositories, Tags, and Branches</link>"
section.</para></listitem>
<listitem><para><emphasis>Tasks:</emphasis> Arbitrary groups of software Recipes.
You simply use Tasks to hold recipes that, when built, usually accomplish a single task.
For example, a task could contain the recipes for a company's proprietary or value-add software.
@@ -399,6 +286,84 @@
by the maintainer of the source code.
For example, in order for a developer to work on a particular piece of code, they need to
first get a copy of it from an "upstream" source.</para></listitem>
<listitem>
<para id='yocto-project-files'><emphasis>Yocto Project Files:</emphasis>
This term refers to the directory structure created as a result of either downloading
and unpacking a Yocto Project release tarball or setting up a Git repository
by cloning <filename>git://git.yoctoproject.org/poky</filename>.
Sometimes the term "the Yocto Project Files structure" is used as well.</para>
<para>The Yocto Project Files contain BitBake, Documentation, metadata and
other files that all support the development environment.
Consequently, you must have the Yocto Project Files in place on your development
system in order to do any development using the Yocto Project.</para>
<para>The name of the top-level directory of the Yocto Project Files structure
is derived from the Yocto Project release tarball.
For example, downloading and unpacking <filename>&YOCTO_POKY_TARBALL;</filename>
results in a Yocto Project file structure whose Yocto Project source directory is named
<filename>&YOCTO_POKY;</filename>.
If you create a Git repository, then you can name the repository anything you like.
Throughout much of the documentation, the name of the Git repository is used as the
name for the local folder.
So, for example, cloning the <filename>poky</filename> Git repository results in a
local Git repository also named <filename>poky</filename>.</para>
<para>It is important to understand the differences between Yocto Project Files created
by unpacking a release tarball as compared to cloning
<filename>git://git.yoctoproject.org/poky</filename>.
When you unpack a tarball, you have an exact copy of the files based on the time of
release - a fixed release point.
Any changes you make to your local Yocto Project Files are on top of the release.
On the other hand, when you clone the Yocto Project Git repository, you have an
active development repository.
In this case, any local changes you make to the Yocto Project can be later applied
to active development branches of the upstream Yocto Project Git repository.</para>
<para>Finally, if you want to track a set of local changes while starting from the same point
as a release tarball, you can create a local Git branch that
reflects the exact copy of the files at the time of their release.
You do this using Git tags that are part of the repository.</para>
<para>For more information on concepts around Git repositories, branches, and tags,
see the
"<link linkend='repositories-tags-and-branches'>Repositories, Tags, and Branches</link>"
section.</para></listitem>
<listitem>
<para id='yocto-project-build-directory'><emphasis>Yocto Project Build Directory:</emphasis>
This term refers to the area used by the Yocto Project for builds.
The area is created when you <filename>source</filename> the Yocto Project setup
environment script that is found in the Yocto Project files area
(i.e. <filename>oe-init-build-env</filename>).
The <ulink url='&YOCTO_DOCS_REF_URL;#var-TOPDIR'><filename>TOPDIR</filename></ulink>
variable points to the build directory.</para>
<para>You have a lot of flexibility when creating the Yocto Project Build Directory.
Following are some examples that show how to create the directory:
<itemizedlist>
<listitem><para>Create the build directory in your current working directory
and name it <filename>build</filename>.
This is the default behavior.
<literallayout class='monospaced'>
$ cd ~/poky
$ source oe-init-build-env
</literallayout></para></listitem>
<listitem><para>Provide a directory path and specifically name the build
directory.
This next example creates a build directory named <filename>YP-&POKYVERSION;</filename>
in your home directory within the directory <filename>mybuilds</filename>.
If <filename>mybuilds</filename> does not exist, the directory is created for you:
<literallayout class='monospaced'>
$ source &OE_INIT_PATH; $HOME/mybuilds/YP-&POKYVERSION;
</literallayout></para></listitem>
<listitem><para>Provide an existing directory to use as the build directory.
This example uses the existing <filename>mybuilds</filename> directory
as the build directory.
<literallayout class='monospaced'>
$ source &OE_INIT_PATH; $HOME/mybuilds/
</literallayout></para></listitem>
</itemizedlist>
</para></listitem>
</itemizedlist>
</para>
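<para>
As a purely hypothetical sketch of the "Append Files" term above, a recipe named
<filename>example_1.0.bb</filename> could be extended by an append file named
<filename>example_1.0.bbappend</filename> that adds a local patch:
<literallayout class='monospaced'>
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://fix-build.patch"
</literallayout>
The recipe name, append file name, and patch name are all invented for illustration;
only the matching root filenames and the <filename>.bbappend</filename> suffix matter.
</para>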
</section>
@@ -432,13 +397,13 @@
</para>
<para>
When you build an image using the Yocto Project, the build process uses a
known list of licenses to ensure compliance.
You can find this list in the Yocto Project files directory at
<filename>meta/files/common-licenses</filename>.
Once the build completes, the list of all licenses found and used during that build are
kept in the
<link linkend='build-directory'>Build Directory</link> at
<filename>tmp/deploy/images/licenses</filename>.
</para>
@@ -466,12 +431,6 @@
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/files/common-licenses'>here</ulink>.
This wiki page discusses the license infrastructure used by the Yocto Project.
</para>
<para>
For information that can help you to maintain compliance with various open source licensing
during the lifecycle of a product created using the Yocto Project, see the
"<link linkend='maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</link>" section.
</para>
</section>
<section id='git'>
@@ -535,9 +494,10 @@
It is important to understand that Git tracks content change and not files.
Git uses "branches" to organize different development efforts.
For example, the <filename>poky</filename> repository has
<filename>bernard</filename>,
<filename>edison</filename>, <filename>denzil</filename>, <filename>danny</filename>
and <filename>master</filename> branches among others.
You can see all the branches by going to
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/'></ulink> and
clicking on the
@@ -570,9 +530,9 @@
$ git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;
</literallayout>
In this example, the name of the top-level directory of your local Yocto Project
Files Git repository is <filename>poky</filename>,
and the name of the local working area (or local branch) you have created and checked
out is <filename>&DISTRO_NAME;</filename>.
The files in your repository now reflect the same files that are in the
<filename>&DISTRO_NAME;</filename> development branch of the Yocto Project's
<filename>poky</filename> repository.
@@ -582,8 +542,9 @@
at the time you created your local branch, which could be
different than the files at the time of a similarly named release.
In other words, creating and checking out a local branch based on the
<filename>&DISTRO_NAME;</filename> branch name is not the same as
cloning and checking out the <filename>master</filename> branch.
Keep reading to see how you create a local snapshot of a Yocto Project Release.
</para>
@@ -599,7 +560,7 @@
</para>
<para>
Some key tags are <filename>bernard-5.0</filename>, <filename>denzil-7.0</filename>,
and <filename>&DISTRO_NAME;-&POKYVERSION;</filename>.
These tags represent Yocto Project releases.
</para>
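<para>
For example, a local branch that matches one of these tagged releases exactly could be
created with standard Git commands such as the following (a sketch; substitute the tag
you actually want):
<literallayout class='monospaced'>
$ cd ~/poky
$ git checkout &DISTRO_NAME;-&POKYVERSION; -b my-&DISTRO_NAME;-&POKYVERSION;
</literallayout>
</para>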
@@ -681,18 +642,17 @@
a working branch on your local machine where you can isolate work.
It is a good idea to use local branches when adding specific features or changes.
This way, if you don't like what you have done, you can easily get rid of the work.</para></listitem>
<listitem><para><emphasis><filename>git branch</filename>:</emphasis> Reports
existing local branches and
tells you the branch in which you are currently working.</para></listitem>
<listitem><para><emphasis><filename>git branch -D &lt;branch-name&gt;</filename>:</emphasis>
Deletes an existing local branch.
You need to be in a local branch other than the one you are deleting
in order to delete <filename>&lt;branch-name&gt;</filename>.</para></listitem>
<listitem><para><emphasis><filename>git pull</filename>:</emphasis> Retrieves information
from an upstream Git
repository and places it in your local Git repository.
You use this command to make sure you are synchronized with the repository
from which you are basing changes (e.g. the master branch).</para></listitem>
<listitem><para><emphasis><filename>git push</filename>:</emphasis> Sends all your local changes you
have committed to an upstream Git repository (e.g. a contribution repository).
The maintainer of the project draws from these repositories when adding your changes to the
@@ -774,8 +734,6 @@
A somewhat formal method exists by which developers commit changes and push them into the
"contrib" area and subsequently request that the maintainer include them into "master"
This process is called “submitting a patch” or “submitting a change.”
For information on submitting patches and changes, see the
"<link linkend='how-to-submit-a-change'>How to Submit a Change</link>" section.
</para>
<para>
@@ -838,18 +796,13 @@
<filename>send-pull-request</filename> that ship with the release to facilitate this
workflow.
You can find these scripts in the local Yocto Project files Git repository in
the <filename>scripts</filename> directory.</para>
<para>You can find more information on these scripts in the
"<link linkend='pushing-a-change-upstream'>Using
Scripts to Push a Change Upstream and Request a Pull</link>" section.
</para></listitem>
<listitem><para><emphasis>Patch Workflow:</emphasis> This workflow allows you to notify the
maintainer through an email that you have a change (or patch) you would like considered
for the "master" branch of the Git repository.
To send this type of change you format the patch and then send the email using the Git commands
<filename>git format-patch</filename> and <filename>git send-email</filename>.
You can find information on how to submit changes
later in this chapter; a brief sketch of these two commands follows this list.</para></listitem>
</itemizedlist>
</para>
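<para>
As a brief, hypothetical sketch of the patch workflow described above, the last commit
could be formatted and mailed with commands similar to the following (the mailing list
address is only an example; use the list appropriate for your change):
<literallayout class='monospaced'>
$ git format-patch -1
$ git send-email --to=openembedded-core@lists.openembedded.org 0001-*.patch
</literallayout>
</para>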
</section>
@@ -881,9 +834,8 @@
a bug.</para></listitem>
<listitem><para>When submitting a new bug, be sure to choose the appropriate
Classification, Product, and Component for which the issue was found.
Defects for Yocto Project fall into one of six classifications: Yocto Project
Components, Infrastructure, Build System &amp; Metadata, Documentation,
QA/Testing, and Runtime.
Each of these Classifications break down into multiple Products and, in some
cases, multiple Components.</para></listitem>
<listitem><para>Use the bug form to choose the correct Hardware and Architecture
@@ -903,54 +855,54 @@
<listitem><para>Submit the bug by clicking the "Submit Bug" button.</para></listitem>
</orderedlist>
</para>
<note>
Bugs in the Yocto Project Bugzilla follow a naming convention:
<filename>[YOCTO #&lt;number&gt;]</filename>, where <filename>&lt;number&gt;</filename> is the
assigned defect ID used in Bugzilla.
So, for example, a valid way to refer to a defect would be <filename>[YOCTO #1011]</filename>.
This convention becomes important if you are submitting patches against the Yocto Project
code itself.
</note>
</section>
<section id='how-to-submit-a-change'>
<title>How to Submit a Change</title>
<para>
Contributions to the Yocto Project and OpenEmbedded are very welcome.
Because the system is extremely configurable and flexible, we recognize that developers
Contributions to the Yocto Project are very welcome.
Because the Yocto Project is extremely configurable and flexible, we recognize that developers
will want to extend, configure or optimize it for their specific uses.
You should send patches to the appropriate mailing list so that they
can be reviewed and merged by the appropriate maintainer.
For a list of the Yocto Project and related mailing lists, see the
You should send patches to the appropriate Yocto Project mailing list to get them
in front of the Yocto Project Maintainer.
For a list of the Yocto Project mailing lists, see the
"<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing lists</ulink>" section in
the Yocto Project Reference Manual.
The Yocto Project Reference Manual.
</para>
<para>
The following is some guidance on which mailing list to use for what type of change:
The following is some guidance on which mailing list to use for what type of defect:
<itemizedlist>
<listitem><para>For changes to the core metadata, send your patch to the
<ulink url='&OE_LISTS_URL;/listinfo/openembedded-core'>openembedded-core</ulink> mailing list.
For example, a change to anything under the <filename>meta</filename> or
<filename>scripts</filename> directories
should be sent to this mailing list.</para></listitem>
<listitem><para>For changes to BitBake (anything under the <filename>bitbake</filename>
directory), send your patch to the
<ulink url='&OE_LISTS_URL;/listinfo/bitbake-devel'>bitbake-devel</ulink> mailing list.</para></listitem>
<listitem><para>For changes to <filename>meta-yocto</filename>, send your patch to the
<ulink url='&YOCTO_LISTS_URL;/listinfo/poky'>poky</ulink> mailing list.</para></listitem>
<listitem><para>For changes to other layers hosted on
<filename>yoctoproject.org</filename> (unless the
layer's documentation specifies otherwise), tools, and Yocto Project
documentation, use the
<ulink url='&YOCTO_LISTS_URL;/listinfo/yocto'>yocto</ulink> mailing list.</para></listitem>
<listitem><para>For additional recipes that do not fit into the core metadata,
you should determine which layer the recipe should go into and submit the
change in the manner recommended by the documentation (e.g. README) supplied
with the layer. If in doubt, please ask on the
<ulink url='&YOCTO_LISTS_URL;/listinfo/yocto'>yocto</ulink> or
<ulink url='&OE_LISTS_URL;/listinfo/openembedded-devel'>openembedded-devel</ulink>
mailing lists.</para></listitem>
<listitem><para>For defects against the Yocto Project build system Poky, send
your patch to the
<ulink url='&YOCTO_LISTS_URL;/listinfo/poky'></ulink> mailing list.
This mailing list corresponds to issues that are not specific to the Yocto Project but
are part of the OE-core.
For example, a defect against anything in the <filename>meta</filename> layer
or the BitBake Manual could be sent to this mailing list.</para></listitem>
<listitem><para>For defects against Yocto-specific layers, tools, and Yocto Project
documentation use the
<ulink url='&YOCTO_LISTS_URL;/listinfo/yocto'></ulink> mailing list.
This mailing list corresponds to Yocto-specific areas such as
<filename>meta-yocto</filename>, <filename>meta-intel</filename>,
<filename>linux-yocto</filename>, and <filename>documentation</filename>.</para></listitem>
</itemizedlist>
</para>
<para>
When you send a patch, be sure to include a "Signed-off-by:"
line in the same style as required by the Linux kernel.
Adding this line signifies that you, the submitter, have agreed to the Developer's Certificate of Origin 1.1
Adding this line signifies that the developer has agreed to the Developer's Certificate of Origin 1.1
as follows:
<literallayout class='monospaced'>
Developer's Certificate of Origin 1.1
@@ -979,53 +931,52 @@
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
</literallayout>
A Poky contributions tree (<filename>poky-contrib</filename>,
<filename>git://git.yoctoproject.org/poky-contrib.git</filename>)
exists for contributors to stage contributions.
If people desire such access, please ask on the mailing list.
Usually, the Yocto Project team will grant access to anyone with a proven track
record of good patches.
</para>
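<para>
As a minimal sketch, and assuming the name and email address shown are illustrative only,
you can add the "Signed-off-by:" line automatically when you commit:
<literallayout class='monospaced'>
     $ git commit --signoff
</literallayout>
The resulting commit message then ends with a trailer of the form
<filename>Signed-off-by: Your Name &lt;you@example.com&gt;</filename>, taken from your
Git <filename>user.name</filename> and <filename>user.email</filename> settings.
</para>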
<para>
In a collaborative environment, it is necessary to have some sort of standard
or method through which you submit changes.
Otherwise, things could get quite chaotic.
One general practice to follow is to make small, controlled changes.
Keeping changes small and isolated aids review, makes merging/rebasing easier
and keeps the change history clean when anyone needs to refer to it in the future.
One general practice to follow is to make small, controlled changes to the
Yocto Project.
Keeping changes small and isolated lets you best keep pace with future Yocto Project changes.
</para>
<para>
When you make a commit, you must follow certain standards established by the
OpenEmbedded and Yocto Project development teams.
For each commit, you must provide a single-line summary of the change and you
should almost always provide a more detailed description of what you did (i.e.
the body of the commit message).
When you create a commit, you must follow certain standards established by the
Yocto Project development team.
For each commit, you must provide a single-line summary of the change and you
almost always provide a more detailed description of what you did (i.e. the body
of the commit).
The only exceptions for not providing a detailed description would be if your
change is a simple, self-explanatory change that needs no further description
beyond the summary.
Here are the guidelines for composing a commit message (a complete example follows the list):
change is a simple, self-explanatory change that needs no description.
Here are the Yocto Project commit message guidelines:
<itemizedlist>
<listitem><para>Provide a single-line, short summary of the change.
This summary is typically viewable in the "shortlist" of changes.
This summary is typically viewable by source control systems.
Thus, providing something short and descriptive that gives the reader
a summary of the change is useful when viewing a list of many commits.
This should be prefixed by the recipe name (if changing a recipe), or
else the short form path to the file being changed.
</para></listitem>
<listitem><para>For the body of the commit message, provide detailed information
that describes what you changed, why you made the change, and the approach
you used. It may also be helpful if you mention how you tested the change.
you used.
Provide as much detail as you can in the body of the commit message.
</para></listitem>
<listitem><para>If the change addresses a specific bug or issue that is
associated with a bug-tracking ID, include a reference to that ID in
your detailed description.
For example, the Yocto Project uses a specific convention for bug
references - any commit that addresses a specific bug should include the
bug ID in the description (typically at the beginning) as follows:
associated with a bug-tracking ID, prefix your detailed description
with the bug or issue ID.
For example, the Yocto Project tracks bugs using a bug-naming convention.
Any commits that address a bug must start with the bug ID in the description
as follows:
<literallayout class='monospaced'>
[YOCTO #&lt;bug-id&gt;]
&lt;detailed description of change&gt;
YOCTO #&lt;bug-id&gt;: &lt;Detailed description of commit&gt;
</literallayout></para></listitem>
Where &lt;bug-id&gt; is replaced with the specific bug ID from the
Yocto Project Bugzilla instance.
</itemizedlist>
</para>
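<para>
The following is a sketch of a commit message that follows these guidelines.
The recipe name, bug ID, and details are illustrative only:
<literallayout class='monospaced'>
     foo: fix build failure with newer automake

     [YOCTO #1234]

     Describe what you changed, why you made the change, and the
     approach you used, and mention how you tested the change.

     Signed-off-by: Your Name &lt;you@example.com&gt;
</literallayout>
</para>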
@@ -1036,22 +987,21 @@
</para>
<para>
Following are general instructions for both pushing changes upstream and for submitting
changes as patches.
Following are general instructions for both pushing changes upstream and for submitting changes as patches.
</para>
<section id='pushing-a-change-upstream'>
<title>Using Scripts to Push a Change Upstream and Request a Pull</title>
<title>Pushing a Change Upstream and Requesting a Pull</title>
<para>
The basic flow for pushing a change to an upstream "contrib" Git repository is as follows:
<itemizedlist>
<listitem><para>Make your changes in your local Git repository.</para></listitem>
<listitem><para>Stage your changes by using the <filename>git add</filename>
command on each file you changed.</para></listitem>
<listitem><para>Stage your commit (or change) by using the <filename>git add</filename>
command.</para></listitem>
<listitem><para>Commit the change by using the <filename>git commit</filename>
command and push it to the "contrib" repository.
Be sure to provide a commit message that follows the project's commit message standards
Be sure to provide a commit message that follows the project's commit standards
as described earlier (a minimal command sequence appears after this list).</para></listitem>
<listitem><para>Notify the maintainer that you have pushed a change by making a pull
request.
@@ -1059,18 +1009,13 @@
pull requests to the Yocto Project.
These scripts are <filename>create-pull-request</filename> and
<filename>send-pull-request</filename>.
You can find these scripts in the <filename>scripts</filename> directory
within the <link linkend='source-directory'>Source Directory</link>.</para>
<para>Using these scripts correctly formats the requests without introducing any
whitespace or HTML formatting.
The maintainer that receives your patches needs to be able to save and apply them
directly from your emails.
Using these scripts is the preferred method for sending patches.</para>
You can find these scripts in the <filename>scripts</filename> directory of the
Yocto Project file structure.</para>
<para>For help on using these scripts, simply provide the
<filename>-h</filename> argument as follows:
<filename>--help</filename> argument as follows:
<literallayout class='monospaced'>
$ ~/poky/scripts/create-pull-request -h
$ ~/poky/scripts/send-pull-request -h
$ ~/poky/scripts/create-pull-request --help
$ ~/poky/scripts/send-pull-request --help
</literallayout></para></listitem>
</itemizedlist>
</para>
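<para>
As a minimal sketch of the first three steps, and assuming the remote name, branch name,
and file name shown are placeholders, the commands might look like the following:
<literallayout class='monospaced'>
     $ git add &lt;changed-file&gt;
     $ git commit --signoff
     $ git push &lt;contrib-remote&gt; HEAD:&lt;your-name&gt;/&lt;topic-branch&gt;
</literallayout>
Use whatever remote and branch naming convention the maintainer or the "contrib"
repository documentation specifies.
</para>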
@@ -1082,32 +1027,16 @@
</section>
<section id='submitting-a-patch'>
<title>Using Email to Submit a Patch</title>
<title>Submitting a Patch Through Email</title>
<para>
You can submit patches without using the <filename>create-pull-request</filename> and
<filename>send-pull-request</filename> scripts described in the previous section.
Keep in mind, the preferred method is to use the scripts, however.
</para>
<para>
Depending on the components changed, you need to submit the email to a specific
mailing list.
For some guidance on which mailing list to use, see the list in the
"<link linkend='how-to-submit-a-change'>How to Submit a Change</link>" section
earlier in this manual.
For a description of the available mailing lists, see
"<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
section in the Yocto Project Reference Manual.
</para>
<para>
Here is the general procedure on how to submit a patch through email without using the
scripts:
If you have just a few changes, you can commit them and then submit them as an
email to the maintainer.
Here is a general procedure:
<itemizedlist>
<listitem><para>Make your changes in your local Git repository.</para></listitem>
<listitem><para>Stage your changes by using the <filename>git add</filename>
command on each file you changed.</para></listitem>
<listitem><para>Stage your commit (or change) by using the <filename>git add</filename>
command.</para></listitem>
<listitem><para>Commit the change by using the
<filename>git commit --signoff</filename> command.
Using the <filename>--signoff</filename> option identifies you as the person
@@ -1140,10 +1069,7 @@
the series of patches.
For information on the <filename>git format-patch</filename> command,
see <filename>GIT_FORMAT_PATCH(1)</filename> displayed using the
<filename>man git-format-patch</filename> command.</para>
<note>If you are or will be a frequent contributor to the Yocto Project
or to OpenEmbedded, you might consider requesting a contrib area and the
necessary associated rights.</note></listitem>
<filename>man git-format-patch</filename> command.</para></listitem>
<listitem><para>Import the files into your mail client by using the
<filename>git send-email</filename> command.
<note>In order to use <filename>git send-email</filename>, you must have the


@@ -9,8 +9,9 @@
<para>
This chapter introduces the Yocto Project and gives you an idea of what you need to get started.
You can find enough information to set up your development host and build or use images for
hardware supported by the Yocto Project by reading the
<ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
hardware supported by the Yocto Project by reading
<ulink url='&YOCTO_DOCS_QS_URL;'>
The Yocto Project Quick Start</ulink>.
</para>
<para>
@@ -23,16 +24,15 @@
<para>
The Yocto Project is an open-source collaboration project focused on embedded Linux development.
The project currently provides a build system, which is
referred to as the OpenEmbedded build system in the Yocto Project documentation.
The Yocto Project provides various ancillary tools suitable for the embedded developer
and also features the Sato reference User Interface, which is optimized for
The project currently provides a build system, which is sometimes referred to as "Poky",
and provides various ancillary tools suitable for the embedded developer.
The Yocto Project also features the Sato reference User Interface, which is optimized for
stylus driven, low-resolution screens.
</para>
<para>
You can use the OpenEmbedded build system, which uses
BitBake to develop complete Linux
You can use the Yocto Project build system, which uses
<ulink url='http://bitbake.berlios.de/manual/'>BitBake</ulink>, to develop complete Linux
images and associated user-space applications for architectures based on ARM, MIPS, PowerPC,
x86 and x86-64.
While the Yocto Project does not provide a strict testing framework,
@@ -53,55 +53,56 @@
<listitem><para><emphasis>Host System:</emphasis> You should have a reasonably current
Linux-based host system.
You will have the best results with a recent release of Fedora,
OpenSUSE, Ubuntu, or CentOS as these releases are frequently tested against the Yocto Project
and officially supported.
For a list of the distributions under validation and their status, see the
"<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>" section
in the Yocto Project Reference Manual and the wiki page at
<ulink url='&YOCTO_WIKI_URL;/wiki/Distribution_Support'>Distribution Support</ulink>.</para>
<para>
OpenSUSE, or Ubuntu as these releases are frequently tested against the Yocto Project
and officially supported.
You should also have about 100 gigabytes of free disk space for building images.
</para></listitem>
<listitem><para><emphasis>Packages:</emphasis> The OpenEmbedded build system
requires certain packages exist on your development system (e.g. Python 2.6 or 2.7).
<listitem><para><emphasis>Packages:</emphasis> The Yocto Project requires certain packages
exist on your development system (e.g. Python 2.6 or 2.7).
See "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Packages</ulink>"
section in the Yocto Project Quick Start for the exact package
section in the Yocto Project Quick Start for the exact package
requirements and the installation commands to install them
for the supported distributions.</para></listitem>
<listitem id='local-yp-release'><para><emphasis>Yocto Project Release:</emphasis>
You need a release of the Yocto Project.
You set up a local <link linkend='source-directory'>Source Directory</link>
in one of two ways, depending on whether you
are going to contribute back into the Yocto Project or not.
You can get set up with local
<link linkend='yocto-project-files'>Yocto Project Files</link> one of two ways
depending on whether you
are going to be contributing back into the Yocto Project source repository or not.
<note>
Regardless of the method you use, this manual refers to the resulting local
hierarchical set of files as the "Source Directory."
Regardless of the method you use, this manual refers to the resulting
hierarchical set of files as the "Yocto Project Files" or the "Yocto Project File
Structure."
</note>
<itemizedlist>
<listitem><para><emphasis>Tarball Extraction:</emphasis> If you are not going to contribute
back into the Yocto Project, you can simply download a Yocto Project release you want
back into the Yocto Project, you can simply download the Yocto Project release you want
from the website's <ulink url='&YOCTO_HOME_URL;/download'>download page</ulink>.
Once you have the tarball, just extract it into a directory of your choice.</para>
<para>For example, the following command extracts the Yocto Project &DISTRO;
release tarball
into the current working directory and sets up the local Source Directory
with a top-level folder named <filename>&YOCTO_POKY;</filename>:
into the current working directory and sets up the Yocto Project file structure
with a top-level directory named <filename>&YOCTO_POKY;</filename>:
<literallayout class='monospaced'>
$ tar xfj &YOCTO_POKY_TARBALL;
</literallayout></para>
<para>This method does not produce a local Git repository.
Instead, you simply end up with a snapshot of the release.</para></listitem>
<para>This method does not produce a Git repository.
Instead, you simply end up with a local snapshot of the
Yocto Project files that are based on the particular release in the
tarball.</para></listitem>
<listitem><para><emphasis>Git Repository Method:</emphasis> If you are going to be contributing
back into the Yocto Project or you simply want to keep up
with the latest developments, you should use Git commands to set up a local
Git repository of the upstream <filename>poky</filename> source repository.
Doing so creates a repository with a complete history of changes and allows
Git repository of the Yocto Project Files.
Doing so creates a Git repository with a complete history of changes and allows
you to easily submit your changes upstream to the project.
Because you cloned the repository, you have access to all the Yocto Project development
branches and tag names used in the upstream repository.</para>
<para>The following transcript shows how to clone the <filename>poky</filename>
<para>The following transcript shows how to clone the Yocto Project Files'
Git repository into the current working directory.
<note>You can view the Yocto Project Source Repositories at
<note>The name of the Yocto Project Files Git repository in the Yocto Project Files
Source Repositories is <filename>poky</filename>.
You can view the Yocto Project Source Repositories at
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink></note>
The command creates the local repository in a directory named <filename>poky</filename>.
For information on Git used within the Yocto Project, see the
@@ -120,30 +121,30 @@
wiki page</ulink>, which describes how to create both <filename>poky</filename>
and <filename>meta-intel</filename> Git repositories.</para></listitem>
</itemizedlist></para></listitem>
<listitem id='local-kernel-files'><para><emphasis>Yocto Project Kernel:</emphasis>
If you are going to be making modifications to a supported Yocto Project kernel, you
<listitem id='local-kernel-files'><para><emphasis>Linux Yocto Kernel:</emphasis>
If you are going to be making modifications to a supported Linux Yocto kernel, you
need to establish local copies of the source.
You can find Git repositories of supported Yocto Project Kernels organized under
You can find Git repositories of supported Linux Yocto Kernels organized under
"Yocto Linux Kernel" in the Yocto Project Source Repositories at
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.</para>
<para>This setup can involve creating a bare clone of the Yocto Project kernel and then
<para>This setup involves creating a bare clone of the Linux Yocto kernel and then
copying that cloned repository.
You can create the bare clone and the copy of the bare clone anywhere you like.
For simplicity, it is recommended that you create these structures outside of the
Source Directory (usually <filename>poky</filename>).</para>
Yocto Project Files Git repository.</para>
<para>As an example, the following transcript shows how to create the bare clone
of the <filename>linux-yocto-3.4</filename> kernel and then create a copy of
of the <filename>linux-yocto-3.2</filename> kernel and then create a copy of
that clone.
<note>When you have a local Yocto Project kernel Git repository, you can
<note>When you have a local Linux Yocto kernel Git repository, you can
reference that repository rather than the upstream Git repository as
part of the <filename>clone</filename> command.
Doing so can speed up the process.</note></para>
<para>In the following example, the bare clone is named
<filename>linux-yocto-3.4.git</filename>, while the
copy is named <filename>my-linux-yocto-3.4-work</filename>:
<filename>linux-yocto-3.2.git</filename>, while the
copy is named <filename>my-linux-yocto-3.2-work</filename>:
<literallayout class='monospaced'>
$ git clone --bare git://git.yoctoproject.org/linux-yocto-3.4 linux-yocto-3.4.git
Initialized empty Git repository in /home/scottrif/linux-yocto-3.4.git/
$ git clone --bare git://git.yoctoproject.org/linux-yocto-3.2 linux-yocto-3.2.git
Initialized empty Git repository in /home/scottrif/linux-yocto-3.2.git/
remote: Counting objects: 2468027, done.
remote: Compressing objects: 100% (392255/392255), done.
remote: Total 2468027 (delta 2071693), reused 2448773 (delta 2052498)
@@ -152,9 +153,9 @@
</literallayout></para>
<para>Now create a clone of the bare clone just created:
<literallayout class='monospaced'>
$ git clone linux-yocto-3.4.git my-linux-yocto-3.4-work
Cloning into 'my-linux-yocto-3.4-work'...
done.
$ git clone linux-yocto-3.2.git my-linux-yocto-3.2-work
Initialized empty Git repository in /home/scottrif/my-linux-yocto-3.2-work/.git/
Checking out files: 100% (37619/37619), done.
</literallayout></para></listitem>
<listitem id='poky-extras-repo'><para><emphasis>
The <filename>poky-extras</filename> Git Repository</emphasis>:
@@ -165,16 +166,16 @@
edit to point to your locally modified kernel source files and to build the kernel
image.
Pointing to these local files is much more efficient than requiring a download of the
kernel's source files from upstream each time you make changes to the kernel.</para>
source files from upstream each time you make changes to the kernel.</para>
<para>You can find the <filename>poky-extras</filename> Git Repository in the
"Yocto Metadata Layers" area of the Yocto Project Source Repositories at
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.
It is good practice to create this Git repository inside the Source Directory.</para>
It is good practice to create this Git repository inside the Yocto Project
files Git repository.</para>
<para>Following is an example that creates the <filename>poky-extras</filename> Git
repository inside the Source Directory, which is named <filename>poky</filename>
in this case:
repository inside the Yocto Project files Git repository, which is named
<filename>poky</filename> in this case:
<literallayout class='monospaced'>
$ cd ~/poky
$ git clone git://git.yoctoproject.org/poky-extras poky-extras
Initialized empty Git repository in /home/scottrif/poky/poky-extras/.git/
remote: Counting objects: 618, done.
@@ -193,13 +194,13 @@
layer.
You can get set up for BSP development one of two ways: tarball extraction or
with a local Git repository.
It is a good idea to use the same method that you used to set up the Source Directory.
It is a good idea to use the same method used to set up the Yocto Project Files.
Regardless of the method you use, the Yocto Project uses the following BSP layer
naming scheme:
<literallayout class='monospaced'>
meta-&lt;BSP_name&gt;
</literallayout>
where <filename>&lt;BSP_name&gt;</filename> is the recognized BSP name.
where &lt;BSP_name&gt; is the recognized BSP name.
Here are some examples:
<literallayout class='monospaced'>
meta-crownbay
@@ -219,18 +220,17 @@
Again, this method just produces a snapshot of the BSP layer in the form
of a hierarchical directory structure.</para></listitem>
<listitem><para><emphasis>Git Repository Method:</emphasis> If you are working
with a local Git repository for your Source Directory, you should also use this method
with a Yocto Project Files Git repository, you should also use this method
to set up the <filename>meta-intel</filename> Git repository.
You can locate the <filename>meta-intel</filename> Git repository in the
"Yocto Metadata Layers" area of the Yocto Project Source Repositories at
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.</para>
<para>Typically, you set up the <filename>meta-intel</filename> Git repository inside
the Source Directory.
the Yocto Project Files Git repository.
For example, the following transcript shows the steps to clone the
<filename>meta-intel</filename>
Git repository inside the local <filename>poky</filename> Git repository.
Git repository inside the <filename>poky</filename> Git repository.
<literallayout class='monospaced'>
$ cd ~/poky
$ git clone git://git.yoctoproject.org/meta-intel.git
Initialized empty Git repository in /home/scottrif/poky/meta-intel/.git/
remote: Counting objects: 3380, done.
@@ -248,8 +248,9 @@
applications using the Eclipse Integrated Development Environment (IDE),
you will need this plug-in.
See the
"<link linkend='setting-up-the-eclipse-ide'>Setting up the Eclipse IDE</link>"
section for more information.</para></listitem>
"<ulink url='&YOCTO_DOCS_ADT_URL;#setting-up-the-eclipse-ide'>Setting up the Eclipse IDE</ulink>"
section in the Yocto Application Development Toolkit (ADT)
User's Guide for more information.</para></listitem>
</itemizedlist>
</para>
</section>
@@ -267,13 +268,13 @@
<para>
The build process is as follows:
<orderedlist>
<listitem><para>Make sure you have set up the Source Directory described in the
<listitem><para>Make sure you have the Yocto Project files as described in the
previous section.</para></listitem>
<listitem><para>Initialize the build environment by sourcing a build environment
script.</para></listitem>
<listitem><para>Optionally ensure the <filename>conf/local.conf</filename> configuration file,
<listitem><para>Optionally ensure the <filename>/conf/local.conf</filename> configuration file,
which is found in the
<link linkend='build-directory'>Build Directory</link>,
<link linkend='yocto-project-build-directory'>Yocto Project Build Directory</link>,
is set up how you want it.
This file defines many aspects of the build environment including
the target machine architecture through the
@@ -284,9 +285,8 @@
a centralized tarball download directory through the
<filename><ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'>DL_DIR</ulink></filename> variable.</para></listitem>
<listitem><para>Build the image using the <filename>bitbake</filename> command.
If you want information on BitBake, see the user manual included in the
<filename>bitbake/doc/manual</filename> directory of the
<link linkend='source-directory'>Source Directory</link>
(a minimal command sequence appears after this list).</para></listitem>
If you want information on BitBake, see the user manual at
<ulink url='&OE_DOCS_URL;/bitbake/html'></ulink>.</para></listitem>
<listitem><para>Run the image either on the actual hardware or using the QEMU
emulator.</para></listitem>
</orderedlist>
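As a minimal sketch of this flow, and assuming the Source Directory is
<filename>~/poky</filename> and that you want the <filename>core-image-minimal</filename>
image for the <filename>qemux86</filename> machine, the commands might look like the following:
<literallayout class='monospaced'>
     $ cd ~/poky
     $ source oe-init-build-env
     $ bitbake core-image-minimal
     $ runqemu qemux86
</literallayout>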
@@ -297,93 +297,20 @@
<title>Using Pre-Built Binaries and QEMU</title>
<para>
Another option you have to get started is to use pre-built binaries.
The Yocto Project provides many types of binaries with each release.
See the "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
chapter in the Yocto Project Reference Manual
for descriptions of the types of binaries that ship with a Yocto Project
release.
Another option you have to get started is to use pre-built binaries.
This scenario is ideal for developing software applications to run on your target hardware.
To do this, you need to install the stand-alone Yocto Project cross-toolchain tarball and
then download the pre-built kernel that you will boot in the QEMU emulator.
Next, you must download and extract the target root filesystem for your target
machine's architecture.
Finally, you set up the environment to emulate the hardware and then start the QEMU emulator.
</para>
<para>
Using a pre-built binary is ideal for developing software applications to run on your
target hardware.
To do this, you need to be able to access the appropriate cross-toolchain tarball for
the architecture on which you are developing.
If you are using an SDK type image, the image ships with the complete toolchain native to
the architecture.
If you are not using an SDK type image, you need to separately download and
install the stand-alone Yocto Project cross-toolchain tarball.
</para>
<para>
Regardless of the type of image you are using, you need to download the pre-built kernel
that you will boot in the QEMU emulator and then download and extract the target root
filesystem for your target machine's architecture.
You can get architecture-specific binaries and filesystems from
<ulink url='&YOCTO_MACHINES_DL_URL;'>machines</ulink>.
You can get installation scripts for stand-alone toolchains from
<ulink url='&YOCTO_TOOLCHAIN_DL_URL;'>toolchains</ulink>.
Once you have all your files, you set up the environment to emulate the hardware
by sourcing an environment setup script.
Finally, you start the QEMU emulator.
You can find details on all these steps in the
"<ulink url='&YOCTO_DOCS_QS_URL;#using-pre-built'>Using Pre-Built Binaries and QEMU</ulink>"
section of the Yocto Project Quick Start.
</para>
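<para>
As a rough sketch of these steps, and assuming an x86 (<filename>qemux86</filename>)
target with illustrative script and file names, the flow might look like the following:
<literallayout class='monospaced'>
     $ source /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
     $ runqemu qemux86 bzImage-qemux86.bin core-image-sato-qemux86.ext3
</literallayout>
The environment setup script name and the kernel and root filesystem file names vary
with the toolchain and images you downloaded, so substitute the names that match your
setup.
</para>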
<para>
Using QEMU to emulate your hardware can result in speed issues
depending on the target and host architecture mix.
For example, using the <filename>qemux86</filename> image in the emulator
on an Intel-based 32-bit (x86) host machine is fast because the target and
host architectures match.
On the other hand, using the <filename>qemuarm</filename> image on the same Intel-based
host can be slower.
But you still achieve faithful emulation of ARM-specific behavior.
</para>
<para>
To speed things up, the QEMU images support using <filename>distcc</filename>
to call a cross-compiler outside the emulated system.
If you used <filename>runqemu</filename> to start QEMU, and the
<filename>distccd</filename> application is present on the host system, any
BitBake cross-compiling toolchain available from the build system is automatically
used from within QEMU simply by calling <filename>distcc</filename>.
You can accomplish this by defining the cross-compiler variable
(e.g. <filename>export CC="distcc"</filename>).
Alternatively, if you are using a suitable SDK image or the appropriate
stand-alone toolchain is present in <filename>/opt/poky</filename>,
the toolchain is also automatically used.
</para>
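<para>
For example, assuming <filename>distccd</filename> on the host has been started and
allows connections from the emulated network (the address range shown is only a typical
default for <filename>runqemu</filename> networking), compiling inside QEMU might look
like the following:
<literallayout class='monospaced'>
     # On the host (illustrative):
     $ distccd --daemon --allow 192.168.7.0/24

     # Inside the QEMU session:
     $ export CC="distcc"
     $ make
</literallayout>
</para>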
<note>
Several mechanisms exist that let you connect to the system running on the
QEMU emulator:
<itemizedlist>
<listitem><para>QEMU provides a framebuffer interface that makes standard
consoles available.</para></listitem>
<listitem><para>Generally, headless embedded devices have a serial port.
If so, you can configure the operating system of the running image
to use that port to run a console.
The connection uses standard IP networking.</para></listitem>
<listitem><para>SSH servers exist in some QEMU images.
The <filename>core-image-sato</filename> QEMU image has a Dropbear secure
shell (ssh) server that runs with the root password disabled.
The <filename>core-image-basic</filename> and <filename>core-image-lsb</filename> QEMU images
have OpenSSH instead of Dropbear.
Including these SSH servers allows you to use standard <filename>ssh</filename> and
<filename>scp</filename> commands (see the example following this note).
The <filename>core-image-minimal</filename> QEMU image, however, contains no ssh
server.</para></listitem>
<listitem><para>You can use a provided, user-space NFS server to boot the QEMU session
using a local copy of the root filesystem on the host.
In order to make this connection, you must extract a root filesystem tarball by using the
<filename>runqemu-extract-sdk</filename> command.
After running the command, you must then point the <filename>runqemu</filename>
script to the extracted directory instead of a root filesystem image file.</para></listitem>
</itemizedlist>
</note>
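<para>
For example, assuming the typical default addressing that <filename>runqemu</filename>
sets up (the running image is usually reachable at 192.168.7.2) and an illustrative
file name, you could connect to a <filename>core-image-sato</filename> session as follows:
<literallayout class='monospaced'>
     $ ssh root@192.168.7.2
     $ scp hello-world root@192.168.7.2:/usr/bin
</literallayout>
</para>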
</section>
</chapter>
<!--
