Compare commits


346 Commits

Author SHA1 Message Date
Richard Purdie
6caa7d1d6c bitbake: command: Fix getCmdLineAction bugs
Executing "bitbake" doesn't get a sane message since the None return value
wasn't being handled correctly. Also fix msg -> cmd_action['msg'] as
otherwise an invalid variable is accessed which then crashes the server
due to the previous bug.

(Bitbake rev: c6211291ae07410832031a5274690437cc2b09a6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:14 +00:00
Richard Purdie
2a69358610 bitbake: command: Add missing import traceback
Without this, if an exception occurs the server will silently crash
with no feedback to the user about why (since traceback isn't imported).

(Bitbake rev: e637a635bf7b5a9a2e9dc20afc18aceec98d578f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:44:06 +00:00
Christopher Larson
f3acb135e7 bitbake: command: add error to return of runCommand
Currently, command.py can return an error message from runCommand, due to
being unable to run the command, yet few of our UIs (just hob) can handle it
today. This can result in seeing a TypeError with traceback in certain rare
circumstances.

To resolve this, we need a clean way to get errors back from runCommand,
without having to isinstance() the return value. This implements such a thing
by making runCommand also return an error (or None if no error occurred).

As runCommand now has a method of returning errors, we can also alter the
getCmdLineAction bits such that the returned value is just the action, not an
additional message. If a sync command wants to return an error, it raises
CommandError(message), and the message will be passed to the caller
appropriately.

Example Usage:

    result, error = server.runCommand(...)
    if error:
        log.error('Unable to run command: %s' % error)
        return 1
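
For context, a minimal sketch of the server-side half of this convention. CommandError and the (result, error) return come from the message above; the dispatch helper and function name below are assumptions, not the actual command.py code:

    class CommandError(Exception):
        """Raised by a sync command to report an error message to its caller."""

    def run_command(dispatch, commandline):
        # Returns (result, error); error is None when the command ran cleanly.
        try:
            return dispatch(commandline), None
        except CommandError as exc:
            return None, str(exc)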

(Bitbake rev: 717831b8315cb3904d9b590e633000bc897e8fb6)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:58 +00:00
Saul Wold
878ef6a31c libtasn1: Upgrade to 2.13
(From OE-Core rev: 94c375a281378413d24a402ec6a59762d0eb5b85)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Nitin A Kamble
f49e7629df libtasn1: fix build with automake 1.12
(From OE-Core rev: 6fb4913eb237062303bcda50e9910f53dc95d0dd)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Saul Wold
e124ac50dc libtasn1: Update to 2.12
Use the GNU_MIRROR correctly

(From OE-Core rev: d02a682360d803f9b5f033ddc5d0f43020eebd13)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Martin Jansa
498951a247 bison: move remove-gets.patch to BASE_SRC_URI, it's needed for bison-native too if host has (e)glibc-2.16
(From OE-Core rev: cae95a527c1e9faefc0c051254e67dad7fad4197)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-28 12:43:41 +00:00
Elizabeth Flanagan
3456295898 build-appliance-image: Bump SRCREV
With the pending point release for denzil we need to point
to the release revision and the correct branch.

(From OE-Core rev: 0a9e8bf35afd5990c1b586bba5eb68f643458a4b)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:23:43 +00:00
Elizabeth Flanagan
c917351c9b DISTRO var bump for pending release
With the pending 1.2.2 release we need to bump distro vars.

(From meta-yocto rev: f9b4864a7fb4f25df74f1bf3dc1d55e72bd27fc1)

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-04 22:21:17 +00:00
Tom Zanussi
f3db1cd8ff yocto-bsp: set branches_base for list_property_values()
yocto_bsp_list_property_values() is missing the context it needs to
properly filter choicelists, so add it to the context object.

Fixes [YOCTO #3233]

(From meta-yocto rev: 064b15f76c5b52899f4c3fdef06412c3063062a5)

(From meta-yocto rev: 601b6227908f3dd7972ad62c53d1041f4429aeb2)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:37:08 +00:00
Scott Rifenbark
b0689417df documentation: Formats. Also, the January 2013 date summary
I went into the chapters and did some formatting in order
to generate a new commit.  The previous commit was pushed
with a wrong summary message.  The date for the 1.2.2
release is January 2013.

(From yocto-docs rev: 457549a44cb7871d5c645f5aab02350cf76b6f1f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
270402dd38 documentation: Updated manual history tables for Feb 2013
The release date for the five manuals was updated to
Feb 2013 for the 1.2.2 release.

(From yocto-docs rev: 2110815be55bddbfd24495aad7b8d5e2b69f3475)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
5d30ffd8f8 documentation: Added February 2013 as release date for 1.3.1
I added this date as the release date for the five manuals
that have a manual history table.

(From yocto-docs rev: 5f107aab8bd2de0be78163eaf356656ddae4bf5f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
c2c8bb40e8 poky.ent: Updated to remove "current" and replace with 1.2.2
(From yocto-docs rev: ad61c64d5b33ca3b7aa02f67934c7c2317d8cbe1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:27 +00:00
Scott Rifenbark
625044e9a8 documentation: Updated Manual History table for 1.2.2 release
Involved putting in a placeholder date, bumping the copyright
date to 2013, and updating the poky.ent file as appropriate
for 1.2.2 and 7.0.2.

(From yocto-docs rev: 0a76733066b3440809ecafce756c5fdb4eafaae6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
d571e5a68a Documentation: ref-manual - Updated LIC_FILES_CHKSUM example.
One of the examples used "startline" instead of "beginline".
Correction made.

(From yocto-docs rev: 59345ad197619280bef7a469d671feae80f0c4e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Rifenbark
8e078932c3 documentation: poky-ref-manual - Fixed grammar typo.
(From yocto-docs rev: 2660f17b79a36772081e37117be85ee304b78f20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Scott Garman
151d4fbc4e cups: patch for CVE-2011-2896
Patch from: http://cups.org/strfiles/3867/str3867.patch

The LZW decompressor in the LWZReadByte function in giftoppm.c in the
David Koblas GIF decoder in PBMPLUS, as used in the gif_read_lzw
function in filter/image-gif.c in CUPS before 1.4.7, the LZWReadByte
function in plug-ins/common/file-gif-load.c in GIMP 2.6.11 and earlier,
the LZWReadByte function in img/gifread.c in XPCE in SWI-Prolog 5.10.4
and earlier, and other products, does not properly handle code words
that are absent from the decompression table when encountered, which
allows remote attackers to trigger an infinite loop or a heap-based
buffer overflow, and possibly execute arbitrary code, via a crafted
compressed stream, a related issue to CVE-2006-1168 and CVE-2011-2895.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2896

[YOCTO #3582]
[ CQID: WIND00299595 ]

(From OE-Core rev: f4aca76c7933abf2771999c309d49ab91a3d9480)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Merged with denzil branch, partial fix for denzil bug [YOCTO #3652]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:26 +00:00
Li Wang
caa1d03089 librsvg: CVE-2011-3146
Store node type separately in RsvgNode

commit 34c95743ca692ea0e44778e41a7c0a129363de84 upstream

The node name (formerly RsvgNode:type) cannot be used to infer
the sub-type of RsvgNode that we're dealing with, since for unknown
elements we put type = node-name. This led to a (potentially exploitable)
crash, e.g. when the element name started with "fe", which tricked
the old code into considering it an RsvgFilterPrimitive.
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3146

https://bugzilla.gnome.org/show_bug.cgi?id=658014

[YOCTO #3581]
[ CQID: WIND00376773 ]
Upstream-Status: Backport

(From OE-Core rev: 6d030fcb69221da073ce413049deb8447934bed5)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts with denzil branch.

Fixes denzil bug [YOCTO #3651].

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
yanjun.zhu
a86e32a18b squashfs: fix CVE-2012-4025
CQID:WIND00366813

Reference: http://squashfs.git.sourceforge.net/git/gitweb.cgi?
p=squashfs/squashfs;a=patch;h=8515b3d420f502c5c0236b86e2d6d7e3b23c190e

Integer overflow in the queue_init function in unsquashfs.c in
unsquashfs in Squashfs 4.2 and earlier allows remote attackers
to execute arbitrary code via a crafted block_log field in the
superblock of a .sqsh file, leading to a heap-based buffer overflow.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4025

(From OE-Core rev: e6fddd1961061895e9335fa94b636163efdc9caa)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>

[YOCTO #3564]
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
156c2554b7 freetype: patches for CVE-2012-5668, 5669, and 5670
For details of these security issues, please see:

http://www.openwall.com/lists/oss-security/2012/12/25/1

Thanks to Eren Turkay <eren@hambedded.org> for submitting source
patches that apply cleanly to freetype 2.4.9.

This fixes denzil bug [YOCTO #3649]

(From OE-Core rev: be34916d81b71385a560a6990c7b30eba243b356)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
b6037b6d2f libxml2: patch for CVE-2012-2871
The patch comes from:
http://src.chromium.org/viewvc/chrome/trunk/src/third_party/libxml/ \
src/include/libxml/tree.h?r1=56276&r2=149930

libxml2 2.9.0-rc1 and earlier, as used in Google Chrome before
21.0.1180.89, does not properly support a cast of an unspecified
variable during handling of XSL transforms, which allows remote
attackers to cause a denial of service or possibly have unknown other
impact via a crafted document, related to the _xmlNs data structure in
include/libxml/tree.h.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-2871

[YOCTO #3580]
[ CQID: WIND00376779 ]

(From OE-Core rev: fa3d44594360786b2526d64f0ea5bc26b44a1fa8)

Signed-off-by: Li Wang <li.wang at windriver.com>

This fixes denzil bug [YOCTO #3648]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:25 +00:00
Scott Garman
cf77faba91 gitignore: add generated doc files to ignore list
(From OE-Core rev: c067fbcb910f888cc6328d725a395ce681862377)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Richard Purdie
88b65c79ac boot-directdisk: Fix kernel location after STAGING_KERNEL_DIR change
This catches up with the STAGING_KERNEL_DIR location change
and uses the correct variable to future-proof this issue.

[YOCTO #2783]

(From OE-Core rev: 28715eff6dff3415b1d7b0be8cbb465c417e307f)

(From OE-Core rev: f02a7341e37aec155772e1546d8b21ef2c9f5e9d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
bfae0622b5 build-appliance-image: Allow SRCREV to be overridden
This will allow us to automagically set the SRCREV for builds on the
autobuilder. It will still require manual updating for releases.

(From OE-Core rev: 1b4781e5c6eee234fcf57dd53d5167b31d81a482)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:24 +00:00
Scott Garman
01c1421270 psplash: new patch to fix segfault
This fixes a segmentation fault when passing -a without
an argument.

Fixes [YOCTO #2903]

(From OE-Core rev: f5b8ba5e51ac41cf375119a88083617f667a85d5)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Mihai Lindner
2ccb03f9b7 sysklogd: removed tabs from syslog.conf
Yocto #2926: syslog.conf should not have tabs within the selector field.
Removed tabs from the selector field of syslog rules. In syslog.conf,
tabs or spaces should only be used to separate selectors from
actions.

(From OE-Core rev: 1316be4e597332a629842b3f5a7dde8e45dd057d)

(From OE-Core rev: c806466c8d4a9d0d4a66d34d3565d5879c2f2b0f)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
07f304f4e2 grub,guile,cpio,tar,wget: Fix gnulib for absence of gets in eglibc
eglibc 2.16 does not export gets anymore

(From OE-Core rev: 043d67c6677fa87496c4c441e9d366e2003ab9aa)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported guile
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
db915496d3 bison: Fix for gets being removed from eglibc 2.16
(From OE-Core rev: bc91a267d097c100480ea02ece7fb372167eaf7f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:23 +00:00
Khem Raj
8fe4344e74 gettext,m4,augeas,gnutls: Account for removal of gets in eglibc 2.16
These recipes use gnulib, which needs this change so that gets is used
when it is defined and not otherwise. Until that change goes into
gnulib and all of these packages then upgrade the gnulib in their
source base, we patch them.

(From OE-Core rev: b955f1a7bc716055c78ed575eccac6f611dc2395)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch and backported gnutls
patch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
3c57cb356e diffutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: b6bcd4e26e94364939c8874db90e64fbb242e841)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Khem Raj
38d6972032 coreutils: Fix build with eglibc 2.16
eglibc 2.16 has removed gets so we account for that

(From OE-Core rev: 965243ab5b5d992977193c444dbbbf09701467c2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-01-03 12:34:22 +00:00
Scott Garman
ef745cb34f poky.conf: Add Ubuntu 12.04.1 LTS to SANITY_TESTED_DISTROS
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 16:02:04 +00:00
yanjun.zhu
a6f0dcbe7d squashfs: fix for CVE-2012-4024
Reference:http://squashfs.git.sourceforge.net/git/gitweb.cgi?p=
squashfs/squashfs;a=commit;h=19c38fba0be1ce949ab44310d7f49887576cc123

Fix potential stack overflow in get_component() where an individual
pathname component (specified on the command line or in an extract
file) could exceed the 1024-byte targname allocated on the stack.

Fix by dynamically allocating targname rather than storing it as
a fixed size on the stack.

[YOCTO #3513]

Fixes denzil [YOCTO #3520]

(From OE-Core rev: d35560f33f257bd12a07c7c0be770319086d6ad9)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Marcin Juszkiewicz
cf796f8908 libxml: disable lzma
On my system libxml-native got linked against the host copy of liblzma and as a
result libxslt-native was not linkable:

| x86_64-linux-libtool: link: gcc -isystem/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/include -O2 -pipe -Wall -Wl,-rpath-link -Wl,/home/hrw
/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath-link -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-
linux/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -Wl,-rpath -Wl,/home/hrw/HDD/devel/canonical/ci-linaro/oecore/buil
d/tmp-eglibc/sysroots/x86_64-linux/lib -Wl,-O1 -o .libs/xsltproc xsltproc.o  -L/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib -L/home/hrw/
HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/lib ../libxslt/.libs/libxslt.so ../libexslt/.libs/libexslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/
build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/libxslt/.libs/libxslt.so /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux
/usr/lib/libxml2.so -ldl /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/liblzma.so -lrt -lz -lm -pthread -Wl,-rpath -Wl,/home/hrw/HDD/deve
l/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_code@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_auto_decoder@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_end@XZ_5.0'
| /home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/sysroots/x86_64-linux/usr/lib/libxml2.so: undefined reference to `lzma_properties_decode@XZ_5.0'
| collect2: error: ld returned 1 exit status
| make[2]: *** [xsltproc] Error 1
| make[2]: Leaving directory `/home/hrw/HDD/devel/canonical/ci-linaro/oecore/build/tmp-eglibc/work/x86_64-linux/libxslt-native-1.1.26-r8/libxslt-1.1.26/xsltproc'

(From OE-Core rev: 42e03215cc494f1508b96c2bb63243a02e5ef812)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:20 +00:00
Saul Wold
e416fb6920 libxml2: Update to 2.8.0
Removed 2 patches that are now fixed upstream.
Updated the hash.c LIC_FILES_CHKSUM due to the date being updated to 2012.

(From OE-Core rev: c74ed920d3a9a0e379f8fd450e2841628ee0beb2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Resolved merge conflicts in denzil branch.

Addresses CVE-2011-1944.

Fixes denzil [YOCTO #2703]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Richard Purdie
aa66dc91d3 libxml2/libxslt: Don't depend on ansidecl.h header
We don't DEPEND on binutils for ansidecl.h, so ensure we never
use the header. This makes builds deterministic and means something like:

bitbake binutils
bitbake libxml2 -c configure
bitbake binutils -c clean
bitbake libxml2

doesn't fail to build.

(From OE-Core rev: 54d27bbc26d1e45e51ee8ef0f051a2bd8f627cc0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Nitin A Kamble
b45184aef3 libxml2: fix build with automake 1.12
(From OE-Core rev: dd1b77c473ee92608ad0a5bdbea0880d2f613c2c)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:19 +00:00
Phil Blundell
0e43be806d openssl: Use ${CFLAGS} not ${FULL_OPTIMIZATION}
The latter variable is only applicable for target builds and could
result in passing incompatible options (and/or failing to pass
required options) to ${BUILD_CC} for a virtclass-native build.

(From OE-Core rev: d5a99f3dab07fa676788b434e18174c0798d4460)

Signed-off-by: Phil Blundell <philb@gnu.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Scott Garman
d6935247dd openssl: upgrade to 1.0.0j
Addresses CVE-2012-2333

Fixes [YOCTO #2682]

Fixes denzil [YOCTO #2701]

(From OE-Core rev: cf84ebac391b243099fe0d05223433ecb8e71641)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
yanjun.zhu
b8f6d9cbfd libproxy: Fix for CVE-2012-4504
Reference:https://code.google.com/p/libproxy/source/detail?r=853

Stack-based buffer overflow in the url::get_pac function in url.cpp
in libproxy 0.4.x before 0.4.9 allows remote servers to have an
unspecified impact via a large proxy.pac file.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-4504

[YOCTO #3487]

Fixes denzil [YOCTO #3511]

(From OE-Core rev: 543d608ae6251956b84e6423ec66f146f926d4b8)

Signed-off-by: yanjun.zhu <yanjun.zhu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Martin Jansa
2288ff099c opkg-utils: bump SRCREV to latest
(From OE-Core rev: 119215fee75a64de49d498c3d57446783722a292)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:18 +00:00
Andrei Gherzan
93cc23571e opkg-utils: Add needed python modules as RDEPENDS
(From OE-Core rev: dadfb4914b25a970c61e7f2354c01086d4823fd6)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
95f2a5b635 rootfs_rpm.bbclass: save rpmlib rather than remove it
The rpmlib was removed for images that add
"remove_packaging_data_files" to ROOTFS_POSTPROCESS_COMMAND, which
breaks incremental rpm image generation in the second build, since
list_installed_packages would then get incorrect values. Move the
rpmlib to ${T} rather than removing it, and move it back when
INC_RPM_IMAGE_GEN = 1.

[YOCTO #2690]

(From OE-Core rev: c30e79510c06701f10f659eedaa0fe785538ac17)

(From OE-Core rev: 15e13ea1fc8a0c29b4ca68c31c83ca013c89c36e)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Robert Yang
75997b4565 package_rpm.bbclass: Fix incremental rpm image generation
Fix the incremental rpm image generation; it didn't work since the code
had been changed.

The btmanifest should have a ".manifest" suffix, so that it can be moved
to ${T} by rootfs_rpm.bbclass:
mv ${IMAGE_ROOTFS}/install/*.manifest ${T}/

Note: The locale pkgs would always be re-installed.

[YOCTO #2690]

(From OE-Core rev: 5149630746626c6d416f26ab9dd1c7213fcd8c50)

(From OE-Core rev: 1f5113ae91ed639cf06fcbb9431b460d7a06bbbc)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:17 +00:00
Roy.Li
970d279774 bitbake: compile tar-replacement firstly
Whether tar-replacement is compiled is decided by the version of the
host tar: if the host tar version is lower than 1.23, compiling
tar-replacement is needed.

While the tar-replacement sysroot is being populated and the tar binary
is still being written into the sysroot, other packages may already use
the partially written tar to unpack files, which leads to failures
reporting the error below:
"bitbake_build/tmp/sysroots/x86_64-linux/usr/bin/tar: Text file busy"

Now we compile tar-replacement first to ensure that a partially written
tar command will not be used.

(From OE-Core rev: 3c1c4719fc96f6f1fbb257413d6baf3d91fdf4e8)

(From OE-Core rev: 1a6f61d9493bdbade256dc6c19bbffe75a2684a4)

Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Joe Slater
40e6fc6a65 gettext: install libgettextlib.a before removing it
In a parallel (multiple-job) build, the Makefile can simultaneously
be installing and removing libgettextlib.a.  We serialize
the operations.

(From OE-Core rev: 2750546b2152eecdbb37e963a2495383f6944184)

(From OE-Core rev: 500c9c1e0047ba9f35e3591f4252fe2dd38bc4f1)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Paul Eggleton
e9c2218231 classes/qmake_base: support linux-gnuspe/linux-uclibcspe TARGET_OS
Fix borrowed from OE-Classic. This should fix build failures during
do_configure of Qt applications with the p1022ds machine from
meta-fsl-ppc, for example.

(From OE-Core rev: a19fc8e19a6cc6885a1e0616b1f42cc49c8f2c9f)

(From OE-Core rev: 0baef81f0ebf854b3e3e78b0d3745cc8ad41491e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:16 +00:00
Ross Burton
84399e189b gst-plugins-good: disable (uninstalled) examples
The examples pull in a GTK+ build dependency, so remove that too.

(From OE-Core rev: f6975629fd5aa34bf423415bf2328e2146a6e675)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-12-07 15:58:15 +00:00
Paul Eggleton
846b7c3887 bitbake: lib/bb/siggen.py: log when tainting the signature of a task
Log a note when applying a taint to a task signature (e.g. when using
the -f or -C command line options) so that the user knows this has been
done.

(Bitbake rev: 0fd960fdea83874eedb541cbc2920257e0f3fb81)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:48 +01:00
Paul Eggleton
732007cbb6 bitbake: bitbake: ensure -f causes dependent tasks to be re-run
If -f is specified, force dependent tasks to be re-run next time. This
works by changing the force behaviour so that instead of deleting the
task's stamp, we write a "taint" file into the stamps directory, which
will alter the taskhash randomly and thus trigger the task to re-run
next time we evaluate whether or not that should be done as well as
influencing the taskhashes of any dependent tasks so that they are
similarly re-triggered. As a bonus because we write this file as
<stamp file name>.taskname.taint, the existing code which deletes the
stamp files in OE's do_clean will already handle removing it.

This means you can now do the following:

bitbake somepackage
[ change the source code in the package's WORKDIR ]
bitbake -c compile -f somepackage
bitbake somepackage

and the result will be that all of the tasks that depend on do_compile
(do_install, do_package, etc.) will be re-run in the last step.

Note that to operate in the manner described above you need full hashing
enabled (i.e. BB_SIGNATURE_HANDLER must be set to a signature handler
that inherits from BasicHash). If this is not the case, -f will just
delete the stamp for the specified task as it did before.
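
As an illustration only (assumed helper names, not the real siggen/runqueue code), the idea is to write a random taint file next to the stamp and fold any taint back into the task hash:

    import hashlib
    import os
    import uuid

    def write_taint(stampbase, taskname):
        # Creates <stamp file name>.<taskname>.taint; do_clean's existing stamp
        # removal already matches this pattern, so it gets cleaned up too.
        with open("%s.%s.taint" % (stampbase, taskname), "w") as f:
            f.write(str(uuid.uuid4()))

    def taskhash_with_taint(basehash, stampbase, taskname):
        # Any taint present perturbs the hash, so this task and everything
        # depending on it is re-triggered on the next run.
        taintfile = "%s.%s.taint" % (stampbase, taskname)
        if not os.path.exists(taintfile):
            return basehash
        with open(taintfile) as f:
            return hashlib.md5((basehash + f.read()).encode("utf-8")).hexdigest()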

This fix is required for [YOCTO #2615] and [YOCTO #2256].

(Bitbake rev: f7b55a94226f9acd985f87946e26d01bd86a35bb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-12 08:58:34 +01:00
Martin Jansa
9945923c8b openssl: add deprecated and unmaintained find.pl from perl-5.14 to fix perlpath.pl
* openembedded-core/meta/recipes-connectivity/openssl/openssl.inc
*
* is using perlpath.pl:
*
*   do_configure () {
*           cd util
*           perl perlpath.pl ${STAGING_BINDIR_NATIVE}
*   ...
*
* and perlpath.pl is using find.pl:
* openssl-1.0.0i/util/perlpath.pl:
*   #!/usr/local/bin/perl
*   #
*   # modify the '#!/usr/local/bin/perl'
*   # line in all scripts that rely on perl.
*   #
*
*   require "find.pl";
*   ...
*
* which was removed in perl-5.16.0 and marked as deprecated and
* unmaintained in 5.14 and older:
* /tmp/usr/lib/perl5/5.14.2/find.pl:
*   warn "Legacy library @{[(caller(0))[6]]} will be removed from the Perl
*   core distribution in the next major release. Please install it from the
*   CPAN distribution Perl4::CoreLibs. It is being used at @{[(caller)[1]]},
*   line @{[(caller)[2]]}.\n";
*
*   # This library is deprecated and unmaintained. It is included for
*   # compatibility with Perl 4 scripts which may use it, but it will be
*   # removed in a future version of Perl. Please use the File::Find module
*   # instead.

(from OE-Core rev c09bf5d177a7ecd2045ef7e13fff4528137a9775)

(From OE-Core rev: c15fae372cf75403facc28cf76f973b1279425dd)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:09 +01:00
Dennis Lan
9bf253f467 openjade-native: fix undefined Getopts error, use std namespace
Using Gentoo Linux as the build host, it fails without this patch.
Use Getopt::Std in place of getopts.pl.

https://bugs.gentoo.org/show_bug.cgi?id=420083

with the following error:
/usr/bin/perl -w ./../msggen.pl -l jstyleModule InterpreterMessages.msg
/usr/bin/perl -w ./../msggen.pl -l jstyleModule DssslAppMessages.msg
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [InterpreterMessages.h] Error 2
make[2]: *** Waiting for unfinished jobs....
Undefined subroutine &main::Getopts called at ./../msggen.pl line 22.
make[2]: *** [DssslAppMessages.h] Error 2

(from OE-Core rev 169a89b10817b742c063fcd76721e4dbbcca6199)

(From OE-Core rev: 7c7dcb05685d840c70474d409f6a58ae459c46f0)

Signed-off-by: Dennis Lan <dennis.yxun@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Richard Purdie
d156f75fea siteconfig: Clear cache before rebuilding
This ensures consistent build results and avoids build failures when,
for example, compiler flags change.

(From OE-Core rev: a5ff8396cad130f809f8f8da49bb38e6f80f923c)

(From OE-Core rev: b21d1daf709ddce14c93a5f16c91ff702e1cb7ff)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Darren Hart
809d97b938 gnutls: Update SRC_URI to use GNU_MIRROR
The current SRC_URI fails. Update it with the GNU_MIRROR SRC_URI from
upstream commit 753b22012f10c393c191d3116b9d38ee4be6d112.

(From OE-Core rev: 8430748e838872b22fe0e83a7dbf3a2a5b1faba2)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: John Howard <john.howard@intel.com>
CC: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Paul Eggleton
e5a85ac2e4 classes/cml1: ensure -c menuconfig forces a rebuild next time
Ensure the following results in the kernel being rebuilt, repackaged and
re-deployed in the final step:

bitbake virtual/kernel
bitbake -c menuconfig virtual/kernel
[ make changes to the kernel configuration and save ]
bitbake virtual/kernel

If there are no changes to the configuration saved, the rebuild will not
be triggered.

Note that this relies on a function recently added to BitBake and
requires full hashing (i.e. BB_SIGNATURE_HANDLER must be set to a
signature handler that inherits from BasicHash) - if this is not the
case or the function is not available in the version of BitBake being
used this change will do nothing.

Fixes [YOCTO #2256].

(From OE-Core rev: 9bf6b60e1599cf5dd87089d42584583cdfd6807a)

(From OE-Core rev: a9600e68e64a111be4cb934e14b914fa553b5654)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:08 +01:00
Jesse Zhang
b4bb378261 udev: don't mount with -o sync
mount.sh mounts all partitions with -o sync, which is bad for system
performance.

(From OE-Core rev: d49cf73754150b50a911d326aaa666f5da78855c)

(From OE-Core rev: 44c102386c9bca17743d2edd1f94d4071974204d)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-10 23:30:07 +01:00
Bogdan Marinescu
e7d4eba0a9 Save proxy settings
Proxy settings were not properly saved between Hob runs. This
fix is mostly a backport of code in master.

[YOCTO #3024]

Signed-off-by: Bogdan Marinescu <bogdan.a.marinescu@intel.com>
2012-10-01 18:13:15 +01:00
Paul Eggleton
aad5c9f699 bitbake: hob: format error messages properly
Error messages that use arguments need to be formatted properly, or we
don't get the full message. Use a formatter to do this when an error
occurs.
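
As an illustration of the underlying point (not the actual Hob code, and the file name below is made up), a log record that carries arguments only yields the full text once a formatter is applied:

    import logging

    record = logging.LogRecord("BitBake", logging.ERROR, __file__, 0,
                               "Unable to parse %s", ("conf/local.conf",), None)
    print(record.msg)                          # 'Unable to parse %s'
    print(logging.Formatter().format(record))  # 'Unable to parse conf/local.conf'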

Partial fix for [YOCTO #2983].

(Bitbake rev: 6783538884adecd914909a9ab4ca73c27575f3ad)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:13:05 +01:00
Matthew McClintock
42d9652fb1 poky.conf: add distros that work correctly
All the builds pass for these distros as well

atom-pc, beagleboard, mpc8513e-rdb, routerstationpro

As well as other powerpc targets:

p1022ds, p4080ds, p5020ds, p5020ds-64b

Signed-off-by: Matthew McClintock <msm@freescale.com>
2012-10-01 18:12:48 +01:00
Paul Eggleton
4e09d164d9 bitbake: hob: ensure error message text is properly escaped
Our lack of markup escaping was causing invalid markup, leading to the
error dialog being blank. Use the glib markup escaping function provided
by PyGTK+ to do this properly and avoid the blank error dialogs.
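
For illustration, a minimal sketch of the escaping step, assuming the glib module shipped with the static PyGTK-era bindings (the actual Hob call site is not shown in this log, and the message text is made up):

    import glib

    message = "Failure parsing conf file: expected <value> near line 10"
    # Escape '<', '>' and '&' so the Pango markup stays valid and the
    # error dialog no longer renders blank.
    dialog_markup = "<b>Error</b>\n%s" % glib.markup_escape_text(message)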

Partial fix for [YOCTO #2983].

(Bitbake rev: 563ea5233a5ab1629c51e802d04280692f96c596)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-10-01 18:12:32 +01:00
Darren Hart
d567e770c3 bootimg: Use STAGING_KERNEL_DIR
bootimg.bbclass was using STAGING_DIR_HOST/kernel instead of
STAGING_KERNEL_DIR, resulting in build failures for live images.

| install: cannot stat `/usr/local/dev/yocto/fishriver-test/build/tmp/sysroots/fishriver/kernel/bzImage': No such file or directory

Replace it with STAGING_KERNEL_DIR.

(From OE-Core rev: 8f16811a8d51982a8b3d70e6087aef4a41926840)

(From OE-Core rev: 032bd9a856f9ca0b43dff272bd4f95481aa46597)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Tested-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
a5f5b1e80a texi2html: Fix perl location on recent distros
This fixes errors like:
| error: Failed dependencies:
|       /bin/perl is needed by texi2html-5.0-r1.i586

(From OE-Core rev: d4c27021ffc813732526ab9ae6969e5ae0bdf7e8)

(From OE-Core rev: f28dcaf565050d5c857c3d09164104410a2e4173)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Richard Purdie
d47234d4c9 dbus: Ensure dbus-nativesdk doesn't RPROVIDE dbus-x11
dbus-nativesdk should not RPROVIDE dbus-x11, as this is incorrect and confuses
builds. This fixes the nativesdk case.

(From OE-Core rev: f4cc32585f9ac392460991b46b8cfa7a347a27e6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Radu Moisan
3cdd930eeb dbus: include dbus-launch in the main dbus package
Followed suggestions from Bugz 2261:

2) make the virtual/libx11 DEPENDS conditional based on the x11 distro feature.
This makes the build dependencies reflect the feature list.

3) remove dbus-x11, meaning that dbus-launch with its potential X11 dependency
is now back in dbus where it belongs.

4) make dbus provide dbus-x11, for compatibility.

Fixes [Yocto #2261]

(From OE-Core rev: 9bf6d834312581e8b8741fb9d1621e4c40de5687)

Signed-off-by: Radu Moisan <radu.moisan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:18 +01:00
Franklin S. Cooper Jr
cfd36177f7 u-boot: Use fw_env.config if available
* Add support for a board-specific fw_env.config file if available

(From OE-Core rev: 4bc2151d6bb500b0489bc00bce7574dc24f41b90)

Signed-off-by: Franklin S. Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Ross Burton
1ecc27c20c pulseaudio: remove ConsoleKit dependency
ConsoleKit is a runtime dependency for the ConsoleKit module, but there isn't a
build-time dependency.

(From OE-Core rev: ebfc81f57bbc60e958472d9a1257e6a19f60adbb)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Martin Jansa
82114edb4a pulseaudio: fix pulseaudio-server RDEPENDS
* module-cork-music-on-phone was renamed to module-role-cork
  http://cgit.freedesktop.org/pulseaudio/pulseaudio/commit/?id=3c5cc345472302b9511c19244b3eceb4a3674d8c

(From OE-Core rev: b102849a145ca6602ac8e499b1420672d290c11b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Khem Raj
a62752bb4f pulseaudio: Always enable NLS
When NLS is disabled, e.g. on uclibc, the build fails.
The actual problem is that the pulseaudio build system
should cater for this, but it does not.

(From OE-Core rev: b7d10637059b2352bcca45bc15b26d0dd056e78f)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:17 +01:00
Constantin Musca
7ffad91e79 pulseaudio: upgrade to 2.1
(From OE-Core rev: 540093fd9f52c86e6803554e2796668227bb89b5)

Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Xin Ouyang
1e2d6bffc5 libatomics-ops: update to the latest version 7.2
All old patches are dropped because:

Merged into 7.2 by upstream:
* fedora/libatomic_ops-1.2-ppclwzfix.patch
* gentoo/libatomic_ops-1.2-mips.patch
* gentoo/sh4-atomic-ops.patch
* libatomics-ops_fix_for_x32.patch

Obsolete:
* doublefix.patch

(From OE-Core rev: 59afdbbddbacf5d9c668bb8f011c8f150421d498)

Signed-off-by: Xin Ouyang <Xin.Ouyang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
cb1c31939d libcanberra: upgrade to 0.29
(From OE-Core rev: 074c34617a361a589665774ac8d9060a4ef4ef82)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Cristian Iorga
bec76a3813 pulseaudio: upgrade to 2.0
(From OE-Core rev: 961872787ac2c2b18d4589967e68a60ab3a4cc86)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	meta/recipes-multimedia/pulseaudio/pulseaudio_2.0.bb
2012-09-28 16:53:16 +01:00
Denys Dmytriyenko
ead623b2a0 pulseaudio: fix typo in the patch name, pulseaudo -> pulseaudio
No PR bump is needed.

(From OE-Core rev: ac3edffa2586064ff480b89e80b608f14e566fa7)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:16 +01:00
Khem Raj
b973813796 libatomics-ops: Make it build for SH4
(From OE-Core rev: fc47820982aea41f2b0fdd4d87fb0242bf7346dd)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
8941d5aa3b pulseaudio: disable tcpwrap by default
This ensures that tcpwrapper usage is always disabled; this was
inconsistent before because the build would test for libwrap and
sometimes enable it and sometimes not.

This ensures consistent build reproducibility.

(From OE-Core rev: 5b3d18d12fff156d4d360b779eb4ae78a480ce10)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Saul Wold
f6876adb6a bluez4: fix packaging issue after update
WARNING: QA Issue: bluez4: Files/directories were installed but not shipped
  /usr/share/dbus-1
  /usr/share/dbus-1/system-services
  /usr/share/dbus-1/system-services/org.bluez.service

(From OE-Core rev: 5c52472af73ed91970096eed3d216fdbc7ff42d2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
3c3614c562 bluez4: update to ver. 4.101
(From OE-Core rev: 6e6407e9a8c59cb51685c4b767b62eacb8dbf852)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:15 +01:00
Cristian Iorga
1c71304f81 gst-plugin-bluetooth: update to ver. 4.101
(From OE-Core rev: 6d1bbc26d27506e5dd5c32013ea5074c5c5a342d)

Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Dongxiao Xu
5be28549ae bluez-hcidump: upgrade to version 2.4
(From OE-Core rev: 4934e54821ed19fb98ea7691af6a2d3bcb1922f7)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Jonas Danielsson
bbe83b55cd bluez4: make alsa support conditional upon DISTRO_FEATURES
Do not enable alsa in bluez4 unless it's included in DISTRO_FEATURES.

(From OE-Core rev: f6297d648b1464719d1e1e42e99d473b69a13e56)

Signed-off-by: Jonas Danielsson <jonas.danielsson@lundinova.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
3c363a70aa dhcp: remove dependency of dev/staticdev packages on main package
The main package is empty and is not produced, which leaves the dev
and staticdev packages broken. Remove the dependencies (added in
bitbake.conf by default) to fix this.

(From OE-Core rev: 5380c65e819d82f783cb75aa21db7c73bb445189)

(From OE-Core rev: 02dc5c9b7b1f21c9f8d9a9299933fa88dc16c542)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:14 +01:00
Paul Eggleton
2fab410f2e classes/mirrors: remove bogus gnutls mirror
This mirror entry, which maps to itself plus a slash, would, if matched, put the
fetcher into a circular loop until the stack space was exhausted. A patch
has been sent to fix this issue in BitBake, but we should remove the
bogus entry as well.

(Note that this entry does not actually trigger the issue with current
master because the gnutls recipe now uses GNU_MIRROR instead of
ftp.gnutls.org, thus the bogus mirror entry is not matched.)

(from OE-Core rev 0de1827a9601143b090f751ea702fdb65a936b77)

(From OE-Core rev: 31ec9690c37c3a57e557684cbf5e5a4069bd57b7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
4b8d430c1f sysvinit-inittab_2.88dsf.bb: only run serial checks at boot if we have items to check
Right now, we delay running the serial console checks until we boot up. This causes
issues for read-only file systems. So, if we have not configured any serial ports to
check via SERIAL_CONSOLES_CHECK, we can skip the check at boot. This fixes any
issues with read-only file systems and ipk packaging.

(From OE-Core rev: 2136030c1d240d9b8f123e3c8af5dacf66e86ab4)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Saul Wold
b0f05958fc kernel: Fix packaging issue
Remove /etc since it is empty: when building for a machine that does not
deliver any module config files, /etc is empty and is then warned
about as not being shipped, so we remove it.

This occurs in the routerstationpro with the following warning:
WARNING: For recipe linux-yocto, the following files/directories were installed but not shipped in any package:
WARNING:   /etc

(From OE-Core rev: 961498e3b4c4a93070bf278e67fc48c02333cd63)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Matthew McClintock
57cdce3108 valgrind_3.7.0.bb: fix missing leading space on _append
(From OE-Core rev: a175e09d1b0be85d8cbc58672485ec5ee5475ae2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:13 +01:00
Richard Purdie
241653a01b autotools.bbclass: Add functionality to force a clean of ${B} when reconfiguring (and ${S} != ${B})
Unfortunately, whilst rerunning configure and make against a project will mostly
work, there are situations where it does not do the right thing.

In particular, eglibc and gcc will fail out with errors where settings
do not match a previously built configuration. It could be argued they are
broken but the situation is what it is. There is the possibility of more subtle
errors too.

This patch adds removal of the build directory (${B}) when configure is
rerunning, the sstate checksum for do_configure has changed and ${S} != ${B}.
We could simply use a stamp but saving out the previous configuration checksum
adds some data at no real overhead.
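
A minimal sketch of that logic, with assumed variable and helper names rather than the actual autotools.bbclass code:

    import os
    import shutil

    def maybe_wipe_builddir(s_dir, b_dir, stampfile, configure_hash):
        # Only out-of-tree builds (${S} != ${B}) with a changed do_configure
        # checksum get their build directory removed before reconfiguring.
        if not stampfile or s_dir == b_dir:
            return
        previous = None
        if os.path.exists(stampfile):
            with open(stampfile) as f:
                previous = f.read().strip()
        if previous is not None and previous != configure_hash:
            shutil.rmtree(b_dir, ignore_errors=True)
            os.makedirs(b_dir)
        with open(stampfile, "w") as f:
            f.write(configure_hash)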

If needed, this behaviour can be disabled with
CONFIGURESTAMPFILE = "" in a recipe, or users could disable it globally.

[YOCTO #2774]
[YOCTO #2848]

This is particularly helpful for eglibc and gcc which use split builds by default and
are a particular source of reconfigure type problems.

(From OE-Core rev: f15f61af77cc4e52a037f509f8e49e1ea530cf35)

(From OE-Core rev: 14fc04e480aaf1cb5cd9d3a04a5b38d2fda115b1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Matthew McClintock
f6fb4890df eglibc_2.15: make patch only for Freescale machines
It's only Freescale machines that don't imlpement fsqrt, we don't want
this to effect others.

This patch was only added after the last release of denzil, so it's not
present in the release yet. Also, 2.15 is removed from master, so it
should only apply to the denzil branch.

(From OE-Core rev: c541f746253fdb6d11cd961c7dff1aca8c2d2703)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:12 +01:00
Zhenhua Luo
b4de44f425 valgrind: fix debug info reading error when do memcheck on ppc targets
following is the error message:
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/ld-2.13.so:
        --2263-- Can't make sense of .got section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /home/root/lzh:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_core-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /usr/lib/valgrind/vgpreload_memcheck-ppc32-linux.so:
        --2263-- Can't make sense of .data section mapping
        --2263-- WARNING: Serious error when reading debug info
        --2263-- When reading debug info from /lib/libc-2.13.so:
        --2263-- Can't make sense of .data section mapping

(From OE-Core rev: 14626cc76210ed6fe40316a311f24147ed8de8be)

(From OE-Core rev: a76be502fbb9517c38cd716fa1f21a238b314162)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b32c50e016 python-pygtk: Upgrade to 2.24
This is needed for the build appliance and Hob also

(From OE-Core rev: a0abfd60e8cb78b40278eec85a8d0c722f8ef1e4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:11 +01:00
Saul Wold
b5013e9e9e build-appliance: add zip-native, which is needed to build the final zip bundle
(From OE-Core rev: 8aeceab5d03fa3c88f0128ce1ac6bfde0d88e1b6)

(From OE-Core rev: 9ab2613327fcd4cf811afc52eff92903644e9b11)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
0cd0a3b475 kernel.bbclass: put perf .debug dir in -dbg package
This is needed to avoid the following QA error:

ERROR: QA Issue: non debug package contains .debug directory:
kernel-dev path

Patch proposed by Matthew McClintock <msm@freescale.com>

(From OE-Core rev: df879f191d1e86596444cb30a0a77a785958520c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Scott Garman
825c647f65 relocatable.bbclass: Account for case when ORIGIN is in RPATH
This patch was backported from OE-Core rev:
43600df0d4efc976a9451163dd334b4763937932

This fixes a case where the RPATH embedded in a program has one of
its paths already relative to $ORIGIN. We were losing that path
if such a path existed. This patch appends it to the new edited
rpath being created when we see it.

so an RPATH like the one below

(RPATH) Library rpath:
[$ORIGIN/../lib/amd64/jli:$ORIGIN/../jre/lib/amd64/jli]

would end up being empty

but after this patch it is kept intact
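
Illustration only (the class itself rewrites ELF rpaths; the helper and argument names below are assumptions): the rule being added is to keep entries that are already $ORIGIN-relative instead of dropping them while the rest are rewritten:

    def rewrite_rpath(entries, origin_relative_libdir):
        new_rpath = []
        for entry in entries:
            if entry.startswith("$ORIGIN"):
                # Already relative to the binary location -- keep it as-is
                new_rpath.append(entry)
            else:
                # Rewrite other entries relative to $ORIGIN as before
                new_rpath.append("$ORIGIN/" + origin_relative_libdir)
        return ":".join(new_rpath)

    # rewrite_rpath(["$ORIGIN/../lib/amd64/jli", "/usr/lib"], "../lib")
    # -> "$ORIGIN/../lib/amd64/jli:$ORIGIN/../lib"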

(From OE-Core rev: 9ebb327ae17d1a765fd1499546ccf9076bb93234)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:10 +01:00
Khem Raj
16b6bc4288 kernel.bbclass: Don't package kxgettext.o
kxgettext.o is generated when building ppc kernels
so we end up with packaging errors like

> ERROR: QA Issue: Architecture did not match (20 to 62) on
> /work/virtex5-poky-linux/linux-xilinx-2.6.38-r00/packages-split/kernel-dev/usr/src/kernel/scripts/kconfig/kxgettext.o

(From OE-Core rev: 21952b62e3fca6c9fe750db62ca2b0587912be8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
4ef357f033 kernel.bbclass: add non-sanitized kernel provides
If the kernel version string uses characters or symbols that
need to be sanitized for the package name, we can end up with a
mismatch between module requirements and what the kernel
provides.

The kernel version is pulled from utsrelease.h, which contains
the exact string that was passed to the kernel build, not
one that is sanitized. This can result in:

 echo "CONFIG_LOCALVERSION="\"MYVER+snapshot_standard\" >> ${B}/.config

 <build>

 % rpm -qp kernel-module-uvesafb-3.4-r0.qemux86.rpm --requires
update-modules
kernel-3.4.3-MYVER+snapshot_standard
 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-myver+snapshot-standard = 3.4-r0

At rootfs assembly time, we'll have a dependency issue with the kernel
providing the sanitized string and the modules requiring the utsrelease.h
string.

To not break existing use cases, we can add a second provides to the
kernel packaging with the unsanitized version string, leaving the
kernel module packaging unchanged.

   RPROVIDES_kernel-base += "kernel-${KERNEL_VERSION}"

 % rpm -qp kernel-3.4.3-myver+snapshot-standard-3.4-r0.qemux86.rpm --provides
kernel-3.4.3-MYVER+snapshot_standard
kernel-3.4.3-myver+snapshot-standard = 3.4-r0
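
For reference, a sanitizing step along these lines (a sketch of the kind of transformation applied to package names, not necessarily the exact OE code) reproduces the mismatch shown above:

    import re

    def sanitize_pkgname(name):
        # Lower-case and map characters not valid in package names to '-'
        return re.sub(r"[^a-z0-9.+-]", "-", name.lower())

    print(sanitize_pkgname("kernel-3.4.3-MYVER+snapshot_standard"))
    # -> kernel-3.4.3-myver+snapshot-standard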

(From OE-Core rev: 0af1d1412add1baf3f6c1a5cfb2e4f92fb6a85dc)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Darren Hart
91714ea8ae kernel: Add kernel headers to kernel-dev package
[YOCTO #1614]

Add the kernel headers to the kernel-dev package. This packages what was
already built and kept in sysroots for building modules with bitbake.
Making this available on the target requires removing some additional
host binaries.

Move the location to /usr/src/kernel

Before use on the target, the user will need to:

    # cd /usr/src/kernel
    # make scripts

This renders the kernel-misc recipe empty, so remove it.

As we use /usr/src/kernel in several places (and I missed one in the
previous version), add a KERNEL_SRC_DIR variable and use that throughout
the class to avoid update errors in the future.

Now that we package the kernel headers, drop the
kernel_package_preprocess function which removed them from PKGD.

All *-sdk image recipes include dev-pkgs, so the kernel-dev package will
be installed by default on all such images.

(From OE-Core rev: 0e3e88f9f87d1083ddd7dcaa526b3cd7a1cd53ff)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Tom Zanussi <tom.zanussi@intel.com>
CC: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Bruce Ashfield
fbb459aab9 kernel: save $kerndir/tools and $kerndir/lib from pruning
The kernel source tree in the sysroot has all unnecessary source
code removed. The existing use case is to support module building
out of the sysroot, but as more tools are moved into the kernel
tree itself there are new use cases for the kernel sysroot source.

To avoid putting dependencies on the kernel, and to be able to
individually build and package these tools out of the source tree,
we can save $kerndir/tools and $kerndir/lib from being removed.
This enables tools like perf to be built out of the kernel source
in the sysroot, without significantly increasing the amount of
source in the sysroot.

(From OE-Core rev: 456f97c25488c2f6f6810b1a32781513cc719d8e)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:09 +01:00
Martin Jansa
7b7f3cff44 kernel.bbclass: pass KERNEL_VERSION to depmod calls in postinst
* without this, kernel upgrades where KERNEL_VERSION changes
  (e.g. 3.4.2 -> 3.4.3) generate .dep for the running 3.4.2, and after reboot the user
  ends up without any modules loaded. To make it worse, after reboot nothing is upgraded
  to trigger another kernel(-module) postinst to generate .dep for the now-running 3.4.3

(From OE-Core rev: 2a7cdf088e484bb123a72826a02c3169a418ed0a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
a512e68202 man: make man actually work by installing custom man.config
The default man.conf is misnamed and doesn't work: it references gtbl, while groff installs tbl and other differently-named tools. The replacement man.config is imported from OE Classic and runtime tested on Angstrom.
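
For illustration, the kind of entry involved (the exact lines are illustrative,
in the classic man.config format):

    # broken default: groff does not install a gtbl binary
    TBL     /usr/bin/gtbl
    # entry shipped in the imported man.config
    TBL     /usr/bin/tbl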

Before:

root@beaglebone:~# man man
sh: /usr/bin/gtbl: No such file or directory
sh: line 0: echo: write error: Broken pipe
gunzip: write: Broken pipe
gunzip: error inflating
sh: line 0: echo: write error: Broken pipe
sh: line 0: echo: write error: Broken pipe

After:

root@beaglebone:~# man man
MAN(1)                        Manual pager utils                        MAN(1)

NAME
       man - an interface to the on-line reference manuals

SYNOPSIS
       man  [-C  file]  [-d]  [-D]  [--warnings[=warnings]]  [-R encoding] [-L
       locale] [-m system[,...]] [-M path] [-S list]  [-e  extension]  [-i|-I]
       [--regex|--wildcard]   [--names-only]  [-a]  [-u]  [--no-subpages]  [-P
       pager] [-r prompt] [-7] [-E encoding] [--no-hyphenation] [--no-justifi-
       cation]  [-p  string]  [-t]  [-T[device]]  [-H[browser]] [-X[dpi]] [-Z]
       [[section] page ...] ...
       man -k [apropos options] regexp ...
       man -K [-w|-W] [-S list] [-i|-I] [--regex] [section] term ...
       man -f [whatis options] page ...
       man -l [-C file] [-d] [-D] [--warnings[=warnings]]  [-R  encoding]  [-L
       locale]  [-P  pager]  [-r  prompt]  [-7] [-E encoding] [-p string] [-t]
       [-T[device]] [-H[browser]] [-X[dpi]] [-Z] file ...
       man -w|-W [-C file] [-d] [-D] page ...
       man -c [-C file] [-d] [-D] page ...
       man [-hV]

Check for config name:

root@beaglebone:~# rm /etc/man.config
root@beaglebone:~# man man
Warning: cannot open configuration file /etc/man.config
No manual entry for man

As a bonus, a bunch of references to the build host are removed from the config file.

(From OE-Core rev: 13d82ecd6b25ff4c34b3639e10113d7ebb33dc88)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Koen Kooi
ef789c1327 man: fix RDEPENDS and reformat recipe
(From OE-Core rev: f9aba0793123dafffc305c028f10e8f595c5a4ee)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:08 +01:00
Richard Purdie
2011408939 opkg-utils: Update to version with python 2.6 fix
(From OE-Core rev: ba2058aa74eb6cd263bd19a8338eeeced734f55c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
9520635a00 opkg-utils: bump SRCREV
* there are 2 small fixes:
  - python-2.6 compatibility
  - missing C option for opkg-build

(From OE-Core rev: 825a992af39d4eb75f105241e4cd94624b1dea43)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
a259fdb296 opkg-utils: bump SRCREV for Packages cache fix and other fixes
(From OE-Core rev: ce5b46980f35097bd5fcc8195c5d5be1b980c870)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Valentin Popa
55997cbf3d xz: updated to version 5.1.1alpha
The licenses are the same; only some whitespace was added/removed.

(From OE-Core rev: dbfc3d05e49b46ec033623d66e64cf781df4f632)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:07 +01:00
Martin Jansa
988318927a package.bbclass: fix TypeError in runstrip
* Some packages have .ko files which are not ELF; without this change
  packaging fails with a TypeError, with this change only runstrip fails
  and reports where:
  ERROR: runstrip: ''arm-oe-linux-gnueabi-strip'  '/OE/shr-core/tmp-eglibc/work/armv4t-oe-linux-gnueabi/emacs-23.4-r0/package/usr/share/emacs/23.4/etc/tutorials/TUTORIAL.ko'' strip command failed

(From OE-Core rev: 12e40ca7317289fec126d9f30b28a717fe72d274)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Richard Purdie
bf9a5226fc distutils/setuptools: Fix files layout and unbreak builds
The last two distutils changes progressively broke the builds. Firstly they
moved things from the site_packages directory to being higher up the tree,
which introduced package QA warnings as a side effect. Secondly, they interact
badly with setuptools, which passes in --root=${D} itself.

This patch restores the original directory layout, hence fixing the QA
warnings and also passes extra options to setuptools to deal with the
--root option it passes.

(From OE-Core rev: bed18d5df7915e4127a538be9c7550e185c8c850)

(From OE-Core rev: f60a04ccbf9ed614b5b5b9b02c8a24980bf17d3d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
d72eea5a5d distutils.bbclass: change order of args to install step
This lets the user override the install-lib argument again if it needs
to be something else, otherwise things like python-setuptools
won't be able to modify the install-lib dir.

This fixes a new issue exposed by my previous distutils patch
that fixed the python modules default install location

(From OE-Core rev: 0fd7b7dd361e59c687366480cd9f89c81c2e5bce)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Matthew McClintock
a99b03e3ed distutils.bbclass: fix libdir for 64-bit python modules built with distutils
Without this some modules will be installed in /usr/lib/python2.6/
instead of ${libdir}/python2.6 (e.g. /usr/lib64/python2.6).

(From OE-Core rev: bc6bd774aa8a3e085e9cabcefb11c3fc537139d5)

(From OE-Core rev: fc513eda1cfccc583f49847c3c04b5c781585e15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:06 +01:00
Saul Wold
1773d1da62 build-appliance-image: Add vmx* files and build zip file
This commit adds the vmx* files needed to set up a VMware image; it
also packages the vmdk along with the vmx files.

(From OE-Core rev: ed0ffc12ed6f98471dced1ce2020af4e5c99f2c7)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Saul Wold
d8ccc44114 build-appliance-image: Update SRCREV to Denzil 1.2.1
(From OE-Core rev: 242fb49ac416824e53c58a8a0cb9bb9d19a72ec4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Valentin Popa
6f3e6c75de build-appliance-image: rename from self-hosted-image
(-) renamed self-hosted-image to build-appliance-image
(-) replaced build-appliance-image description

[YOCTO #2636]

(From OE-Core rev: 04096f31778886479dac479132bded57e717653e)

(From OE-Core rev: bf133c331029ac588b27173145db5be5f6ee1ef5)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Franklin S Cooper Jr
89fa2c1fc6 psplash: LIC_CHKSUM Tweak
* Change the license checksum to use the lines in psplash.h that contain
  license information instead of doing a checksum on the entire file (see
  the sketch below).
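
A minimal sketch of the technique (line numbers and checksum are placeholders,
not the recipe's real values):

    # checksum only the license header of psplash.h, not the whole file
    LIC_FILES_CHKSUM = "file://psplash.h;beginline=1;endline=8;md5=<md5-of-those-lines>"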

(From OE-Core rev: 2c80eb5b9c103087774f032be01f5cf6309464d7)

Signed-off-by: Franklin S Cooper Jr <fcooper@ti.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:05 +01:00
Kang Kai
e22b4411ae ltp_20120104: add rdepends
[Yocto #2973]

Add an rdepends on libaio to fix this defect.
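
A minimal sketch of the change, using the usual recipe syntax:

    # runtime dependency so installing ltp also pulls libaio into the image
    RDEPENDS_${PN} += "libaio"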

(From OE-Core rev: 79d12314729649add741509a46b7770e22dd23ad)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
33921847a2 kernel-yocto: set master branch to a defined SRCREV
To support custom repositories that set a SRCREV and that only have
a single master branch, do_validate_branches needs a special case
for 'master'. We can't delete and recreate the branch, since you
cannot delete the current branch; instead we must reset the branch
to the proper SRCREV.

(From OE-Core rev: 3a8dc0a01d2756bb8f51afccad772fca1dc48af3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Bruce Ashfield
5427f5d70f linux-yocto: allow do_validate_branches to handle all branches
Branch validation will not restrict a branch that doesn't exist
in the tree at the time of validation (since you can't reset a
SRCREV on a non-existent branch). This restriction can be removed
by looking for all branches that contain the specified SRCREV
and forcing them to that value.

(From OE-Core rev: 790f6441851fd4b2b84129340c438092f058135b)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Jeff Polk
4624b5eefb recipes-core/eglibc-2.13: Patch for locale-base-tt-ru packaging
The eglibc-2.13 build can fail because locale-base-tt-ru is in
PACKAGES twice. This is because the SUPPORTED list and the i18n
directories are out of sync with each other; the SUPPORTED list
expects a directory named "tt_RU.UTF8", but the directory is
actually named "tt_RU", and likewise for the @iqtelif variants.

(From OE-Core rev: 280886bb865efde6bda327a1c821220d64c893ba)

Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:04 +01:00
Matthew McClintock
e7376bb459 eglibc/gcc: add patches to fix eglibc 2.15 build
This drops one patch against eglibc for 2.15 and adds two new ones;
it also adds a gcc patch. We use all of these internally and they
are tested quite well.

(From OE-Core rev: a7014c446b0d2f3b40c4b058c64bb61c8720d799)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Marcin Juszkiewicz
c2bbe5f5d3 libpam: disable NIS to not link with libtirpc when it is available
I was checking ways to make incremental builds faster so I started using
sstate-cache and SSTATE_MIRRORS. But this gave me a nasty bug:

| Collected errors:
|  * satisfy_dependencies_for: Cannot satisfy the following dependencies
for php-cgi:
|  *    libtirpc1 (>= 0.2.2) *
|  * opkg_install_cmd: Cannot install package php-cgi.

I checked details:

In my previous build libtirpc got built before libpam, so libpam found it
and linked against it. As a result packages depend on libtirpc1, but as there
is no such build dependency the sstate handling code did not use the libtirpc copy...
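
A minimal sketch of the fix, assuming NIS support is controlled by Linux-PAM's
--disable-nis configure switch:

    # never probe for NIS/libtirpc, so the result no longer depends on build order
    EXTRA_OECONF += "--disable-nis"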

(From OE-Core rev: e629bdcd1bcb51f2d2101fb53daeac0bd29ab637)

(From OE-Core rev: 8ff92269cd63e153892d129e6e2255812a454a99)

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
758677e212 glib.inc: disable selinux for native builds
In addition to dbus, we need to disable selinux for glib as well,
otherwise we will get the same link error.

(Note: Upstream master has disabled selinux AFAICT)
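
A minimal sketch of the approach, assuming glib's --disable-selinux configure
switch and the virtclass-native override used in this era:

    # keep the native build from linking against the host's libselinux
    EXTRA_OECONF_append_virtclass-native = " --disable-selinux"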

(From OE-Core rev: 318bc896b1bd5399807a417865b8e088d9d9eb15)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:03 +01:00
Matthew McClintock
b96bd8465a dbus.inc: disable selinux for native builds
(Note: Upstream master has disabled selinux for this AFAICT)

Fixes issues such as:

| /usr/bin/ld: cannot find -lselinux
| collect2: ld returned 1 exit status
| make[3]: *** [libdbus-glib-1.la] Error 1

(From OE-Core rev: 7ee10f25430421dc6e9552ffe15a6a5acbd4cb51)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-09-28 16:53:02 +01:00
Tom Zanussi
65ffa73950 yocto-bsp: use base branches for qemu 'newbranch' case
The branch updating for the [YOCTO #2587] fix inadvertently changed
some of the qemu branch names; fix them.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
a76fc366ce yocto-bsp: generate default properties even if json specified
Users seem to want to specify incomplete property sets when using json
input.  Allow this by generating default properties before the
user-specified properties are applied; the user will then get the
defaults for any unspecified values, and avoid cryptic backtraces.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
759237f721 yocto-bsp: use emgd 1.10 for i386 template
Make i386 template use emgd 1.10 for denzil, along with associated
changes.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:22 +01:00
Tom Zanussi
3e632506c2 yocto-bsp: add i586 option for i386
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
1b9306705c yocto-bsp: add some standard policy
Add some useful default options to the i386 and x86_64 templates.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
65706548f6 yocto-bsp: remove 'branch' statements in .scc if reusing branch
If reusing a branch (need_new_branch == 'n') we don't need to branch
in the .scc, so make it conditional on need_new_branch.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
bc6b04fe22 yocto-bsp: use rstrip() for assignment lines
strip() isn't necessary and causes unintended formatting changes in
the output; rstrip() removes the trailing newlines as intended while
leaving indenting whitespace intact.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
8a51a8afe8 yocto-bsp: use standard branch mapping in bsp templates
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
2517376ee8 yocto-bsp: add standard branch mapping
Add a mechanism to distinguish common-pc variants of standard
branches.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
9078e985ac yocto-bsp: use branches_base
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
b123553c1a yocto-bsp: allow branch display filtering
Add a "branches_base" property that can be used to allow only matching
branches to be returned from all_branches().

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
717704d12c yocto-bsp: update default branch names
Make sure the default branch names match branch names found in the
kernel branch listing.

Fixes [YOCTO #2587].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:21 +01:00
Tom Zanussi
e5c5e20a5e yocto-bsp: strip '/base' from kernel branches in templates
For new branches, users can specify /base branches, but we don't want
the '/base' in the resultant branch name, so remove it.

Fixes [YOCTO #2693].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Tom Zanussi
96631f080b yocto-bsp: add new strip_base() function
Add a strip_base() function to remove '/base' from the branch names
presented to the user.

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
2012-08-21 11:35:20 +01:00
Paul Eggleton
78d15a8015 cooker: fix UnboundLocalError when exception occurs during parsing
Fix a recent regression where we see the following additional error
after an error occurs during parsing:

ERROR: Command execution failed: Traceback (most recent call last):
  File "/home/paul/poky/poky/bitbake/lib/bb/command.py", line 84, in runAsyncCommand
    self.cooker.updateCache()
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1202, in updateCache
    if not self.parser.parse_next():
  File "/home/paul/poky/poky/bitbake/lib/bb/cooker.py", line 1672, in parse_next
    self.virtuals += len(result)
UnboundLocalError: local variable 'result' referenced before assignment

(Bitbake rev: 1ae0181ba49ccfcb2d889de5dd1d8912b9e49157)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:20 +01:00
Matthew McClintock
ffff5fa2d7 bitbake/fetch2: remove references to ChecksumError class
From: Matthew McClintock <msm@freescale.com>

When merging fetch2 improvements from master into denzil, there
were too many dependencies to pull in the entire ChecksumError
class, so this patch removes references to ChecksumError for
compatibility.

Fixes this issue:

NameError: global name 'ChecksumError' is not defined

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2012-08-21 11:35:08 +01:00
Richard Purdie
119b7ff164 bitbake: fetch2: Handle errors occurring when building mirror urls
When we build the mirror urls, it's possible an error will occur. If it
does, it should just mean we don't attempt this mirror url. The current
code actually aborts *all* the mirrors, not just the failed url.

This patch catches and logs the exception allowing things to continue.

(Bitbake rev: c35cbd1a1403865cf4f59ec88e1881669868103c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
1162729c79 bitbake: fetch2: Improve mirror looping to consider more cases
Currently we only consider one pass through the mirror list. This doesn't
catch cases where, for example, you might want to set up a mirror of a mirror
and allow multiple redirections. There is no reason we can't support this,
and the patch loops through the list recursively now.

As a safeguard, it will stop if any duplicate urls are found, hence
avoiding circular dependency looping.

(From Poky rev: 0ec0a4412865e54495c07beea1ced8355da58073)

(Bitbake rev: e585730e931e6abdb15ba8a3849c5fd22845b891)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:08 +01:00
Richard Purdie
4591963649 bitbake: fetch2: Explicitly check for mirror tarballs in mirror handling code
With support for things like git:// -> git:// urls, we need to be
more explicit about the mirrortarball check since we need to fall
through to the following code in other cases.

(From Poky rev: 28e858cd6f7509468ef3e527a86820b9e06044db)

(Bitbake rev: a2459f5ca2f517964287f9a7c666a6856434e631)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
3f30bf4eab bitbake: fetch2: Split try_mirrors into two parts
There are no functionality changes in this change

(From Poky rev: d222ebb7c75d74fde4fd04ea6feb27e10a862bae)

(Bitbake rev: db62e109cc36380ff8b8918628c9dea14ac9afbc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	bitbake/lib/bb/fetch2/__init__.py

Signed-off-by: Khem Raj <kraj@juniper.net>
2012-08-21 11:35:07 +01:00
Richard Purdie
ea032bb2c3 bitbake: fetch2: Ensure when downloading we are consistently in the same directory
This assists with build reproducibility. It also avoids errors if cwd
happens not to exist when we call into the fetcher. That situation
would be unusual but I hit it with the unit tests.

(From Poky rev: 86517af9e066c2da1d580fa66b7c7f0340f3403e)

(Bitbake rev: b886c6c15a58643e06ca5ad7a3ff1f7766e4f48c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
4bfb54e0ba bitbake: fetch2: Only cache data if fn is set, it's pointless caching it against a None value
(From Poky rev: c2df30bf6d1f8c263a38c45866936c1bf496ece5)

(Bitbake rev: f4b59cc6e1c3ddc168a1678ce39ff402ea1ff4cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
729b396b21 bitbake: fetch2: Fix error handling in uri_replace()
(From Poky rev: 1bfba28a583cb167f60e05ecdf34d0786dc1eec5)

(Bitbake rev: aa7467a764ddcbc7d65af99e88cf093b6ec6d24e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Richard Purdie
bdb918f808 bitbake: fetch2/__init__: Make it clearer when uri_replace doesn't return a match
(From Poky rev: dc9976331c5cbb0983adb54f6deb97b9203bacbc)

(Bitbake rev: eb96609864dec95a516e6e687dd6a2f31d523acf)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:35:07 +01:00
Zhenhua Luo
e4f8c7f693 valgrind: fix default.supp missing issue
When running valgrind, the following error appears:
    ==2254== FATAL: can't open suppressions file "/usr/lib/valgrind/default.supp"

(From OE-Core rev: 0b3261d513cdad80174a9b9e804981c50bcb7ca2)

(From OE-Core rev: 95756cfbb7a9348b23cb46a49a5509e57e973faf)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:33:31 +01:00
Paul Eggleton
4539cf1936 classes/license: fix manifest to work with deb
Prepend the license manifest creation call to ROOTFS_POSTPROCESS_COMMAND
instead of appending to ROOTFS_POSTINSTALL_COMMAND. The latter is not
implemented for the deb backend (and probably ought to just be removed
completely), and by using _prepend we can still ensure it occurs before
package info is removed (and before buildhistory in case it is needed
there in future).
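
A minimal sketch of the relocation (assuming the manifest hook is
license.bbclass's license_create_manifest function; treat the exact name as
illustrative):

    # runs for every backend, before package metadata is stripped from the rootfs
    ROOTFS_POSTPROCESS_COMMAND_prepend = "license_create_manifest; "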

(From OE-Core rev: 6ffd958ff2f7f1d07ab9da5ca8db1727dd074980)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
6631752c25 scripts/buildhistory-diff: add GitPython version check
Display an error if the user does not have at least version 0.3.1 of
GitPython installed.

(From OE-Core rev: 07b9c3bc67439d47627fe256796465520b533753)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
75b7901f22 buildhistory_analysis: fix error when version specifier missing
Passing None to split_versions() will raise an exception, so check that
the version is specified before passing it in.

(From OE-Core rev: a530aee6d9b2b63ab5fa780b1761eac759e8c833)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Paul Eggleton
c520132cb8 classes/rootfs_*: fix splitting package dependency strings
If a + character appears in a version specification within the list of
package dependencies, the version will not be removed from the list in
list_package_depends/recommends, leading to garbage appearing in the
dependency graphs generated by buildhistory. To avoid any future
problems due to unusual characters appearing in versions, change the
regex to match almost any character.
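
A hypothetical before/after of such a version-stripping pattern (not the
actual class code), written as the sed step a shell implementation might use:

    # before: assumes versions only contain "safe" characters, so "1.2+gitX" leaks through
    sed -e 's/([>=<]* [A-Za-z0-9.~-]*)//g'
    # after: strip anything between the parentheses
    sed -e 's/([^)]*)//g'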

Fixes [YOCTO #2451].

(From OE-Core rev: d592c3a26c630d5f3bfba4804a93766447bf72c9)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:45 +01:00
Saul Wold
a8d4eb449f foomatic: fix perl path for target
This problem appears on F17 when configure finds /bin/perl. Since the beh
script is a target-side script, we need to set PERL in do_configure_prepend
in order for the correct perl to be used.
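
A minimal sketch of the idea (placement and quoting illustrative):

    do_configure_prepend() {
        # make configure record the target perl, not whatever the host offers first
        export PERL="${bindir}/perl"
    }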

(From OE-Core rev: f189ee78bed0920cfd33689ebb9aad45fded2c4d)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
4253c34000 shadow: use 'users' group by default
The rootfs has the 'users' group at gid 100. Without this fix, new users would
be assigned to a non-existent group, and if a group with gid 1000 were
created later it would own all the files of the users created earlier.

(From OE-Core rev: c2bd2936907ea8b776d58e8cc58a8359a6e7e9b9)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Otavio Salvador
1a70ddc4e8 shadow-native: use 'users' group by default
The rootfs has the 'users' group at gid 100. Without this fix, new users would
be assigned to a non-existent group, and if a group with gid 1000 were
created later it would own all the files of the users created earlier.

(From OE-Core rev: 42e9f988bc691ca763d5eda3537d6281b7902794)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Matthew McClintock
6352d0a9a1 u-boot.inc: update linker arguments to pass --sysroot arg
If we are building from sstate-cache it's possible to be building
from another folder on another machine, therefore the linker requires
that a proper --sysroot is passed to it so it can find things like
libgcc.a and avoid errors such as:

| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o hello_world.o hello_world.c -c
| arm-poky-linux-gnueabi-gcc  -g  -O2  -fno-common -ffixed-r8 -msoft-float   -D__KERNEL__ -DCONFIG_SYS_TEXT_BASE=0x80008000 -I/local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/work/beagleboard-poky-linux-gnueabi/u-boot-v2011.06+git5+b1af6f532e0d348b153d5c148369229d24af361a-r0/git/include -fno-builtin -ffreestanding -nostdinc -isystem /local/yocto/upstream/label/ubuntu1204-64b/machine/beagleboard/poky/edison/tmp/sysroots/x86_64-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/../../lib/armv7a-vfp-neon-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/4.6.3/include -pipe  -DCONFIG_ARM -D__ARM__ -marm  -mabi=aapcs-linux -mno-thumb-interwork -march=armv5 -Wall -Wstrict-prototypes -fno-stack-protector -fno-toplevel-reorder   -o stubs.o stubs.c -c
| arm-poky-linux-gnueabi-ld  -r -o libstubs.o  stubs.o
| arm-poky-linux-gnueabi-ld -g -Ttext 0x80300000 \
| 			-o hello_world -e hello_world hello_world.o libstubs.o \
| 			-L. -lgcc
| arm-poky-linux-gnueabi-ld: cannot find -lgcc
| make[1]: *** [hello_world] Error 1

(From OE-Core rev: ad78441045183277a7e77341f4af6d9d65a4a3c8)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:44 +01:00
Nitin A Kamble
0a6f9a5f5a tcl: fix target recipe build issue on older distros
The builddir is put in front of the LD_LIBRARY_PATH, causing the target
library to be dynamically linked with the native tclsh.

Fix this behavior so that tcl is cross-built correctly.

This issue got exposed when eglibc-2.15 was configured for the target.

(From OE-Core rev: 8e25fe0ecc3d6fe2d5456b525c5014554bc70cfe)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Richard Purdie
7c5318e1a0 utils.bbclass: add helper function to add all multilib variants of a specific package
This is useful for the scenario where we want to add 'gcc' to
the root file system for all multilib variants
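
A hypothetical usage sketch (assuming the helper added is utils.bbclass's
multilib_pkg_extend() function; treat the name as illustrative):

    # expands to "gcc lib32-gcc ..." depending on the multilibs enabled
    IMAGE_INSTALL_append = " ${@multilib_pkg_extend(d, 'gcc')}"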

(From OE-Core rev: e82c2f0b91611f3e755985bb8d1608ca5792e825)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Enrico Scholz
88927f4f1f libtool: fixed parallel build related race
While building libtool, the libtool script itself will be regenerated
because OE modifies a dependency[1]. With -jX, this operation (removal,
creation of a non-executable file, 'chmod a+x') can happen at a time when
the script is about to be executed.  This can cause errors like:

| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| ...
| /bin/sh ./config.status libtool
| ...
| arm-linux-gnueabi-libtool: compile:  ccache arm-linux-gnueabi-gcc ...
| /bin/sh: ./arm-linux-gnueabi-libtool: Permission denied
| make[2]: *** [libltdl/libltdl_libltdl_la-lt__alloc.lo] Error 126

I am not sure whether the custom do_compile_prepend() is still needed.
For now only the issue above will be fixed by executing ./config.status
yet again.

[1] see 648290d5bf

(From OE-Core rev: 15204a6cbcdbbb84e02da05b1fb15644fe7df332)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Ting Liu
68888987ce image_types.bbclass: redefine EXTRA_IMAGECMD_jffs2 to leverage siteinfo
(From OE-Core rev: 3bff2398cd2d730111faa182d16356e189a36353)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:43 +01:00
Matthew McClintock
3129f4ff0e task-core-tools-testapps.bb: kexec-tools does not work on e5500-64b parts
This prevents kexec from building for this part since it does not work

(From OE-Core rev: d9bf008b36e8b2211624705d8ee4e90d94463dd5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Matthew McClintock
09ae715bb1 gcc-package-runtime.inc: Fix QA warning
> ERROR: QA Issue: gcc-runtime: Files/directories were installed but not shipped
>   /usr/lib/libgomp.so.1.0.0
>   /usr/lib/libgomp.so.1

(From OE-Core rev: 4ec107f822453bd9468009d7a2124a3d592610b5)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Leon Woestenberg
c26cfe7738 kernel.bbclass: Copy bounds.h only if it exists, needed for 2.6.x.
Linux 2.6.x kernels did not (all) have the bounds.h file, so copy
it only if it exists.

(See OE-Core 02ac0d1b65389e1779d5f95047f761d7a82ef7a4)

(From OE-Core rev: 6e9cfa4ba34d8899dfb271818ef30730de8353fa)

Signed-off-by: Leon Woestenberg <leon@sidebranch.com>
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Khem Raj
81aac104fa xserver-xorg: Fix build on powerpc
(From OE-Core rev: 5e141b2a7331f7ee8d9eedf02c4fc2ae5ed8d5ec)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
bc754beb3a curl: Use gnutls for target and openssl for native
Since gnutls is available on the target, use it; we do not build gnutls for
the native side as it adds too many dependencies, so use openssl there.

(From OE-Core rev: 0dc6543a2d898d381c287d6b7becfc8fb8f279c0)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:42 +01:00
Saul Wold
e20af93811 curl: enable ssl support
This patch enables ssl support for curl to allow git to clone from
https / ssl sites. We do not want to enable gnutls for native or
nativesdk, as it adds additional dependencies and increases build time.

[YOCTO #2532]

(From OE-Core rev: 9f7e9fb6cd08b3048e97dd1011f0510416beb103)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Martin Donnelly
235667966c augeas: Add libxml2 dependency
This patch fixes the following Augeas configure error.

| checking for LIBXML... no
| configure: error: Package requirements (libxml-2.0) were not met:
|
| No package 'libxml-2.0' found
|
| Consider adjusting the PKG_CONFIG_PATH environment variable if you
| installed software in a non-standard prefix.
|
| Alternatively, you may set the environment variables LIBXML_CFLAGS
| and LIBXML_LIBS to avoid the need to call pkg-config.
| See the pkg-config man page for more details.
| ERROR: oe_runconf failed

(From OE-Core rev: 1d55679821003ac4d652b08f2eebab1636505042)

Signed-off-by: Martin Donnelly <martin.donnelly@ge.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Zhenhua Luo
258bbaa1d2 task-core-sdk.bb: add libgomp and libgomp-dev by RECOMMENDS
(From OE-Core rev: e072dc29c6f4dd3c429e2f0e07da3c29bda36023)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Ting Liu
f44faa8757 lsof: define linux C library type when using eglibc
lsof tries to compile a temp C source file and execute the binary to
determine the Linux C library type (file Configure, lines 2689-2717).
This is impracticable for cross-compilation and may cause build issues on
some distros since it depends on host settings.

Fix below error when building for 64bit target on 64bit host:
[...]
| dsock.c:481:44: error: 'TCP_LISTEN' undeclared (first use in this function)
| dsock.c:482:45: error: 'TCP_CLOSING' undeclared (first use in this function)
[...]
| make: *** [dsock.o] Error 1

The actual issue exists in do_configure:
[...]
Testing C library type with cc ... done
Cannot determine C library type; assuming it is not glibc.

Which is in turn caused by the missing 'gnu/stubs-32.h' when compiling
the temp C source file on the host:
[...]
fatal error: gnu/stubs-32.h: No such file or directory compilation terminated.

file gnu/stubs-32.h is provided by 32bit glibc.

(From OE-Core rev: 8c38bc022de209187f31952ae02313dd3104f4c6)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
3cf5fc2272 sysvinit-inittab_2.88dsf.bb: Allow multiple serial port consoles to be defined
Set SERIAL_CONSOLES if you want to define multiple serial consoles. If you
need to check for the presence of the serial consoles, you can also define
SERIAL_CONSOLES_CHECK to determine whether they are present at boot. This
will prevent the error messages that pop up when a serial port is not present.

SERIAL_CONSOLES = "115200;ttyS0 115200;ttyS1 115200;ttyEHV0"
SERIAL_CONSOLES_CHECK = "${SERIAL_CONSOLES}"

The above lines in machine.conf or elsewhere will have the effect of defining
these serial consoles and removing any that are not present at boot.

(From OE-Core rev: 2e7dddfce4a40a56f671116a2001b13c57667c70)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:41 +01:00
Matthew McClintock
ffd554d2ff libgomp: add libgomp (openmp) library, and build for powerpc targets by default
(From OE-Core rev: d58668c6770f519199192c7e3817fbc7d6576af3)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
4899d07aa7 gcc: gcc-cross-canadian: use correct location for libraries for powerpc64
This fixes the issue where gcc invokes the linker with an incorrect -L
library location and gives up because it can't find libraries. It was
looking in a /lib folder instead of /lib64

(From OE-Core rev: aa010039a38188f1b1b38a978287d1597138b8b9)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
0b71ac7a99 gcc-configure-common.inc: use --with-long-double-128 on powerpc to comply with ABI
(From OE-Core rev: a2e00d2cae8e4b58fc3b9fc7853da519a615aa31)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
bdebedb6cf openjade-native_1.3.2.bb: fix typo and change the deps exclusion to correct var
(From OE-Core rev: 6ef3b77ba8baddb5748f2ee27d39a5a0d32e3bfb)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:40 +01:00
Matthew McClintock
b2b365e8f0 dtc.inc: fix for libdir == /usr/lib64
On 64bit systems dtc will still install libraries in /usr/lib
unless we have this override.

(From OE-Core rev: 679e04a33b6e4569e7a95758ccb10d50931f5d67)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
738f163797 packagedata.py: Fix get_subpkgedata_fn for multilib
This happens when trying to add libgcc-dev as a multilib package
(e.g. IMAGE_INSTALL_append = " lib32-libgcc-dev").

| Processing task-core-boot...
| Processing fman-ucode...
| Processing dosfstools...
| Processing lib32-libgcc-dev...
| Unable to find package lib32-libgcc-dev (libgcc-dev)!
NOTE: package fsl-image-full-1.0-r1.1.3.6: task do_rootfs: Failed

RPM (or bitbake?) is looking in tmp/pkgdata; however, some of these file
paths are munged for the multilib scenario:

$ find tmp/pkgdata/ | grep libgcc-dev$
tmp/pkgdata/ppce5500-fsl-linux/runtime/lib32-libgcc-dev
tmp/pkgdata/ppc64e5500-fsl-linux/runtime/libgcc-dev

This patch fixes where we look for these files so they can be found and
properly installed for the multilib root file system

(From OE-Core rev: 24e8399aeccf4b0742acd986bb506ff6f388b4a2)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
1ddd5378ec qemu-0.15.1: add patch to fix compilation problems on powerpc
ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
ERROR: Logfile of failure stored in: /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447
Log data follows:
| DEBUG: SITE files ['endian-big', 'bit-64', 'powerpc-common', 'common-linux', 'common-glibc', 'powerpc-linux', 'powerpc64-linux', 'common']
| ERROR: Function failed: do_compile (see /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/temp/log.do_compile.28447 for further information)
| NOTE: make -j 24
|   LINK  ppc-linux-user/qemu-ppc
| /opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/sysroots/x86_64-linux/usr/libexec/ppc64e5500-fsl-linux/gcc/powerpc64-fsl-linux/4.6.4/ld:/opt/yocto/cache-build/p5020ds-64b/build_p5020ds-64b_release/tmp/work/ppc64e5500-fsl-linux/qemu-0.15.1-r6/qemu-0.15.1/ppc64.ld:84: syntax error
| collect2: ld returned 1 exit status
| make[1]: *** [qemu-ppc] Error 1
| make: *** [subdir-ppc-linux-user] Error 2
| make: *** Waiting for unfinished jobs....
| ERROR: oe_runmake failed

(From OE-Core rev: 2a1f7a8be5170cdb85f9faae81d94ac2ca8b6566)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Matthew McClintock
ebbd09e21d libxml-parser-perl_2.41.bb: fix MakeMaker issues with using wrong CC/LD/etc
MakeMaker has a bug where it does not propagate CC/LD/etc information
down to the subprojects it generates Makefiles for... this recipe has an
Expat subproject which has issues building if we are using sstate-cache:
it will reference the old sysroots and be unable to build properly.
There is an upstream MakeMaker bug for this issue but we can work around
it by fixing up the Makefiles for now.

See:
https://rt.cpan.org/Public/Bug/Display.html?id=28632

(From OE-Core rev: e1609123a6ca6aef18e48afe0ce61325da910fc1)

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:39 +01:00
Zhenhua Luo
09c83947da linux-dtb: add multi-dtb build support
including the following enhancements:
    * support multi-dtb builds
    * skip dtb build and install when KERNEL_DEVICETREE is empty
    * print a warning message when a specified dts file is not available
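
An illustrative machine configuration exercising the multi-dtb support
(the device tree names are hypothetical):

    # build and install several dtbs for one kernel
    KERNEL_DEVICETREE = "board-reva.dts board-revb.dts"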

(From OE-Core rev: 59b149e466c9fc81ec94a740e805339db97fc3ac)

Signed-off-by: Zhenhua Luo <b19537@freescale.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Saul Wold
8c6455b6db xinetd: Update to 2.3.15
(From OE-Core rev: 48f93e0ade1c534b9af2b84874f9b17e3107c724)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-08-21 11:16:38 +01:00
Scott Rifenbark
73cdebf60d documentation/dev-manual/dev-manual-kernel-appendix.xml: Add note about conflict
Added a note to the part of the example where you bitbake the kernel
after turning off CONFIG_SMP.  The warnings you get can cause confusion;
the note explains they are normal.

(From yocto-docs rev: 08ed090f0b8b6970832242a52827ae2957918cf3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Scott Rifenbark
6b06a4fa1b documentation/dev-manual/dev-manual-kernel-appendix.xml: Added branch step
The example did not specify to switch to the "denzil" branch after
establishing the local repo of poky-extras.  The example will not
work without this step.

(From yocto-docs rev: 69b99a77f1f8247c217e77af89ecec3982adc264)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:54:26 +01:00
Ross Burton
2dcbb48df9 gconf.bbclass: don't register schemas in the install stage
Previously this was installing schemas in the sysroot, which is wrong for native
packages as nothing should touch the sysroot directly, and even more wrong for
non-native packages as the sysroot is irrelevant.

So, export the environment variable that stops the registration happening at
install time. The postinst script will handle the non-native case, and for the
sysroot I've opened #2648.  This isn't a massive problem as nothing to my
knowledge actually installs schemas to the sysroot.
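
A minimal sketch of the mechanism, assuming GConf's standard kill switch is
what gets exported (placement illustrative):

    do_install_prepend() {
        # tell gconftool-2 not to register schemas at install time;
        # the postinst handles registration on the target instead
        export GCONF_DISABLE_MAKEFILE_SCHEMA_INSTALL="1"
    }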

[YOCTO #2245]

(From OE-Core rev: 741146fa90f28f7ce8d82ee7f7e254872d519724)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-29 15:49:34 +01:00
Elizabeth Flanagan
fb2335fa2b distro.conf: Flipping for pending point release
7.0.1/1.2.1 release. Flipping distro.conf values

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-06-21 14:22:32 -07:00
Richard Purdie
e972d78009 poky-tiny: eliminate mtrace rdepends
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:52 +01:00
Scott Rifenbark
91d6344765 documentation: Updated Manual revision history table
Using July 2012 for the release date.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 12:01:39 +01:00
Anders Darander
a707b3269c qt4-embedded: fix QT_ARCH usage in QT_CONFIG_FLAGS
After the change to shell style functions (from python style), the
ability to use oe_filter_out on QT_CONFIG_FLAGS got broken.

This patch solves that by referring to QT_ARCH in a more correct way.

(From OE-Core rev: 8394dda5f12157c88005a788cd35421f498c9b82)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:35 +01:00
Bruce Ashfield
0d3748ca5d linux-yocto/3.0: update to v3.0.32
Updating the 3.0 kernel SRCREVs to integrate the v3.0.32 -stable
release.

(From OE-Core rev: 6d97c94d25713b47417e184308ab43947c7f243d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5f2b526109 linux-yocto/3.2: update to v3.2.18
Updating the 3.2 kernel SRCREVs to pick up the -stable update
to v3.2.18.

(From OE-Core rev: 0308f91b17b052902a01c98afdd5619cd0c617e5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
6bb0fdda40 linux-yocto: policy cleanups
Updating the meta SRCREVs to pick up configuration policy cleanups:

  49f931b meta/fishriver: remove redundant features and options
  51a6d3f meta/emenlow: remove redundant features and options
  101dd7f meta/crownbay: remove redundant features and options
  4110ecd meta/sugarbay: remove redundant features and options
  0f1304a meta/jasperforest: remove redundant features and options
  0a56a3b meta/common-pc-64: factor out SCSI CDROM option
  b71938a meta/common-pc-64: use usb-mass-storage feature
  0724f40 meta: add scsi cdrom feature
  438bca8 meta/common-pc: use usb-mass-storage feature
  c970881 meta: factor out SCSI options from the usb-mass-storage feature
  4c8135e meta: add scsi disk feature
  6872a81 meta: add scsi feature
  e706ec5 meta/sugarbay: factor out policy-related options
  8b7fbc2 meta/jasperforest: factor out policy-related options
  fea1b0e meta/fishriver: factor out policy-related options
  13bf9ab meta/emenlow: factor out policy-related options
  4748d50 meta/crownbay: factor out policy-related options
  44f592f meta/common-pc-64: factor out policy-related options
  5a3f5c7 meta/common-pc: factor out policy-related options
  1f5a10b meta/common-pc-64: use usb features
  4b87723 meta/common-pc: use usb features
  594ba05 meta: add ROOT_HUB_TT config option to the usb/ehci-hcd feature

(From OE-Core rev: db35cd40c7abe13a9701eb74099d69d461cadb0a)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
5ad28e97e6 linux-yocto: intel BSP config changes
Updating the meta SRCREV for the following fixes:

   1dfd60f meta/fishriver: move smp options from recipe-space
   012780a meta/emenlow: move smp options from recipe-space
   b59b1a5 meta/crownbay: move smp options from recipe-space
   74dc6ac meta/sugarbay: remove boot-live options
   a4bedcb meta/jasperforest: remove boot-live options
   4ae7b81 meta/sugarbay: use usb features
   30e7e8c meta/jasperforest: use usb features
   22d0c5d meta/fishriver: use usb features
   e262965 meta/emenlow: use usb features

(From OE-Core rev: bde50853658bab563a888b82278a6acfdce6305b)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:34 +01:00
Bruce Ashfield
495ea21c8b linux-yocto/3.2: configuration and pch merge
Updating the 3.2 SRCREVs to import the following meta/config
changes:

   6b3d4e0 meta: add mei feature
   519abac meta: add usb/uhci-hcd feature
   a67c5a3 meta/crownbay: use usb features
   0855066 meta: add usb/ohci-hcd feature
   15f1a99 meta: add usb/ehci-hcd feature
   8fa6408 meta: add usb/xhci-hcd feature
   c724a55 meta: add usb/base feature
   b55b3a1 sys940x: Cleanup sys940x.scc
   93f2e97 sys940x: Use PHYSICAL_START of 0x200000 to boot
   aaa034b sys940x: Add common standard and preempt-rt features
   e2b1286 sys940x: Add efi-ext to standard and preempt-rt configs
   d188c21 sys940x: Move emgd-1.10 data to the standard scc file
   72d9369 fri2: Cleanup fri2-$KTYPE.scc files re efi-ext.scc
   dbcb120 fri2: Use emgd-1.10 feature and branch

And the following driver fix:

   f39a0a9 pch_gbe: Do not abort probe on bad MAC

(From OE-Core rev: 0609299880ad0aca121e7192d84f85d913c40c62)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
94e3e894d0 eglibc: added ac_cv_path_ to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds
it there on the host machine; this ensures that it is set correctly for the
target.

[YOCTO #2363]

(From OE-Core rev: 0d957dd0604230bef1d01ee9992c56d2aca62ec1)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Saul Wold
8389decfe6 quilt: added ac_cv_path_BASH to CACHED_CONFIGUREVARS
On Fedora 17, bash has moved to /usr/bin/bash and the configure process finds
it there on the host machine; this ensures that it is set correctly for the
target.
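
A minimal sketch of the mechanism (assuming the target's bash lives at
${base_bindir}/bash):

    # pre-seed the configure cache so the target path wins over whatever the host has
    CACHED_CONFIGUREVARS += "ac_cv_path_BASH=${base_bindir}/bash"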

[YOCTO #2363]

(From OE-Core rev: d54ff1f79f05ba5bd0e1006545e7f1e699998668)

Signed-off-by: Saul Wold <sgw@linux.intel.com>

Reworked commit to fix merge conflicts with denzil branch.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:33 +01:00
Nitin A Kamble
7412611252 eglibc: package mtrace separately
Add libc-mtrace as a dependency for task-core-tools-debug.

Now eglibc-mtrace gets included in an sdk image and not in a non-sdk image.

This does not affect builds with uclibc.

This fixes bug: [YOCTO #2374]

(From OE-Core rev: 6f78625dbab5c81ef20b197aee5206f63611b673)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Gary Thomas
026d502b2a webkit-gtk: Apply work around for all PowerPC targets
The current patch for bug #1570 only applies to qemuppc but should be
applicable for all PowerPC targets.  Also update the patch so that
only one language backend, either ICU or PANGO, is built.

Also remove some old customizations (dependencies on darwin) as these
should now be handled in a layer specific .bbappend file.

(From OE-Core rev: 87eae0851e5334734df40a833596c6cbc6715f7f)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Richard Purdie
e81f7c6152 openjade-native: Ensure we reautoconf the package
Currently, since configure.in is in a subdirectory, we don't reautoconf the
recipe. We really need to do this, to update things like the libtool script used
and fix various issues such as those that could creep in if a reautoconf is
triggered for some reason. Since this source only calls AM_INIT_AUTOMAKE to gain the
PACKAGE and VERSION definitions and that macro now errors if Makefile.am doesn't
exist, we need to add these definitions manually.

These changes avoid failures like:

----
| ...
| DssslApp.cxx:117:36: error: 'PACKAGE' was not declared in this scope
| DssslApp.cxx:118:36: error: 'VERSION' was not declared in this scope
| make[2]: *** [DssslApp.lo] Error 1
----

(From OE-Core rev: 87753615435c8aec7df5964045e24f13877cd7cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-21 11:59:32 +01:00
Paul Eggleton
119e1b7dc9 poky.conf: use correct version string for Ubuntu 12.04
Since it is an LTS release, the final version string was not
"Ubuntu 12.04" but "Ubuntu 12.04 LTS", so use this when doing the tested
host distribution check.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:19 +01:00
Paul Eggleton
5b3a0eac61 hob: handle sanity check failures as a separate event
In order to show a friendlier error message that does not bury the
actual sanity error in our typical preamble about disabling sanity
checks, use a separate event to indicate that sanity checks failed.

This change is intended to work together with the related change to
sanity.bbclass in OE-Core.

Fixes [YOCTO #2336].

(Bitbake rev: 24b631acdaa143a4de39c6e1328849660c66f219)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:11 +01:00
Kang Kai
4c4924ad1b cooker.py: terminate the Parser processes
[Yocto 2142]

When Hob is forced to exit while it is parsing recipes, bitbake doesn't stop.
It hangs on function BitBakeServerConnection::terminate in file
server/process.py:
    else:
        self.procserver.join()
It is waiting for the child processes to quit.

During the recipe parsing stage, BBCooker spawns as many Parser processes as
there are CPUs. When the Parser processes quit, they make their internal
Queue call cancel_join_thread() to avoid blocking, but that doesn't work at
this point.
So force the Parser processes to terminate.

(Bitbake rev: bebef58b21bdff7a3ee1fa2449b7df19144f26fd)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:27:05 +01:00
Shane Wang
9e6d1101b4 Hob: Adjust the progress bar and set 100% only when all is done.
After parsing recipes, Hob will populate recipes and packages, which is probably
time consuming. So, this patch adjusts the progress bar and ensures 100% is
set if and only if all populations are done.

The patch also fixes the "weird 18 second delay when parsing recipes" on build
appliance: Hob is doing something, but the progress bar shows 100% and waits there.

[Yocto #2341]

(Bitbake rev: 2c4a21dc8a588c8cf05549ddd9734731a46bea10)

Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:55 +01:00
Scott Rifenbark
2bddf70a84 documentation/poky.ent: Updated variables for correct 1.2.1 build.
Key variables are DISTRO at "1.2.1", YOCTO_DOC_VERSION at "current",
and POKYVERSION at "7.0.1".  Note that I have to change "current"
to "1.2.1" before publishing any manuals prior to the official release
of 1.2.1.

(From yocto-docs rev: e62e0baec71c9d39473a9c67caf17f26346539d5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:23 +01:00
Scott Rifenbark
6698060d8e documentation/bsp-guide/bsp.xml: Review comments to recommendations
I added a small review comment to the section based on reviewer
feedback.

(From yocto-docs rev: 206d43c23efa114b57a1e75e469a6f5bdaf94715)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
bd3cd64da3 documentation/bsp-guide/bsp.xml: Updates to requirements section
Implemented review feedback from Dave Stewart and Tom Zanussi.

(From yocto-docs rev: 774e00d34d2abd466a6d64b4b91f60d87203add4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Scott Rifenbark
c88f25ddb4 documentation/bsp-guide/bsp.xml: BSP recommendations section added
Added the "Requirements and Recommendations for Released BSPs"
section.  This section was requested by Dave Stewart based on
community input for direction on how to create a BSP that was
compliant with the Yocto Project.  The input for the section came
from Tom Zanussi.

A spell check was also performed prior to this commit, addressing
a few spelling issues across the file.

(From yocto-docs rev: 6357eb7a26abb3dca14daf5d9b9a4e245dd0827b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:26:22 +01:00
Laurentiu Palcu
ef215694de freetype: upgrade to 2.4.9
(From OE-Core rev: 7c767d3723e0b55d3bcd3864a9cdbce6d11d5b35)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:50 +01:00
Paul Gortmaker
71a6fb605a gitignore: add wildcard to match toplevel patch files
To support the basic workflow of trivial patches:

 git format-patch HEAD~.. ; git send-email --to foo@bar.com 0001-foo.patch

We don't want git status reporting on patches lying in the top
level dir in this case.

Cc: Richard Purdie <richard.purdie@linuxfoundation.org>
(From OE-Core rev: 7e32cbf30352e12c55c3c378631f4e238cf682c5)

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Joe Slater
bfc8589048 shared-mime-info: fix build race condition
The definition of install-data-hook in Makefile.am leads
to multiple, overlapping executions of the install-binPROGRAMS
target.  We modify the definition to avoid that.

(From OE-Core rev: d8a09cb17f2f3b43718ba354da7368a2ed793766)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Song.Li
6514e193ac groff: Fix build on Fedora 17
Distros generally keep perl at /usr/bin/perl, but Fedora 17 also has
/bin/perl. This causes the groff_1.20.1 build to record the perl
interpreter path as /bin/perl, while we set the perl location for the
target to /usr/bin/perl.

This mismatch of perl paths causes rootfs image creation to fail
like this:

| error: Failed dependencies:
|       bin/perl is needed by groff-1.20.1-r1.ppc603e

(From OE-Core rev: 75824ff13f43b330b11cf9a130f061baee785e1a)

Signed-off-by: Song.Li <song.li@windriver.com>

Sync up with the do_install_append_virtclass-native chunk.

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Jesse Zhang
44fb9daa81 beecrypt: disable java
If java is installed on the host, beecrypt will attempt to use it.

(From OE-Core rev: 4d2ff0a69692f54313ffa9dc83d0e4a2ddba47c3)

Signed-off-by: Jesse Zhang <sen.zhang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:49 +01:00
Scott Garman
309b2c090e runqemu-ifup: enable arp proxying
This allows core-image-sato to access the WAN.

Thanks to Dexuan Cui for proposing this fix.

Fixes [YOCTO #2329]

(From OE-Core rev: 680a94c378f20c00e8bee0575b8922bccc008fec)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
67c7bc5e6c gnupg: disable CCID driver
The CCID driver is apparently unnecessary, so disable it.

Also remove the associated libusb dependency, since that won't be
needed either.

According to Scott Garman <scott.a.garman@intel.com>:

I'd just note that the CCID smartcard reader is a specific piece of
hardware that is unlikely to be used in a majority of our use cases.

(From OE-Core rev: 2fcd564b5395950f480a288d434c64c8fee65ece)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Tom Zanussi
e06e502bbb gnupg: add libusb to DEPENDS
gnupg apparently depends on libusb:

| error: Failed dependencies:
| 	 libusb-0.1-4 >= 0.1.3 is needed by gnupg-2.0.18-r1.core2

So add libusb to gnupg DEPENDS.

(From OE-Core rev: 1a76f50c1f159477a86dc7a6cb95873cee05d9e6)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Resolved merge conflicts when importing from oe-core master.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Zhai Edwin
89c0e81273 webkit-gtk: Use glib as unicode backend to avoid browser crash
webkit-gtk depends on ICU for unicode support, but ICU is not safe when
the build and target systems have different endianness. The ICU
community has not been responsive about producing a patch for this, so
glib is used as a workaround here.

Fixes [YOCTO #1570].

(From OE-Core rev: df83a9480ba7b2fd2bcc0a92932d51434d7795a0)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Paul Eggleton
78a5471a29 classes/sanity: send sanity check failure as a separate event for Hob
In order to show a friendlier error message within Hob that does not
bury the actual sanity error in our typical preamble about disabling
sanity checks, use a separate event to indicate that sanity checks
failed.

This change is intended to work together with the related change to
BitBake, however it has a check to ensure that it does not fail with
older versions that do not include that change.

Fixes [YOCTO #2336].
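
A hedged sketch of the compatibility check described above; the event
name SanityCheckFailed and the raise_sanity_error fallback are
assumptions standing in for the actual class code:

    import bb.event

    def report_sanity_failure(msg, d):
        # Use the dedicated event when this BitBake provides it, so Hob
        # can show the raw error; otherwise fall back to the old path.
        if hasattr(bb.event, 'SanityCheckFailed'):
            bb.event.fire(bb.event.SanityCheckFailed(msg), d)
        else:
            raise_sanity_error(msg)  # stand-in for the pre-existing path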

(From OE-Core rev: 3788f9bcb36cca90ca8cf650c9d33f5485e3087b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Joshua Lock
b30a243f3f sanity.bbclass: copy the data store and finalise before running checks
At the ConfigParsed event the datastore has yet to be finalised and
thus appends and overrides have not been applied. To ensure the sanity
check runs against the configuration values the user has actually set,
call finalize() on a copy of the datastore and pass that to all sanity
checks.
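
A minimal sketch of the idea, assuming the 2012-era BitBake datastore
API where a copy exposes finalize(); the handler shape and the
check_sanity call are illustrative:

    import bb.data

    def config_parsed_handler(e):
        # Run the checks against a finalised copy so appends and
        # overrides are applied, without mutating the live datastore.
        sanity_data = bb.data.createCopy(e.data)
        sanity_data.finalize()
        check_sanity(sanity_data)  # stand-in for the sanity entry point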

(From OE-Core rev: 527e26ea1e44f114fc9fcec1bc7d83156dba1a70)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:48 +01:00
Richard Purdie
20657c1fa0 image.bbclass: Ensure ${S} is cleaned at the start of rootfs generation
Some image classes such as bootimg save files into ${S} as part of rootfs
generation. For correctness we should therefore clean this at the start of
image generation to ensure reproducibility.

I found this issue when some files I thought should disappear from my rootfs
would not disappear.

(From OE-Core rev: 23b7d7dab475caca4558e3b20db534122bee1525)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Mihai Lindner
3029a08744 sudo: fixed wrong chmod path
Place $D between braces (${D}) so that it is correctly expanded to the
workdir path instead of a path relative to the host rootfs. Currently,
"bitbake sudo" fails on host systems where sudo is not installed.

(From OE-Core rev: 83c5acfe4731990c296be1bf67059452a72f9584)

Signed-off-by: Mihai Lindner <mihaix.lindner@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
d376a4e8f1 bitbake.conf: Improve wget timeouts
The wget default is a 900 second timeout and 20 retries. This is way
too long for most of our use cases, so this patch changes it to a 30
second timeout and reduces the retries from 5 to 2. We have good mirror
infrastructure, so this lets us fall back to it more easily.

(From OE-Core rev: dbb88617576ea9bbeec08f5e5e15c26c4c18347f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Xiaofeng Yan
0d0846e06f ncurses: Avoid occasional build failure with parallel processable tasks
The non-gplv3 ncurses build occasionally fails (a race issue) with
error information like the following:

| tic: error while loading shared libraries: /srv/home/pokybuild \
/yocto-autobuilder/yocto-slave/nightly-non-gpl3/build/build/tmp/\
work/x86_64-linux/ncurses-native-5.9-r8.1/ncurses-5.9/narrowc/lib\
/libtinfo.so.5: file too short
| ? tic could not build /srv/home/pokybuild/yocto-autobuilder/\
yocto-slave/nightly-non-gpl3/build/build/tmp/work/x86_64-linux/\
ncurses-native-5.9-r8.1/image/srv/home/pokybuild/yocto-autobuilder\
/yocto-slave/nightly-non-gpl3/build/build/tmp/sysroots/x86_64-linux\
/usr/share/terminfo
| make[1]: *** [install.data] Error 1

This is a race issue which is caused by
install.libs and install.data:

1) install.data needs run tic
2) tic needs libtinfo.so
3) install.libs would regenerate libtinfo.so
4) but install.data doesn't depend on install.libs, and they can run
   in parallel

So errors can occur in a very narrow window: tic begins to run at the
same time that install.libs is regenerating libtinfo.so, so the
libtinfo.so it loads is incomplete, which leads to the above error.

Make install.libs run before install.data to fix this bug.

[YOCTO #2298]

(From OE-Core rev: 6993570787a97fbca5ea81513b0120c6d7563484)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Robert Yang
59ac33c77f rpm 5.4.0: respect the arch when choosing the alternatives
There is a bug if we:
1) bitbake diffutils with MACHINE=crownbay
2) bitbake diffutils with MACHINE=qemux86
3) bitbake core-image-sato with MACHINE=crownbay

Then diffutils.i586 would be installed into the crownbay image. This is
because diffutils.i586 is newer than diffutils.core2, and rpm doesn't
respect the arch priorities:

We have put the archs in order in _solve_dbpath:

crownbay/solvedb:core2/solvedb:i586/solvedb:all/solvedb

Fix rpm to respect the order: for example, if it finds a pkg in both
core2/ and i586/, and core2/ comes first, it should not use the one
in i586/ even if its build time is newer.

Note: don't worry about the _free(*ptr); it checks whether ptr is
NULL or not.

This is for the denzil branch, and the master branch also needs it.

[YOCTO #2360]

(From OE-Core rev: 2199e6b9c82bb2b6738e87903f30329586db20e2)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-14 11:23:47 +01:00
Richard Purdie
3cb36a5ed9 Update version to 1.15.2 (corresponding to Yocto 1.2 release)
(Bitbake rev: 270a05b0b4ba0959fe0624d2a4885d7b70426da5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:10:07 +01:00
Liming An
75e32007ef Hob: show the original URL in the hyperlink tooltip for the user
When no browser is available, such as when running in the 'Build
Appliance', the user can't open the hyperlink, so add this workaround.
(Checking whether a browser is available is hard across different
systems and browser types.)

[YOCTO #2340]

(Bitbake rev: 02cc701869bceb2d0e11fe3cf51fb0582cda01b0)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:59 +01:00
Liming An
d52e74cee9 Hob: slow down the refresh icon so it reads clearly
The arrow icon refreshes so fast that it creates the illusion of
spinning backwards, so slow it down.

[YOCTO #2335]

(Bitbake rev: ac4a8885fafdc0d1e79831334ead9a8ddb6e2472)

Signed-off-by: Liming An <limingx.l.an@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:52 +01:00
Dongxiao Xu
f1630d3cd4 Hob: Clear the building status if command failed
We may hit a command failure during build time, for example due to
running out of memory. In this case, we need to clear the "building"
status.

This fixes [YOCTO #2371]
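
A sketch of the sort of handling this implies on the Hob side;
bb.command.CommandFailed is the BitBake event that signals a failed
command, while the handler and helper names here are hypothetical:

    import bb.command

    def handle_event(self, event):
        # On a failed command, drop out of the "building" state so the
        # UI becomes usable again, and surface the reason to the user.
        if isinstance(event, bb.command.CommandFailed):
            self.set_build_state("idle")                  # hypothetical helper
            self.show_error(getattr(event, "error", ""))  # hypothetical helper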

(Bitbake rev: 283dbbbf5d34adb4c9e3aa87e3925fdebe21ff42)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:42 +01:00
Tom Zanussi
b7f1a8f870 yocto-bsp: clarify help with reference to meta-intel
The current yocto-bsp help assumes knowledge that the meta-intel layer
needs to be cloned before it's put into the BBLAYERS.  Avoid the
guesswork and state the details explicitly in the help.

Also, the shorter 'usage' string doesn't mention it at all; it would
help to at minimum mention it and refer the user to the detailed help.

Fixes [YOCTO #2330].

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:31 +01:00
Tom Zanussi
015f117d85 yocto-kernel: use BUILDDIR to find bblayers.conf
The current code assumes that builddir == srcdir/build, which is not
always the case.  Use BUILDDIR to get the actual builddir being used.

Fixes [YOCTO #2219].
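
A minimal sketch of the lookup this describes; BUILDDIR is the
environment variable exported by the build-environment setup script,
and the error handling shown is purely illustrative:

    import os

    def find_bblayers_conf():
        # Use the build directory exported by the environment setup
        # rather than assuming it sits at <srcdir>/build.
        builddir = os.environ.get("BUILDDIR")
        if not builddir:
            raise RuntimeError("BUILDDIR is not set; source the build environment first")
        return os.path.join(builddir, "conf", "bblayers.conf")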

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:24 +01:00
Richard Purdie
b8338046ba netbase: Correctly set FILESEXTRAPATHS to include the version
[YOCTO #2366]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-06-05 23:09:10 +01:00
Scott Rifenbark
7552ccd06c documentation/yocto-project-qs/yocto-project-qs.xml: pre-built example fix
The example showing how to use pre-built images, the toolchain, and
filesystem was off a bit.  I changed some wording to indicate using the
.ext3 filetype of the filesystem.  Previously it talked about expanding
the tarball version but the example has been changed to use .ext3.
Also, the environment setup file has been mis-named forever.  It should
have i586 in it and not i686.  And, finally, the image name does not
have a release number as part of the name.

(From yocto-docs rev: 97ed79993dd3e2eede4807482e15633b66b99f49)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:29 +01:00
Scott Rifenbark
e24d5cc2cd documentation/dev-manual/dev-manual-bsp-appendix.xml: .bbappend example
The linux-yocto_3.2.bbappend example was out of date.  There is no
longer a kernel features statement in the last part of the section.
Only COMPATIBLE_MACHINE, KMACHINE, and KBRANCH remain.  I removed
the fourth one from the text description and the example code.

(From yocto-docs rev: 89a11ce3c2a43e2d7c26599976d906011130131f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
1f2fc974df documentation/dev-manual/dev-manual-bsp-appendix.xml: Added note
Added a note telling the user that the commit ID strings in the
example might not match the actual commit ID strings found in
the .bbappend file.

(From yocto-docs rev: 0477122c42eaf6d5e18e28a2356fe58c1070c608)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a473ba170d documentation/yocto-project-qs/yocto-project-qs.xml: Minor wording changes
(From yocto-docs rev: 528e34b1694739396295b769cc6f83d58dd3bf59)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
a5fe09c6aa documentation/dev-manual/dev-manual-kernel-appendix.xml: Kernel example fixed
Due to a bug (2256) the example that changes the kernel configuration
through menuconfig did not work.  I have re-written the section
to now start with the default behavior of CONFIG_SMP=y and then
have the user change the configuration to where it is not set.

The changes include the reversing of the flow and the work-around
needed due to bug 2256.

(From yocto-docs rev: 2eaaafab0390d1108b212b9cfb7ca8365e0f39a9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Scott Rifenbark
2072256b05 documentation: Added 1.2.1 manual entry information.
Added 1.2.1 manual history entry to five manuals.  The date
is to be determined.

(From yocto-docs rev: bb920814d5adaa24d37fbcefd85de2ba93ddf604)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:28 +01:00
Paul Eggleton
b623203ac9 documentation/yocto-project-qs/yocto-project-qs.xml: Setup changes
Remove mercurial as this is no longer needed.
linuxdoc-tools was mentioned twice in the CentOS list.
We no longer support Fedora versions older than 15 so remove this note.

This commit applies to 1.2, 1.2.1, and 1.3.

(From yocto-docs rev: 1347f92c49e61a42aa51e5c1ffccde88a449a4fb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Scott Rifenbark
c7e4a6ae2c documentation/Makefile: Fixed figures publishing bug
I discovered a bug when publishing documents.  There are two scp
commands that copy a document's files and figures to the appropriate
directory in the srifenbark@yocto-www:~/www.yoctoproject.or-docs
server where the manuals are published.  The second scp command
had a "/figures" at the end.  This was causing a new "figures"
directory to be created within the "figures" directory.  This
redundancy shows up as missing figures in the manuals if a new figure
or changed figure is ever added to the book after initial
publishing.  I removed the extra "/figures" at the end of the scp
command.

(From yocto-docs rev: 5ab530f998427405a0486b94ca76cff58a4cf463)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:22:27 +01:00
Andrei Gherzan
1628159028 fotowall: Add #include ui_wizard.h to ExportWizard.cpp
App/ExportWizard.cpp depends on wizard.h, which depends on ui_wizard.h.
The latter must already be generated before ExportWizard.cpp is
compiled.

[YOCTO #2297]

(From OE-Core rev: d7bf94647f17c0382caad8af0bdda837b14b22dc)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
b477f676e3 classes/mirrors.bbclass: Point snapshot.debian.org mirror to working location
If you point to snapshot.debian.net/archive/pool, it will fetch an HTML
page, which ends up as a corrupt download. The archive locations have
changed, so point the mirror to the right location.

(From OE-Core rev: d5574749b2272357f6bdad04c37ec0657b391cca)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:14 +01:00
Khem Raj
9d2534ab24 uclibc.inc: uclibc rtld does support GNU_HASH
(From OE-Core rev: 090d8a687517c2d4deb33295a3cceb5175aa28f3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Khem Raj
df815f20c8 openssl: Fix build for mips64(el)
(From OE-Core rev: 8c74ddf5fd5502fd759f310096e9013fad0ca4db)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Zhai Edwin
b64eefe2bb pango: Fix modules load failure in multilib environment
The multilib variants of Pango need different modules, and thus
different config files and utils. This patch separates the config file
and utils using different MLPREFIX values to avoid conflicts.

Fixes [YOCTO #2356].

(From OE-Core rev: 535e298b98182d95c3280d2d46aa6388e27aac40)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Richard Purdie
4abd299bf0 sed: Explicitly disable acl for deterministic builds
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:13 +01:00
Dongxiao Xu
30c3c8420e qt4: move functions from python to shell style
qt4's do_configure operation refers to some variables that are derived
from 'd'; however, these variable values may not be correct in the
multilib case since the extraction of these variables happens before
the multilib handler runs.

The fix is to move these python-style functions back to shell style.

This fixes [YOCTO #2355]

[RP: Fix whitepace]
(From OE-Core rev: 98cb2efe4e9f3092d531c9fc809406c3ef559725)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
[SG: Resolve merge conflicts for 1.2.1]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Darren Hart
a74fb01b6b initrdscripts: Update install.sh to work with mmc devices
Fixes [YOCTO #2385]

The installer only searches for hd[ab] and sd[ab]. Some newer BSPs have mmcblk
devices that should be used as the install target. These devices also have a
partition prefix (mmcblk0p1 instead of mmcblk01). As they are detected
asynchronously, it is necessary to add the rootwait kernel parameter to avoid
a race condition trying to mount the root device.

As BSPs like the FRI2 and the sys940x have mmc devices and will have a 1.2
release, we should push this to 1.2.1. The changes are perfectly contained and
easily verified.

Test for an mmcblk device and add the p partition prefix if necessary. Add the
rootwait kernel parameter when an mmcblk device is detected.  Replace the series
of explicit umount commands with a single umount using a wildcard. This will
find all the partitions and will not try to unmount non-existent devices. Avoid
copy and paste errors by replacing /dev/${device}${pX} references with the
previously assigned rootfs, bootfs, and swap variables.

These changes have been tested on the FRI2 Sato image which installed to
/dev/mmcblk0 as well as the N450 Sato image which installed to /dev/sda. Both
were successful.
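
The installer itself is a shell script; the naming rule it needs is
small enough to illustrate with a Python sketch (function names are
hypothetical, only the mmcblk behaviour above is assumed):

    def partition_node(device, number):
        # mmcblk devices put a 'p' between the device name and the
        # partition number (mmcblk0p1), unlike sda1 or hdb2.
        prefix = "p" if device.startswith("mmcblk") else ""
        return "/dev/%s%s%d" % (device, prefix, number)

    def extra_kernel_args(device):
        # mmc devices are detected asynchronously, so rootwait is
        # needed to avoid racing the root mount at boot.
        return "rootwait" if device.startswith("mmcblk") else ""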

(From OE-Core rev: 36634e16c0a0c80674bacf20f9841e3b042bd5fd)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Paul Eggleton
c003c04590 buildhistory: fix multiple commit of images and packages at the same time
The echo line here was merging multiple lines into one, and the result
was that if both image and package changes had to be committed then only
the image changes were being committed and the package changes could
potentially be merged into the next package change. Quoting the variable
reference fixes this.

Fixes [YOCTO #2411]

(From OE-Core rev: 540cd9d42a4db562e5eca431cec89ac5a6a05cab)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Saul Wold
35cc0b023f builder: Add Please Wait Dialog Box
Add a dialog box while bitbake starts Hob, informing the user to please
wait for the Hob screen to become visible.

(From OE-Core rev: e9239e4250ef140920847f78625cc7206763c32c)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:12 +01:00
Nitin A Kamble
c2826b50ce quilt: fix perl path in target perl scripts
While building on distros like Fedora 17, which also have /bin/perl,
the target perl scripts get their perl interpreter path recorded as
/bin/perl, which is not the correct path of perl on the target.

This commit avoids the following error:

| error: Failed dependencies:
|       /bin/perl is needed by quilt-0.51-r2.i586
NOTE: package core-image-sato-sdk-1.0-r0: task do_rootfs: Failed
ERROR: Task 8
(/home/nitin/prj/poky.git/meta/recipes-sato/images/core-image-sato-sdk.bb,
do_rootfs) failed with exit code '1'

(From OE-Core rev: c8c394bd806978c867f2fe82e4bde65c98764880)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Saul Wold
75225bcc84 boost: Ensure we use our user-config.jam file
This change ensures we use the user-config.jam Configuration
that we created and will not use anything from the user's home
directory.

[YOCTO #2302]

(From OE-Core rev: f246e467b8513f1c1c33b5e7462ae6478754d531)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Mark Norman
d3204ddc12 uclibc SDK not including libpthread_nonshared.a
Modified the uclibc PACKAGES list order to ensure the uclibc-dev package is
processed before uclibc-staticdev to allow *_nonshared.a libraries to be
packaged in the uclibc-dev package.  The *_nonshared.a libraries are required
by the SDK.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
7c7ac8548d runqemu-ifup: enable ip masquerading for QEMU NAT addresses
Fix the IP masquerading settings so that networked QEMU sessions can
reach external networks.

This is a partial fix for [YOCTO #2329].

(From OE-Core rev: 78c7a82a2e3214eaec3c559269e3cc6c219759c0)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:11 +01:00
Scott Garman
bbf95cae4c openssl: upgrade to 1.0.0i
Addresses CVE-2012-2110

Fixes bug [YOCTO #2368]

(From OE-Core rev: 51a122a5593c62d7ffd07f860e54a2fb0327959c)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Scott Garman
3df821277d libpng: upgrade to 1.2.49
License hasn't changed, just updated the md5 checksums due to trivial
date changes within the text (and the position of the license text
within png.h).

Addresses CVE-2011-3045

Fixes [YOCTO #2352]

(From OE-Core rev: 6e2235a4d769b16ebf68d6bbed56d8bcc0e0c83f)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
4f4685469a python: Add patch to search for db.h in inc_dirs and remove warning
python should search for db.h in inc_dirs and not in a hardcoded path.
If db.h is found but HASHVERSION is not 2, we avoid a warning by not
adding this module to the missing variable.

[YOCTO #1937]

(From OE-Core rev: 8eb3e52d39147f8cb98ec95857be17db0444098e)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andrei Gherzan
d1bc1191d6 python: Add patch for 64bit platform
This patch was added for 64-bit host machines. During the compile
process python checks whether the platform is 64-bit using sys.maxint,
which reflects the host's value. The patch fixes this so that python
checks whether the TARGET machine is 64-bit, not the HOST machine. This
way the "dl" and "imageop" modules are still built when the HOST
machine is 64-bit but the target machine is 32-bit.

[YOCTO #1937]
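
A hedged sketch of the distinction the patch draws; this is not the
actual setup.py change, and the environment variable used for the
target check is purely illustrative:

    import os
    import sys

    def host_is_64bit():
        # What stock setup.py effectively tests: the host interpreter's
        # word size (Python 2 era, hence sys.maxint).
        return sys.maxint > 2 ** 32

    def target_is_64bit():
        # What a cross build needs instead: the target's word size,
        # taken here from an illustrative environment variable.
        return os.environ.get("SIZEOF_LONG", "4") == "8"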

(From OE-Core rev: 22ae3959f40845ebcc00413ccf733539472a1a81)

Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:10 +01:00
Andreas Oberritter
5c507a2fd7 {kernel, module}.bbclass: don't run depmod for module packages during do_rootfs
* depmod already gets executed by pkg_postinst_kernel-image.

* If you build a module using module.bbclass, pkg_postinst returns 1 in
  do_rootfs, causing pkg_postinst to run again on first boot. To improve
  this situation, I copied pkg_postinst from kernel.bbclass to module.bbclass.
  This was rejected by Koen, because he doesn't like the code from
  kernel.bbclass, which uses ${STAGING_DIR_KERNEL}. Richard then suggested
  that calling depmod during do_rootfs wasn't necessary at all, because
  it already gets done by kernel-image.

(From OE-Core rev: 663b4be025283a30adb823760ce9d9a056106bcf)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Richard Purdie
bf4740cf66 utils.bbclass: Testing via env in create_wrapper is a nice idea but breaks things
For example, pseudo-native wants to set LD_LIBRARY_PATH, but setting
this into the environment here causes the existing pseudo (running
during do_install) to poke into paths in /opt, and this breaks builds.

The simplest fix is simply not to do this. Comment tweaks to match the
code.

(From OE-Core rev: 915769c405e24751eae613e9ef55f05490a726de)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Nitin A Kamble
24ffb5c0b1 libproxy: fix compilation with gcc 4.7
(From OE-Core rev: 6689c52eb13430593d6afe48dba3973467cd2404)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Paul Eggleton
a92fed4fe5 dpkg-native: fix deb-based rootfs construction failure on Fedora 16
Backport a fix from 1.16.x upstream to use fd instead of stream-based
I/O in dpkg-deb, which avoids the use of fflush() on an input stream
(the behaviour of which is undefined by POSIX, and appears to have
changed in the version of glibc introduced in Fedora 16 and presumably
other systems).

Fixes [YOCTO #1858].

(From OE-Core rev: b1c28667592e736115ab5e603a12c2723b939cf2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:09 +01:00
Saul Wold
a518e1e3b1 quilt: move empty quiltrc to native sysconfdir
patch.bbclass originally pointed at /usr/bin/quiltrc for an empty
version to ensure that no user settings were picked up. Change this
to /etc/quiltrc in the native sysroot since we now have a native
sysconfdir.

Make sure that the quiltrc is actually installed in the native
sysconfdir, not the target one, fixing this up after the recipe split.

(From OE-Core rev: aec4cdc6efda430a0965d6b3b4f84c7943390273)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
fc9716930a gcc: Add plugins package for ARM, fix /usr/include packaging
WARNING: For recipe gcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/include
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm
WARNING:   /usr/lib/gcc/arm-poky-linux-gnueabi/4.6.4/plugin/libgcc/config/arm/bpabi-lib.h
(From OE-Core rev: cf49cf3958b24fdb89d57abbf1f1b30c07a06030)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
f99ced96cf xserver-kdrive: Add xkb to existing docs list
WARNING: For recipe xserver-kdrive, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/share/man
WARNING:   /usr/share/man/man5
WARNING:   /usr/share/man/man1
WARNING:   /usr/share/man/man1/Xephyr.1
WARNING:   /usr/share/man/man1/Xserver.1
(From OE-Core rev: 515cbe565b684359ac9d8bd0fb523aa3d2f810e2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
d0f0d1b41d libgcc: Package additional *crt*.o files for PPC
WARNING: For recipe libgcc, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrti.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ecrtn.o
WARNING:   /usr/lib/powerpc-poky-linux/4.6.4/ncrtn.o
(From OE-Core rev: 580d734ddc928aaaac9acaa248427b01731074f2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:08 +01:00
Saul Wold
77203b75f5 binutils: add embedspu for ppc builds
WARNING: For recipe binutils, the following files/directories were installed but not shipped in any package:
WARNING:   /usr/bin/embedspu
(From OE-Core rev: 15c8ea4d35edbcaf03c94aba06ded85851679157)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Ken Werner
a0f1aca7a0 bdwgc: Set ARM_INSTRUCTION_SET to "arm"
The bdwgc recipe uses a version of libatomic that fails when building in Thumb
mode. This has been fixed upstream already. The
pulseaudio/libatomics-ops_1.2.bb has the same issue and sets the
ARM_INSTRUCTION_SET to "arm" (probably until a new version gets pulled in).
This patch applies the same workaround to the bdwgc/bdwgc_20110107.bb recipe.

(From OE-Core rev: 544fe63b6a861129ea15f4cd37952e513ab0013e)

Signed-off-by: Ken Werner <ken.werner@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Darren Hart
b3de1f1140 gthumb: Disable parallel make for gthumb install
With PARALLEL_MAKE set to 14, I frequently see the gthumb do_install
task hang. Make is spinning at 100% CPU and the build makes no
more progress.

The following work-around proposed by Richard Purdie allows progress
to be made.

(From OE-Core rev: e933129ddb8ae55d618b5875fca26bc46fcb100b)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Joshua Lock <josh@linux.intel.com>
CC: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Andreas Oberritter
6e93ac2581 python: use PKGSUFFIX for libpython2
* python-nativesdk shouldn't provide libpython2, but
  libpython2-nativesdk.

(From OE-Core rev: 260dfd9ccbf7d1e0ed60256aaf80fed5bf0c24e2)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>

[PR Bump - sgw]

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:07 +01:00
Otavio Salvador
c1bfbf7168 connman: backport test script fixes
Those fixes are required to get the test scripts to work with current
0.79 DBus API.

(From OE-Core rev: aadeb3199d1b34369b63810696b9d61a86afb31d)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Khem Raj
5a7d852a94 connman: Fix linking with gold linker
Fixes errors like below

/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__start___debug' is not defined locally
/home/kraj/work/angstrom/build/tmp-angstrom_2010_x-eglibc/sysroots/x86_64-linux/usr/libexec/armv5te-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/4.6.3/ld:
error: hidden symbol '__stop___debug' is not defined locally
collect2: ld returned 1 exit status
make[1]: *** [plugins/loopback.la] Error 1

(From OE-Core rev: 3e6e97b40f8cb9568993c5cc65d73189ec6b7b8a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-31 21:15:06 +01:00
Scott Rifenbark
3bf8069100 documentation/yocto-project-qs/yocto-project-qs.xml: added quotes
Need quotes around the INHERIT statement.

Reported-by: Frans Meulenbroeks <fransmeulenbroeks@gmail.com>
(From yocto-docs rev: b040ab0cf8e776c04fc787ba09722327b60913f2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
cbd192a6c5 documentation/dev-manual/dev-manual-kernel-appendix.xml: grammar fix
(From yocto-docs rev: 8f62155b56f82c705f05585d2ab68d4a4af5a501)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
6d22ae627b documentation/dev-manual/dev-manual-kernel-appendix.xml: Removed KMACHINE
The example that compiles the altered code will not run now when the
KMACHINE statement is in the linux-yocto_3.2.bbappend file.  I have
commented it out of the book.

(From yocto-docs rev: 112a10a32ba3d7b24f22e25e39202b717571cbf0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
49a58c65b6 documentation/dev-manual/dev-manual-kernel-appendix.xml: updated KSRC
The KSRC example needs "_3_2" at the end of the variable.

(From yocto-docs rev: 99bf77dd648b28c2d425d23215383b7c733b054d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:36 +01:00
Scott Rifenbark
06cde35657 documentation/dev-manual/dev-manual-kernel-appendix.xml: added quotes
Turns out the KSRC_linux_yocto ?= /home/scottrif/linux-yocto-3.2.git
statement in the linux-yocto_3.2.bbappend file in poky-extras needs
quote characters around the pathname.  I updated the example
statement.

(From yocto-docs rev: bcdb8d230f20bf69567380d562c991ff6eeb41cd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
196a62b50c documentation/dev-manual/dev-manual-kernel-appendix.xml: kernel machine update
Found two instances of "yocto/standard/common-pc/base".  These should
now be "standard/default/common-pc/base".

(From yocto-docs rev: d710bc868409ad21bdf9e63c042ec40b0d305ad0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
8ddaa3ede8 documentation/dev-manual/dev-manual-kernel-appendix.xml: 3.0 to 3.2
The kernel used for the example is now linux-yocto_3.2.  I changed
all occurrences of yocto_3.0 over to yocto_3.2.

(From yocto-docs rev: c3585a0fec0381a88071004660ab96016f9674e2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
52ccf5a9eb documentation/dev-manual/dev-manual-kernel-appendix.xml: formatting fix
(From yocto-docs rev: 1d1a5059163749b5adecf9432ffc5e2f2207acf4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
b2c9b25f97 documentation/dev-manual/dev-manual-kernel-appendix.xml: altered example
The example code with the printk statements needed to be altered,
and the wording supporting the example was modified to be more
generic.

(From yocto-docs rev: 4d03fe2e08dbdcab438aae551e9696e11a3e4477)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:35 +01:00
Scott Rifenbark
5a1fb95a8d documentation/dev-manual/dev-manual-kernel-appendix.xml: updated cpuinit
Looks like calibrate_delay(void) changed in the example.  Updated
to the most recent code.

(From yocto-docs rev: 402af7d379b0df5e97b1863aa627aad98ceb5e6f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
22b9983cc7 documentation/dev-manual/dev-manual-kernel-appendix.xml: output updated
Updated the example that sets up the bare clone.  The shell output
changed because the upstream repo changed to
"origin/standard/default/common-pc/base".

(From yocto-docs rev: 72077ca9e7db747cbccc4d9d8deabfa424c6147c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
7f5e6a1959 documentation/Makefile: Added denzil specific .PNG file logic
A new figure was needed in the Kernel appendix.  I added that
figure to the block of code that creates the tarball for the
denzil branch.

(From yocto-docs rev: cb3997cad15f7bf6f812f939a3fe4dcac955376d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
01025ad2c4 documentation/dev-manual: New figure just for denzil
New image needed for Denzil.  I created a new file named
"kernel-example-repos-denzil.png" and copied it to the
Figures folder.  I also deleted the
"kernel-example-repos.png" image.

(From yocto-docs rev: 150b4c01cce283ae8de29f51a2e4e7dcb60281ca)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:34 +01:00
Scott Rifenbark
71580376c9 documentation/dev-manual/dev-manual-bsp-appendix.xml: updated hddimg example
In the "Building and Booting the Image" section there is an example
.hddimg file.  I updated the file to be the actual file used during
the BSP example build.

(From yocto-docs rev: ce759fb3d4e5e22f0928cdd03c17c0b5d9f4167b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
0774a11505 documentation/dev-manual/dev-manual-bsp-appendix.xml: BBLAYER update
For 1.2 you evidently need to add $HOME/poky/meta-intel to your
bblayers.conf file.

(From yocto-docs rev: 05bb85dd133d8da0697cd4414b05dde2a636b737)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
4d2d5abd8b documentation/dev-manual/dev-manual-bsp-appendix.xml: wording updated
Wording for the linux-yocto_3.2.bbappend file updated to support the
addition of the KBRANCH variable.

(From yocto-docs rev: 6bd32650f1004055ac67157f96ab62abf5883047)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
884034b256 documentation/dev-manual/dev-manual-bsp-appendix.xml: mymachine.conf
Edited it to now include the PREFERRED_VERSION variable.

(From yocto-docs rev: 6ddd56cbcec752e27b2bbf0fc687af79b2249377)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
ee98021efe documentation/dev-manual/dev-manual-bsp-appendix.xml: recipes-kernel update
The section on changing recipes-kernel was way out of date.
I updated all the relevant changes.

(From yocto-docs rev: b9f954983447e45766a0bf785285c0591fe9d340)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
f52747d7a2 documentation/dev-manual/dev-manual-bsp-appendix.xml: 3.2 for 3.0
The kernel used is now 3.2 and not 3.0.

(From yocto-docs rev: 8ee757e0d4f97f7652de2c9ee1556c142920596a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:33 +01:00
Scott Rifenbark
1bf998fe41 documentation/dev-manual/dev-manual-bsp-appendix.xml: added layerdepends
The layer.conf file now uses a LAYERDEPENDS variable.  I added that
to the example.

(From yocto-docs rev: 09f4d9e74ceccb3053a36d2a3deed5cc3d3be157)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
1b3c00a34f documentation/dev-manual/dev-manual-bsp-appendix.xml: changed kernel
The kernel in mymachine.conf had to be changed from 3.0 to 3.2

(From yocto-docs rev: 8a385bfa11298251fd80445d6fd2da6034d6b9dc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
b3f870297e documentation/dev-manual/dev-manual-bsp-appendix.xml: Output updated
The output for creating and switching to the denzil branch
for meta-intel needed to be updated.

(From yocto-docs rev: 54602beb1aa56521c7f5812803724ff53bf11bf1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
93deb57c91 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
The variable substitution had to be changed from
"&DISTRO_NAME;-6.0.0.tar.bz2" to "&DISTRO_NAME;-&POKYVERSION;.tar.bz2".

(From yocto-docs rev: 8ed6cb5e2b56dee3fa8d127b449183ae141a9153)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
2883b754a1 documentation/dev-manual/dev-manual-bsp-appendix.xml: Added link
Created a link to the Yocto Project Files term.

(From yocto-docs rev: 32d7d7008ebcb0b25f77b855025c7059526b9694)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
86325bbc5d documentation/dev-manual/dev-manual-bsp-appendix.xml: typo corrected.
(From yocto-docs rev: 73eba4180162fcd6570ae90c6cac1b16088d4a01)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:32 +01:00
Scott Rifenbark
746d718f53 documentation/dev-manual/dev-manual-bsp-appendix.xml: Bad variable
Had to remove "poky-" from the front of this variable that resolves
to a YP Files top-level name from the tarball.

(From yocto-docs rev: d01d5bd6c4d1fd754d4fccc087d557058d6a5733)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
c498338197 documentation/dev-manual/dev-manual-model.xml: Fixed a bad link title.
(From yocto-docs rev: 0b59afe539b2adc3459c1e22404136d81250d292)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
ba554bd865 documentation/dev-manual/dev-manual-model.xml: Better wording.
(From yocto-docs rev: bb3fa5eeed2784b415d009ae07c39149adc1a147)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
0dda5d88a5 documentation/dev-manual/dev-manual-model.xml: Added BitBake
Throughout the documentation set I have referred to the YP build
system generically in order to avoid use of the "Poky" term.  Richard
has suggested that we refer to the actual thing that does the building.
So I have added BitBake to this particular sentence to refer to the
tool.

(From yocto-docs rev: 4d52fc9c8d1e1cbfca99590fcaa09392f5d235bf)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:31 +01:00
Scott Rifenbark
06f44161f1 documentation/dev-manual/dev-manual-model.xml: Fixed poor references.
(From yocto-docs rev: 91885c11cc33a10b3d65006304bf5a6ca748f13f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
4b4a018466 documentation/dev-manual/figures/kernel-overview-3.png: Removed file
This file was replaced by a release-specific file named
"kernel-overview-3-denzil.png".

(From yocto-docs rev: e9604111299d3699105225302c43a25e7b2730b1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
caf6532b51 documentation: release-specific figure needed for denzil in dev-manual
dev-manual/dev-manual-model.xml:  The Bare Clone and Copy of the Bare
Clone figures are out of date for denzil.  These needed to be
re-done so they use "linux-yocto-3.2.git" and "my-linux-yocto-3.0-work"
as the root names.  This presents a Makefile issue when making the
denzil and pre-denzil versions of the manuals.  Whenever you use a
different figure for a different release, you need to involve the
BRANCH variable in the Makefile.  This is necessary because you are
using different figures in the generated tarballs.  The set of figures
could be unique to the release.  The outdated figure is
"kernel-overview-3.png" and will eventually be removed (later commit).
I created a new figure named "kernel-overview-3-denzil.png" and used
that in the dev-manual-model.xml file.

documentation/Makefile:  I updated the Makefile to test for a
"denzil" release build and if so include the new file in the
generated tarball.  This commit adds the new .PNG file as well.
Fixed the Makefile so that if you don't supply a BRANCH value, it
uses the latest figures (denzil).

(From yocto-docs rev: 49552b12a967f97eb4d75477895bf32f61d69aa6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
a81cb954bb documentation/dev-manual/dev-manual-model.xml: Wording change
Changed the Note wording to work with the list and not be specific
to a number of supported kernels.

(From yocto-docs rev: a6ffe0834c0ed76ec09315f34c65888c20eed958)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
0089bb9ad0 documentation/dev-manual/dev-manual-model.xml: Updated wording for list
The list specifically named four kernels supported.  I changed it so
it would say "several kernels".

(From yocto-docs rev: b6c34f86c1f3724c1416b8fb7770e1c33587e065)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:30 +01:00
Scott Rifenbark
ce8d4157db documentation/dev-manual/dev-manual-model.xml: BSP Layer step updated
Several things were out of date in step five of the BSP Creation overview.

(From yocto-docs rev: ec06bd4f7bb1764e4a37328a51923d7b707d19e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
66b18cb5cd documentation/dev-manual/dev-manual-model.xml: Fixed link
The link and wording for the YP Downloads page on the website were
wrong.  Fixed it up.

(From yocto-docs rev: 5baf847c9b5b8af07c8945921352d3aba2a9cfa8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bbf33914ea documentation/dev-manual/dev-manual-common-tasks.xml: Font corrected.
(From yocto-docs rev: 0fab3eecf7f67ae890ff4fc2f6c12fed4aa4d897)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
e1e12bfd0c documentation/dev-manual/dev-manual-common-tasks.xml: Section name fixed.
(From yocto-docs rev: 6c5724d8c0e75efc22dd2f4477a797afeaed5347)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:29 +01:00
Scott Rifenbark
bc7f18c61d documentation/dev-manual/dev-manual-common-tasks.xml: fixed path
Added more detail to the pathname for the example
formfactor_0.0.bbappend file.

(From yocto-docs rev: 32e60999494bb5b69d683008ad804613e4b99d07)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
89e2958475 documentation/dev-manual/dev-manual-common-tasks.xml: link and output fixed
Fixed a reference to Yocto Project Files and provided a link.

Put in an updated version of the meta/recipes-bsp/formfactor/
formfactor_0.0.bb file in the example.

(From yocto-docs rev: 05001174d2337a91e839e991a3e9ecd6657a56f4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
943c6917e6 documentation/dev-manual/dev-manual-start.xml: shell output examples updated
Updated various shell output examples created from cloning various
Git repositories, etc.

(From yocto-docs rev: ed167b1643a60ab30c09c2f42baebf781564ca20)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
2b6e86beae documentation/dev-manual/dev-manual-start.xml: updated output for bare clone
Updated the shell output example when user creates a bare clone
of kernel.  We use linux-yocto-3.2 here.

(From yocto-docs rev: e24beac8c8b6c65f94b71f36bf9f5d918ee4375e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
42a9a50771 documentation/dev-manual/dev-manual-newbie.xml: Tag example fixed
The example that creates a local branch based on a release tag
in the "Repositories, Tags, and Branches" section was not optimal.
Darren Hart informed me that naming a local branch the same name
as a tag confuses Git.  Plus, the "-b" option was misplaced.
Renamed the local branch to have "my-" in front of it and moved
the "-b" option earlier in the command.

(From yocto-docs rev: 24ab16d18fb317efb86d2c4ddb2ac1a1449df519)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:28 +01:00
Scott Rifenbark
74c34c9d3c documentation/dev-manual/dev-manual-newbie.xml: Fixed branch example
The example in the "Repositories, Tags, and Branches" section that
creates a local branch tracking the upstream branch is incorrect.
The syntax should be
"git checkout -b &DISTRO_NAME; origin/&DISTRO_NAME;".

Fixed it.

(From yocto-docs rev: 7b47dd460f240a0d7f07edf2767bcad1ddc9d4c3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
e9b8cf485c documentation/dev-manual/dev-manual-newbie.xml: Link added for TOPDIR
(From yocto-docs rev: e02c1762fadd22f6ffc06e91ac82ebb59a7a7f68)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
2863d953bd documentation/dev-manual/dev-manual-intro.xml: Hob and BA added
Added Hob and Build Appliance to the list of other stuff the user
might want to reference.

(From yocto-docs rev: 74ca0a95f0ea1b2045a42f0895ba874bdfa2d46c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Scott Rifenbark
716bdd4bf5 documentation/dev-manual/dev-manual-start.xml: Misc fixes
Better wording for the role of BitBake.  Updated shell output for
the clone of poky.

(From yocto-docs rev: 0f7d9557413827f82388d3fe677109074f04e30c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:27 +01:00
Paul Eggleton
dfecd3e3d7 documentation/yocto-project-qs/yocto-project-qs.xml: Package requirements
The following packages no longer need to be installed on the host
system:

* python-psyco
* help2man
* cvs
* hg

Additionally, linuxdoc-tools was mentioned twice in the Fedora list.

(From yocto-docs rev: bf7f37e040e5d5e19738f4c3a313acfd406351e3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
142de43be2 documentation/dev-manual/dev-manual-common-tasks.xml: Fix customizing example
As suggested by Paul Eggleton and Richard Purdie, the example that
describes another method for creating a custom image was modified
so that it is based on an existing recipe instead of requiring a
new image.

Reported-by: Paul Eggleton <paul.eggleton@linux.intel.com>
(From yocto-docs rev: b5b32be9087c3d1c8e8d97751ce2cce09829f23b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 21:00:26 +01:00
Scott Rifenbark
752c707df3 documentation/poky.ent: Changed "latest" to "current"
Needed to change this so that the manuals will build correctly and
manual links will not point to the "latest" version of manuals on the
YP website.  This change should have been made prior to the final
1.2 build so that it would have been in the 1.2 tarball.

(From yocto-docs rev: a8615e05aef205629c832041f30c76567d8359bd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-05-01 20:57:24 +01:00
Richard Purdie
d20a24310e self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: fd989e1bceef6df36619ba8944c8141abefd282e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:21:45 +01:00
Richard Purdie
8e04664ffd self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 117ca04008415ed0e6e10dcd373ab5f685b3225a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:17:25 +01:00
Dongxiao Xu
3ab5d73f0c sanity.bbclass: Add a new case to issue sanity_check()
When a "SanityCheck" event is received, issue sanity_check() and send
a "SanityCheckPassed" event back if it succeeds.

(From OE-Core rev: 19704f9e69ecf09531687385b478b47f49fe372d)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:47 +01:00
Dongxiao Xu
33f048240d Hob: Issue sanity check after parse is completed
In the original scheme, the sanity check was part of the parsing
process. If the sanity check failed, parsing failed as well and the
values shown in the Hob GUI might not be correct.

With this commit, Hob will actively issue sanity_check() after the
parsing is completed.

This fixes [YOCTO #2361]

(Bitbake rev: 36968815dcc91759eeacb308bf4b294af416eee5)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:42 +01:00
Dongxiao Xu
0bf04aa4ad Hob: Add proxy setting into setting's md5
If the user changes the proxy settings, we reparse the configuration
because it may require a sanity check.

(Bitbake rev: 0be54917cd88ea8f110027a7840ac69a411fd589)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:36 +01:00
Dongxiao Xu
612555e6fe event.py: Add SanityCheck and SanityCheckPassed events
(Bitbake rev: 4d7bf9d813229b78b1cd87d06f7042e7923b7db4)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-24 10:15:29 +01:00
Richard Purdie
35196ff703 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 85bebd85c4f6603ac8fc1290121c34b92cc434f9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:13:00 +01:00
Lianhao Lu
0a48c697d7 pseudo: PR bump.
Bump PR value due to the commit
c6c701f424aeb502d20ff02d02712e56f4e259a5.

(From OE-Core rev: b6ee2880fccf04923ede31256ea418451cbf2e46)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:10:39 +01:00
Scott Rifenbark
7e56770a60 documentation: Updated Manual Revision Tables again.
After some discussion with Song and Richard, the dates in the
manual revision tables have been updated to "April 2012" for the
1.2 release.

(From yocto-docs rev: b3fc2ec7c5aedb8ea0a2d502bdcd7e8f4092ed96)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:16 +01:00
Scott Rifenbark
9a548f0ee4 documentation: Replacements for "1.1" and "edison", etc.
I did a quick and dirty scrub over the manuals for the strings
"1.1" and "edison".  I found some instances that were not properly
variablized.  I also discovered some references to
linux-yocto-3.0-1.1.x.  All but one instance of this needed to be
changed to linux-yocto-3.2.

(From yocto-docs rev: 620fb4b7626defcefc8a039de09ae4599ee7f454)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:09 +01:00
Scott Rifenbark
946c650a47 documentation: Manual Revision Tables updated
Updated the revision tables in the five manuals that have them.
Used "May 2012" as the date.

(From yocto-docs rev: 0d4d46ba300c07ff9c73186506be5b409bef9d1b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:08:02 +01:00
Scott Rifenbark
f99c947c32 documentation/yocto-project-qs/yocto-project-qs.xml: Added Build Appliance
Added a blurb about the Build Appliance to the start of the QS.

(From yocto-docs rev: b2766121c05740300fd5a6cea2f3b8a2f62db6e5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:55 +01:00
Tom Zanussi
9ffbd2ef22 documentation/dev-manual/dev-manual-common-tasks.xml: removed kernel26
kernel26 is now obsolete, so remove mention of it from the docs.

(From yocto-docs rev: 7b9da106d746192f802095584b04e3ee8347eabd)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:48 +01:00
Scott Rifenbark
66625417b4 documentation/poky-ref-manual/ref-images.xml: added link
Added a link to the Build Appliance page in the description of the
self-hosted image.

(From yocto-docs rev: 719ba4308489b29eefa7f08ddffb65bd5e41fc2c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:40 +01:00
Joshua Lock
e95ce40abd scripts/hob: disable sanity checks when launching
This enables us to use the GUI to change any settings which might cause
sanity checks to fail, such as the proxy configuration.

(From OE-Core rev: fe98d1c7159636f123b27292bbd4cc224b532bf0)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:33 +01:00
Joshua Lock
4a83ebbee0 sanity.bbclass: add variable to disable the sanity checks
It's useful for Hob to be able to disable the sanity checks completely
without marking them as passed so that the user can get into the GUI to
configure their settings, etc.

Add a variable, DISABLE_SANITY_CHECKS, to do so.

(From OE-Core rev: b022641f939bcfcdaddddc4db3af4d2dc70de832)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:26 +01:00
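A minimal sketch of how the new variable might be set from the shell, assuming
a standard build/conf/local.conf location; the exact value semantics are not
spelled out in the commit message:

    $ echo 'DISABLE_SANITY_CHECKS = "1"' >> build/conf/local.conf
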
Richard Purdie
90705b36ad python: Fix various contamination issues leading to broken/missing c modules
The move of libcrypto to /lib instead of /usr/lib has broken the _hashlib module
compilation. There were also a number of other failing modules which should
have been building correctly. This turned out to be partly the /lib issue
and partly a number of native paths creeping into compiler command lines.

These changes add /lib to the search directories and remove
a number of host contamination issues within setup.py. Post release we
should really go through this file further and just delete large sections
of it, as it's hard to be sure what strange paths python is injecting as
search paths.

This patch also fixes issues where re-execution of the compile task
would corrupt the Makefile in various ways, again leading to puzzling
paths within the configuration.

(From OE-Core rev: 20e2761e1da1cb5dcd267e161f2a6b6a429e9f39)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:19 +01:00
Richard Purdie
be5a5c7e7b bitbake.conf: Add a STAGING_BASELIBDIR variable that recipes can use to find base_libdir
(From OE-Core rev: 4697911991caa2f2a21dd43f54e0c4404d722873)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:12 +01:00
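One hedged way a recipe author could confirm the new variable is visible,
assuming an arbitrary recipe such as zlib (the recipe name is only an example):

    $ bitbake -e zlib | grep '^STAGING_BASELIBDIR='
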
Joshua Lock
64471e9340 hob: enable sanity checks after launch
To ensure the user's configuration is sanity tested, enable the sanity
checks after the GUI has started but before any parsing is done.

(Bitbake rev: 244ce2b900ae6cecbeeccfe2056e61c132476261)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-23 23:07:05 +01:00
Richard Purdie
4becd60e65 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: b19af63)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:08:34 +01:00
Richard Purdie
9fcfda78b9 poky-tiny: Drop now unneeded DISTRO_FEATURES_LIBC_TOOLCHAIN (after gettext fix)
After the recent gettext dependency fix (commit 6e5cb40dfa
"gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values"),
it's no longer necessary to have these options to build meta-toolchain.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 16:05:58 +01:00
Richard Purdie
8558c3e1f4 initramfs-live-boot: Disable unionfs until its issues with the system rootdir are resolved
There are issues with the current unionfs when making a union mount over "/".
Until these are resolved we can't use unionfs for live booting so disable this
temporarily as a workaround.

unionfs is usable in other circumstances.

[YOCTO #2331 workaround]

(From OE-Core rev: 60ee26ae23132b916019d58e20b8c2e1ddd2b471)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
0bfb42dbb6 pseudo: Drop nativesdk wrapper and link against old memcpy symbol
The -nativesdk pseudo wrapper setting LD_LIBRARY_PATH turned out to be a
bad idea since it can mix up different libc and lib-dl versions which
may or may not work depending on the phase of the moon.

As an alternative to solving the original problem, this patch drops the
symbol version requirement on memcpy, which allows pseudo to work with
libcs back to 2.7; this should be sufficient for our supported targets
using nativesdk.

[YOCTO #2299]
[YOCTO #2351]

(From OE-Core rev: c6c701f424aeb502d20ff02d02712e56f4e259a5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
37b069ea5d pseudo: Fix bashisms
(From OE-Core rev: 90e22bbb316088fa951d51e75de4e5424bd51ed6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:42 +01:00
Richard Purdie
6d7260e8f6 package.bbclass: Ensure kernel modules get stripped
Kernel modules are not marked as executable but we do expect to strip them.
This patch adds in the missing code to ensure we do this. Without this, images
are getting significantly bloated in size.

(From OE-Core rev: 00b0a5f2f51bb3f88bbb9ae558c2859e3c1c406c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
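A rough spot-check that the installed modules were in fact stripped; the rootfs
location under tmp/work is an assumption and will vary by image and machine:

    $ find tmp/work -path '*/rootfs/lib/modules/*' -name '*.ko' \
          -exec file {} + | grep -c 'not stripped'

A count of 0 suggests every module was stripped.
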
Richard Purdie
236bda9ed6 gettext.bbclass: Ensure we don't overwrite other DEPENDS_GETTEXT values
In particular, this overwrites the value from cross-canadian.bbclass in
some cases which isn't the desired behaviour and unnecessarily
complicates/breaks the dependency chain.

(From OE-Core rev: 751ead4fa7d4120de906a1d9cb1d5a29357bebad)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Richard Purdie
375835092c qemu: Backport a patch to solve SSE2 instruction emulation issues
This fix addresses various issues seen in qemux86-64 images:
 * scroll bars in matchbox-terminal not working
 * files not appearing in pcmanfm
 * warnings on the console from glib/gobject about invalid gdouble values

It's due to an emulation issue in qemu which the backported patch fixes.

I managed to debug it to a specific function, and Khem found the qemu patch
to backport, thanks Khem!

[YOCTO #1906]

(From OE-Core rev: 69d083f8b8d8f7d095ed5682d305870c4d93fe62)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-22 15:56:41 +01:00
Scott Rifenbark
2c3d4f5bee documentation/bsp-guide/bsp.xml: spelling corrected.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: c0ee8ce391114f7a5b4f1c59fdf997ba4f3bcf75)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:15 +01:00
Scott Rifenbark
45114a9df0 documentation/poky-ref-manual/ref-images.xml: Added self-hosted image
I added the self-hosted-image to the list of images.

(From yocto-docs rev: a8265cb523705a374d23bf60aab5b7969ad937fc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 16:42:05 +01:00
Richard Purdie
08290c6003 self-hosted-image: Update poky revision to point at the 1.2 release branch
(From OE-Core rev: 00256125873ff6f1630743a712e882e5f473a9d2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2012-04-18 15:59:56 +01:00
Elizabeth Flanagan
729e7f774c distro.conf: Flipping for denzil
Flipping values in distro.conf for the upcoming release.

Signed-off-by: Elizabeth Flanagan <elizabeth.flanagan@intel.com>
2012-04-18 15:55:32 +01:00
7918 changed files with 464641 additions and 569674 deletions

51
.gitignore vendored

@@ -1,34 +1,41 @@
*.pyc
*.pyo
/*.patch
/.repo/
/build*/
build*/conf/local.conf
build*/conf/bblayers.conf
build*/downloads
build*/tmp/
build*/sstate-cache
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*/
buildtools/
meta-*
!meta-skeleton
!meta-selftest
hob-image-*.bb
!meta-demoapps
*.swp
*.orig
*.rej
*~
!meta-poky
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported
/documentation/*/eclipse/
/documentation/*/*.html
/documentation/*/*.pdf
/documentation/*/*.tgz
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.html
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.pdf
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.tgz
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
documentation/adt-manual/adt-manual.html
documentation/adt-manual/adt-manual.pdf
documentation/adt-manual/adt-manual.tgz
documentation/dev-manual/dev-manual.html
documentation/dev-manual/dev-manual.pdf
documentation/dev-manual/dev-manual.tgz
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf
documentation/mega-manual/mega-manual.html
documentation/mega-manual/mega-manual.pdf
documentation/mega-manual/mega-manual.tgz

View File

@@ -1,2 +0,0 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}

28
LICENSE

@@ -1,20 +1,14 @@
Different components of OpenEmbedded are under different licenses (a mix
of MIT and GPLv2). See LICENSE.GPL-2.0-only and LICENSE.MIT for further
details of the individual licenses.
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:
All metadata is MIT licensed unless otherwise stated. Source code
included in tree for individual recipes (e.g. patches) are under
the LICENSE stated in the associated recipe (.bb file) unless
otherwise stated.
bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)
which cover the components in those subdirectories. This means all
metadata is MIT licensed unless otherwise stated. Source code included
in tree for individual recipes is under the LICENSE stated in the .bb
file for those software projects unless otherwise stated.
License information for any other files is either explicitly stated
or defaults to GPL version 2 only.
Individual files contain the following style tags instead of the full license
text to identify their license:
SPDX-License-Identifier: GPL-2.0-only
SPDX-License-Identifier: MIT
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/
or defaults to GPL version 2.

View File

@@ -1,288 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
Note:
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: GPL-2.0-only
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/

View File

@@ -1,25 +0,0 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Note:
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: MIT
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/

View File

@@ -1,71 +0,0 @@
OpenEmbedded-Core and Yocto Project Maintainer Information
==========================================================
OpenEmbedded and Yocto Project work jointly together to maintain the metadata,
layers, tools and sub-projects that make up their ecosystems.
The projects operate through collaborative development. This currently takes
place on mailing lists for many components as the "pull request on github"
workflow works well for single or small numbers of maintainers but we have
a large number, all with different specialisms and benefit from the mailing
list review process. Changes therefore undergo peer review through mailing
lists in many cases.
This file aims to acknowledge people with specific skills/knowledge/interest
both to recognise their contributions but also empower them to help lead and
curate those components. Where we have people with specialist knowledge in
particular areas, during review patches/feedback from these people in these
areas would generally carry weight.
This file is maintained in OE-Core but may refer to components that are separate
to it if that makes sense in the context of maintainership. The README of specific
layers and components should ultimately be definitive about the patch process and
maintainership for the component.
Recipe Maintainers
------------------
See meta/conf/distro/include/maintainers.inc
Component/Subsystem Maintainers
-------------------------------
* Kernel (inc. linux-yocto, perf): Bruce Ashfield
* Reproducible Builds: Joshua Watt
* Toaster: David Reyna
* Hash-Equivalence: Joshua Watt
* Recipe upgrade infrastructure: Alex Kanavin
* Toolchain: Khem Raj
* ptest-runner: Aníbal Limón
* opkg: Alex Stewart
* devtool: Saul Wold
* eSDK: Saul Wold
* overlayfs: Vyacheslav Yurkov
Maintainers needed
------------------
* Pseudo
* Layer Index
* recipetool
* QA framework/automated testing
* error reporting system/web UI
* wic
* Patchwork
* Patchtest
* Matchbox
* Sato
* Autobuilder
Layer Maintainers needed
------------------------
* meta-gplv2 (ideally new strategy but active maintainer welcome)
Shadow maintainers/development needed
--------------------------------------
* toaster
* bitbake

View File

@@ -1,5 +0,0 @@
Some project contributors who are sadly no longer with us:
Greg Gilbert (treke) - Ahead of his time with licensing
Thomas Wood (thos) - Creator of the original sato
Scott Rifenbark (scottrif) - Our long standing techwriter whose words live on

View File

@@ -1,35 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile.sphinx clean publish
publish: Makefile.sphinx html singlehtml
rm -rf $(BUILDDIR)/$(DESTDIR)/
mkdir -p $(BUILDDIR)/$(DESTDIR)/
cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html
clean:
@rm -rf $(BUILDDIR)
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

29
README Normal file

@@ -0,0 +1,29 @@
Poky
====
Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers which extend the systems capabilities in a modular way.
As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/

View File

@@ -1,29 +0,0 @@
OpenEmbedded-Core
=================
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/
The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/
Contributing
------------
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.
Mailing list:
https://lists.openembedded.org/g/openembedded-core
Source code:
https://git.openembedded.org/openembedded-core/

469
README.hardware Normal file

@@ -0,0 +1,469 @@
Poky Hardware README
====================
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:
http://yoctoproject.org/community/documentation
Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.
QEMU Emulation Targets
======================
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:
* ARM (qemuarm)
* x86 (qemux86)
* x86-64 (qemux86-64)
* PowerPC (qemuppc)
* MIPS (qemumips)
Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.
Hardware Reference Boards
=========================
The following boards are supported by Poky's core layer:
* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
For more information see the board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Consumer Devices
================
The following consumer devices are supported by Poky's core layer:
* Intel Atom based PCs and devices (atom-pc)
For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
Specific Hardware Documentation
===============================
Intel Atom based PCs and devices (atom-pc)
==========================================
The atom-pc MACHINE is tested on the following platforms:
o Asus EeePC 901
o Acer Aspire One
o Toshiba NB305
o Intel Embedded Development Board 1-N450 (Black Sand)
and is likely to work on many unlisted Atom based devices. The MACHINE type
supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversible.
USB Device:
1. Build a live image. This image type consists of a simple filesystem
without a partition table, which is suitable for USB keys, and with the
default setup for the atom-pc machine, this image type is built
automatically for any image you build. For example:
$ bitbake core-image-minimal
2. Use the "dd" utility to write the image to the raw block device. For
example:
# dd if=core-image-minimal-atom-pc.hddimg of=/dev/sdb
If the device fails to boot with "Boot error" displayed, or apparently
stops just after the SYSLINUX version banner, it is likely the BIOS cannot
understand the physical layout of the disk (or rather it expects a
particular layout and cannot handle anything else). There are two possible
solutions to this problem:
1. Change the BIOS USB Device setting to HDD mode. The label will vary by
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode. To write an image to a USB device that will be bootable in
USB-ZIP mode, carry out the following actions:
a. Determine the geometry of your USB device using fdisk:
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
...
Command (m for help): q
b. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 1019 124 62
Where 1019, 124 and 62 are the cylinder, head and sectors/track counts
as reported by fdisk (substitute the values reported for your device).
When the operation has finished and the access LED (if any) on the
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.
c. Copy the contents of the poky image to the USB-ZIP mode device:
# mkdir /tmp/image
# mkdir /tmp/usbkey
# mount -o loop core-image-minimal-atom-pc.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
d. Install the syslinux boot loader:
# syslinux /dev/sdb4
e. Unmount everything:
# umount /tmp/image
# umount /tmp/usbkey
Install the boot device in the target board and configure the BIOS to boot
from it.
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Texas Instruments Beagleboard (beagleboard)
===========================================
The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
2D/3D accelerated graphics, audio, serial, JTAG, and SD/MMC. The xM adds a
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:
o Beagleboard C4
o Beagleboard xM rev A & B
The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
# nand unlock
# nand erase
# nand erase.chip
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org.
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
The following assumes the SD card partition 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
2. Install the boot loaders
# cp MLO-beagleboard /media/boot/MLO
# cp u-boot-beagleboard.bin /media/boot/u-boot.bin
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
4. Install the kernel uImage
# cp uImage-beagleboard.bin /media/boot/uImage
5. Prepare a u-boot script to simplify the boot process
The Beagleboard can be made to boot at this point from the u-boot command
shell. To automate this process, generate a user.scr script as follows.
Install uboot-mkimage (from uboot-mkimage on Ubuntu or uboot-tools on Fedora).
Prepare a script config:
# (cat << EOF
setenv bootcmd 'mmc init; fatload mmc 0:1 0x80300000 uImage; bootm 0x80300000'
setenv bootargs 'console=tty0 console=ttyO2,115200n8 root=/dev/mmcblk0p2 rootwait rootfstype=ext3 ro'
boot
EOF
) > serial-boot.cmd
# mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
# cp boot.scr /media/boot
6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
boot the Beagleboard
Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to setup the getty on the serial line:
SERIAL_CONSOLE_beagleboard = "115200 ttyS2"
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.
(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
same board in an enclosure with accessories. In any case it is fully
compatible with the instructions given here.)
Setup instructions
------------------
You will need the following:
* NFS root setup on your workstation
* TFTP server installed on your workstation
* Straight-thru 9-conductor serial cable (DB9, M/F) connected from your
PC to UART1
* Ethernet connected to the first ethernet port on the board
--- Preparation ---
Note: if you have altered your board's ethernet MAC address(es) from the
defaults, or you need to do so because you want multiple boards on the same
network, then you will need to change the values in the dts file (patch
linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
--- Booting from NFS root ---
Load the kernel and dtb (device tree blob), and boot the system as follows:
1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
files from the Poky build tmp/deploy directory, and make them available on
your TFTP server.
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
3. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
4. Set up the environment in U-Boot:
=> setenv ipaddr <board ip>
=> setenv serverip <tftp server ip>
=> setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
5. Download the kernel and dtb, and boot:
=> tftp 800000 uImage-mpc8315e-rdb.bin
=> tftp 780000 uImage-mpc8315e-rdb.dtb
=> bootm 800000 - 780000
Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================
The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.
Setup instructions
------------------
You will need the following:
* A serial cable - female to female (or female to male + gender changer)
NOTE: cable must be straight through, *not* a null modem cable.
* USB flash drive or hard disk that is able to be powered from the
board's USB port.
* tftp server installed on your workstation
NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.
--- Preparation ---
1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
MACHINE
2) Partition the USB drive so that primary partition 1 is type Linux (83).
Minimum size depends on your root image size - core-image-minimal probably
only needs 8-16MB, other images will need more.
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009e87d
Device Boot Start End Blocks Id System
/dev/sdb1 62 1952751 976345 83 Linux
3) Format partition 1 on the USB as ext3
# mke2fs -j /dev/sdb1
4) Mount partition 1 and then extract the contents of
tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).
# mount /dev/sdb1 /media/sdb1
# cd /media/sdb1
# tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2
5) Unmount the USB drive and then plug it into the board's USB port
6) Connect the board's serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
you are using power-over-ethernet then the board will power up at this point.
8) Start up the board, watch the serial console. Hit Ctrl+C to abort the
autostart if the board is configured that way (it is by default). The
bootloader's fconfig command can be used to disable autostart and configure
the IP settings if you need to change them (default IP is 192.168.1.20).
9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
on the tftp server.
10) If you are going to write the kernel to flash (optional - see "Booting a
kernel directly" below for the alternative), remove the current kernel and
rootfs flash partitions. You can list the partitions using the following
bootloader command:
RedBoot> fis list
You can delete the existing kernel and rootfs with these commands:
RedBoot> fis delete kernel
RedBoot> fis delete rootfs
--- Booting a kernel directly ---
1) Load the kernel using the following bootloader command:
RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin
You should see a message on it being successfully loaded.
2) Execute the kernel:
RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
Note that specifying the command line with -c is important as linux-yocto does
not provide a default command line.
--- Writing a kernel to flash ---
1) Go to your tftp server and gzip the kernel you want in flash. It should
halve the size.
2) Load the kernel using the following bootloader command:
RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz
This should output something similar to the following:
Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000
Calculate the length by subtracting the first number from the second number
and then rounding the result up to the nearest 0x1000.
3) Using the length calculated above, create a flash partition for the kernel:
RedBoot> fis create -b 0x80600000 -l 0x240000 kernel
(change 0x240000 to your rounded length -- change "kernel" to whatever
you want to name your kernel)
--- Booting a kernel from flash ---
To boot the flashed kernel perform the following steps.
1) At the bootloader prompt, load the kernel:
RedBoot> fis load -d -e kernel
(Change the name "kernel" above if you chose something different earlier)
(-e means 'elf', -d 'decompress')
2) Execute the kernel using the exec command as above.
--- Automating the boot process ---
After writing the kernel to flash and testing the load and exec commands
manually, you can automate the boot process with a boot script.
1) RedBoot> fconfig
(Answer the questions not specified here as they pertain to your environment)
2) Run script at boot: true
Boot script:
.. fis load -d -e kernel
.. exec
Enter script, terminate with empty line
>> fis load -d -e kernel
>> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
>>
3) Answer the remaining questions and write the changes to flash:
Update RedBoot non-volatile configuration - continue (y/n)? y
... Erase from 0xbfff0000-0xc0000000: .
... Program from 0x87ff0000-0x88000000 at 0xbfff0000: .
4) Power cycle the board.

View File

@@ -1 +0,0 @@
meta-yocto-bsp/README.hardware.md

View File

@@ -1 +0,0 @@
README.poky.md

View File

@@ -1 +0,0 @@
meta-poky/README.poky.md

View File

@@ -1,15 +0,0 @@
QEMU Emulation Targets
======================
To simplify development, the build system supports building images to
work with the QEMU emulator in system emulation mode. Several architectures
are currently supported in 32 and 64 bit variants:
* ARM (qemuarm + qemuarm64)
* x86 (qemux86 + qemux86-64)
* PowerPC (qemuppc only)
* MIPS (qemumips + qemumips64)
Use of the QEMU images is covered in the Yocto Project Reference Manual.
The appropriate MACHINE variable value corresponding to the target is given
in brackets.

View File

@@ -1,2 +0,0 @@
*min.js binary
*min.css binary

339
bitbake/COPYING Normal file

@@ -0,0 +1,339 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.

bitbake/HEADER Normal file

@@ -0,0 +1,19 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -1,29 +0,0 @@
BitBake is licensed under the GNU General Public License version 2.0. See
LICENSE.GPL-2.0-only for further details.
Individual files contain the following style tags instead of the full license text:
SPDX-License-Identifier: GPL-2.0-only
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/
The following external components are distributed with this software:
* The Toaster Simple UI application is based upon the Django project template, the files of which are covered by the BSD license and are copyright (c) Django Software
Foundation and individual contributors.
* Twitter Bootstrap (including Glyphicons), redistributed under the MIT license
* jQuery is redistributed under the MIT license.
* Twitter typeahead.js redistributed under the MIT license. Note that the JS source has one small modification, so the full unminified file is currently included to make it obvious where this is.
* jsrender is redistributed under the MIT license.
* QUnit is redistributed under the MIT license.
* Font Awesome fonts redistributed under the SIL Open Font License 1.1
* simplediff is distributed under the zlib license.


@@ -1,288 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
Note:
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: GPL-2.0-only
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/


@@ -1,25 +0,0 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Note:
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: MIT
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/


@@ -1,43 +0,0 @@
Bitbake
=======
BitBake is a generic task execution engine that allows shell and Python tasks to be run
efficiently and in parallel while working within complex inter-task dependency constraints.
One of BitBake's main users, OpenEmbedded, takes this core and builds embedded Linux software
stacks using a task-oriented approach.
For information about Bitbake, see the OpenEmbedded website:
https://www.openembedded.org/
BitBake's documentation can be found under the doc directory, or as an integrated
HTML version on the Yocto Project website:
https://docs.yoctoproject.org
Contributing
------------
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches. Note that the linked documentation is aimed at
OpenEmbedded (and its core) rather than BitBake patches, which go to
bitbake-devel@lists.openembedded.org, but the general guidelines still apply.
Once the commit(s) have been created, send the patch with git-send-email. For
example, to send the last commit (HEAD) on the current branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
Mailing list:
https://lists.openembedded.org/g/bitbake-devel
Source code:
https://git.openembedded.org/bitbake/
Testing:
BitBake has a testsuite located in lib/bb/tests/ whose aim is to prevent regressions.
You can run it with "bitbake-selftest". The fetcher in particular is well covered, since
it has so many corner cases, and the datastore has many tests too. Running the testsuite
is recommended before submitting patches, particularly patches that touch the fetcher or
datastore. We also appreciate new test cases and may require them for more obscure issues.
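As an illustration (the module name below is only an example, and this assumes the
bitbake/bin directory is on your PATH), individual test modules can usually be selected
by name:

    bitbake-selftest bb.tests.fetch

Running the modules relevant to your change before sending a patch helps catch fetcher
and datastore regressions early.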


@@ -1,4 +1,6 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
@@ -7,40 +9,256 @@
# Copyright (C) 2005 ROAD GmbH
# Copyright (C) 2006 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import warnings
warnings.simplefilter("default")
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
import optparse
import warnings
from traceback import format_exception
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
__version__ = "1.15.2"
logger = logging.getLogger("BitBake")
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
__version__ = "2.0.1"
class BBConfiguration(object):
"""
Manages build options and configurations for one run
"""
def __init__(self, options):
for key, val in options.__dict__.items():
setattr(self, key, val)
self.pkgs_to_build = []
def get_ui(config):
if config.ui:
interface = config.ui
else:
interface = 'knotty'
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface).main
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default], knotty2." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
def main():
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES. Does not handle any dependencies.",
action = "store", dest = "buildfile", default = None)
parser.add_option("-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
action = "store_false", dest = "abort", default = True)
parser.add_option("-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option("-f", "--force", help = "force run of specified cmd, regardless of stamp status",
action = "store_true", dest = "force", default = False)
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd")
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "prefile", default = [])
parser.add_option("-R", "--postread", help = "read the specified file after bitbake.conf",
action = "append", dest = "postfile", default = [])
parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", help = "don't execute, just go through the motions",
action = "store_true", dest = "dry_run", default = False)
parser.add_option("-S", "--dump-signatures", help = "don't execute, just dump out the signature construction information",
action = "store_true", dest = "dump_signatures", default = False)
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option("-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False)
parser.add_option("-u", "--ui", help = "userinterface to use",
action = "store", dest = "ui")
parser.add_option("-t", "--servertype", help = "Choose which server to use, none, process or xmlrpc",
action = "store", dest = "servertype")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", help = "Run bitbake without UI, the frontend can connect with bitbake server itself",
action = "store_true", dest = "server_only", default = False)
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to",
action = "store", dest = "bind", default = False)
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)
configuration.pkgs_to_build.extend(args[1:])
ui_main = get_ui(configuration)
# Server type can be xmlrpc, process or none currently, if nothing is specified,
# the default server is process
if configuration.servertype:
server_type = configuration.servertype
else:
server_type = 'process'
try:
module = __import__("bb.server", fromlist = [server_type])
server = getattr(module, server_type)
except AttributeError:
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default], none." % servertype)
if configuration.server_only:
if configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
if not configuration.bind:
sys.exit("FATAL: The '--server-only' option requires a name/address to bind to with the -B option.\n")
if configuration.bind and configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
bb.msg.init_msgconfig(configuration.verbose, configuration.debug,
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
# Before we start modifying the environment we should take a pristine
# copy for possible later use
initialenv = os.environ.copy()
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()
server = server.BitBakeServer()
if configuration.bind:
server.initServer((configuration.bind, 0))
else:
server.initServer()
idle = server.getServerIdleCB()
cooker = bb.cooker.BBCooker(configuration, idle, initialenv)
cooker.parseCommandLine()
server.addcooker(cooker)
server.saveConnectionDetails()
server.detach(cooker_logfile)
# Should no longer need to ever reference cooker
del cooker
logger.removeHandler(handler)
if not configuration.server_only:
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("server address: %s, server port: %s" % (server.serverinfo.host, server.serverinfo.port))
return 1
if __name__ == "__main__":
if __version__ != bb.__version__:
sys.exit("Bitbake core version and program version mismatch!")
try:
sys.exit(bitbake_main(BitBakeConfigParameters(sys.argv),
cookerdata.CookerConfiguration()))
except BBMainException as err:
sys.exit(err)
except bb.BBHandledException:
sys.exit(1)
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(1)
traceback.print_exc(5)
sys.exit(ret)


@@ -1,204 +1,12 @@
#!/usr/bin/env python3
# bitbake-diffsigs / bitbake-dumpsig
# BitBake task signature data dump and comparison utility
#
# Copyright (C) 2012-2013, 2017 Intel Corporation
#
# SPDX-License-Identifier: GPL-2.0-only
#
#!/usr/bin/env python
import os
import sys
import warnings
warnings.simplefilter("default")
import argparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
myname = os.path.basename(sys.argv[0])
logger = bb.msg.logger_create(myname)
is_dump = myname == 'bitbake-dumpsig'
def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
'logging.LogRecord',
'bb.command.CommandCompleted',
'bb.command.CommandFailed'])
ret = tinfoil.run_command('findSigInfo', pn, taskname, sigs)
if ret:
while True:
event = tinfoil.wait_event(1)
if event:
if isinstance(event, bb.command.CommandCompleted):
break
elif isinstance(event, bb.command.CommandFailed):
logger.error(str(event))
sys.exit(2)
elif isinstance(event, bb.event.FindSigInfoResult):
result = event.result
elif isinstance(event, logging.LogRecord):
logger.handle(event)
else:
logger.error('No result returned from findSigInfo command')
sys.exit(2)
return result
def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
""" Find the most recent signature files for the specified PN/task """
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if not sigfiles:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif sig1 not in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif sig2 not in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
return latestfiles
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = find_siginfo(tinfoil, key, None, hashes)
recout = []
if not hashfiles:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif hash1 not in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif hash2 not in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
return recout
parser = argparse.ArgumentParser(
description=("Dumps" if is_dump else "Compares") + " siginfo/sigdata files written out by BitBake")
parser.add_argument('-D', '--debug',
help='Enable debug output',
action='store_true')
if is_dump:
parser.add_argument("-t", "--task",
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("sigdatafile1",
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
if len(sys.argv) > 2:
bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
parser.add_argument('-c', '--color',
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument('-d', '--dump',
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
parser.add_argument("-t", "--task",
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if is_dump:
options.color = 'never'
options.dump = True
options.sigdatafile2 = None
options.sigargs = None
if options.debug:
logger.setLevel(logging.DEBUG)
color = (options.color == 'always' or (options.color == 'auto' and sys.stdout.isatty()))
if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if not options.dump and options.sigargs:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
options.sigargs[1])
else:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
if options.dump:
logger.debug("Signature file: %s" % files[-1])
output = bb.siggen.dump_sigfile(files[-1])
else:
if len(files) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
options.taskargs[0], options.taskargs[1]))
sys.exit(1)
# Recurse into signature comparison
logger.debug("Signature file (previous): %s" % files[-2])
logger.debug("Signature file (latest): %s" % files[-1])
output = bb.siggen.compare_sigfiles(files[-2], files[-1], recursecb, color=color)
else:
if options.sigargs:
logger.error('-s/--signature can only be used together with -t/--task')
sys.exit(1)
try:
if not options.dump and options.sigdatafile1 and options.sigdatafile2:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
output = bb.siggen.compare_sigfiles(options.sigdatafile1, options.sigdatafile2, recursecb, color=color)
elif options.sigdatafile1:
output = bb.siggen.dump_sigfile(options.sigdatafile1)
else:
logger.error('Must specify signature file(s) or -t/--task')
parser.print_help()
sys.exit(1)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
if output:
print('\n'.join(output))
bb.siggen.dump_sigfile(sys.argv[1])


@@ -1 +0,0 @@
bitbake-diffsigs

bitbake/bin/bitbake-dumpsig Executable file

@@ -0,0 +1,9 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.siggen
bb.siggen.dump_sigfile(sys.argv[1])


@@ -1,50 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2021 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#
import argparse
import io
import os
import sys
import warnings
warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.tinfoil
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Bitbake Query Variable")
parser.add_argument("variable", help="variable name to query")
parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
args = parser.parse_args()
if args.unexpand and not args.value:
print("--unexpand only makes sense with --value")
sys.exit(1)
if args.flag and not args.value:
print("--flag only makes sense with --value")
sys.exit(1)
with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=2)
d = tinfoil.parse_recipe(args.recipe)
else:
tinfoil.prepare(quiet=2, config_only=True)
d = tinfoil.config_data
if args.flag:
print(str(d.getVarFlag(args.variable, args.flag, expand=(not args.unexpand))))
elif args.value:
print(str(d.getVar(args.variable, expand=(not args.unexpand))))
else:
bb.data.emit_var(args.variable, d=d, all=True)


@@ -1,169 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2019 Garmin Ltd.
#
# SPDX-License-Identifier: GPL-2.0-only
#
import argparse
import hashlib
import logging
import os
import pprint
import sys
import threading
import time
import warnings
warnings.simplefilter("default")
try:
import tqdm
ProgressBar = tqdm.tqdm
except ImportError:
class ProgressBar(object):
def __init__(self, *args, **kwargs):
pass
def __enter__(self):
return self
def __exit__(self, *args, **kwargs):
pass
def update(self):
pass
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
DEFAULT_ADDRESS = 'unix://./hashserve.sock'
METHOD = 'stress.test.method'
def main():
def handle_stats(args, client):
if args.reset:
s = client.reset_stats()
else:
s = client.get_stats()
pprint.pprint(s)
return 0
def handle_stress(args, client):
def thread_main(pbar, lock):
nonlocal found_hashes
nonlocal missed_hashes
nonlocal max_time
client = hashserv.create_client(args.address)
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
max_time = max(elapsed, max_time)
pbar.update()
max_time = 0
found_hashes = 0
missed_hashes = 0
lock = threading.Lock()
total_requests = args.clients * args.requests
start_time = time.perf_counter()
with ProgressBar(total=total_requests) as pbar:
threads = [threading.Thread(target=thread_main, args=(pbar, lock), daemon=False) for _ in range(args.clients)]
for t in threads:
t.start()
for t in threads:
t.join()
elapsed = time.perf_counter() - start_time
with lock:
print("%d requests in %.1fs. %.1f requests per second" % (total_requests, elapsed, total_requests / elapsed))
print("Average request time %.8fs" % (elapsed / total_requests))
print("Max request time was %.8fs" % max_time)
print("Found %d hashes, missed %d" % (found_hashes, missed_hashes))
if args.report:
with ProgressBar(total=args.requests) as pbar:
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
outhash = hashlib.sha256()
outhash.update(args.outhash_seed.encode('utf-8'))
outhash.update(str(i).encode('utf-8'))
client.report_unihash(taskhash.hexdigest(), METHOD, outhash.hexdigest(), taskhash.hexdigest())
with lock:
pbar.update()
parser = argparse.ArgumentParser(description='Hash Equivalence Client')
parser.add_argument('--address', default=DEFAULT_ADDRESS, help='Server address (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
subparsers = parser.add_subparsers()
stats_parser = subparsers.add_parser('stats', help='Show server stats')
stats_parser.add_argument('--reset', action='store_true',
help='Reset server stats')
stats_parser.set_defaults(func=handle_stats)
stress_parser = subparsers.add_parser('stress', help='Run stress test')
stress_parser.add_argument('--clients', type=int, default=10,
help='Number of simultaneous clients')
stress_parser.add_argument('--requests', type=int, default=1000,
help='Number of requests each client will perform')
stress_parser.add_argument('--report', action='store_true',
help='Report new hashes')
stress_parser.add_argument('--taskhash-seed', default='',
help='Include string in taskhash')
stress_parser.add_argument('--outhash-seed', default='',
help='Include string in outhash')
stress_parser.set_defaults(func=handle_stress)
args = parser.parse_args()
logger = logging.getLogger('hashserv')
level = getattr(logging, args.log.upper(), None)
if not isinstance(level, int):
raise ValueError('Invalid log level: %s' % args.log)
logger.setLevel(level)
console = logging.StreamHandler()
console.setLevel(level)
logger.addHandler(console)
func = getattr(args, 'func', None)
if func:
client = hashserv.create_client(args.address)
return func(args, client)
return 0
if __name__ == '__main__':
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
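
The stress and report paths above boil down to two client calls; a minimal standalone sketch, assuming a hash equivalence server is already listening on the default socket and using an invented task hash:

import hashlib
import hashserv

client = hashserv.create_client('unix://./hashserve.sock')
taskhash = hashlib.sha256(b'example-task').hexdigest()
# get_unihash() returns a unified hash if the server knows one, else a false value
unihash = client.get_unihash('stress.test.method', taskhash)
print(unihash or 'no equivalent hash recorded')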


@@ -1,66 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2018 Garmin Ltd.
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys
import logging
import argparse
import sqlite3
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
VERSION = "1.0.0"
DEFAULT_BIND = 'unix://./hashserve.sock'
def main():
parser = argparse.ArgumentParser(description='Hash Equivalence Reference Server. Version=%s' % VERSION,
epilog='''The bind address is the path to a unix domain socket if it is
prefixed with "unix://". Otherwise, it is an IP address
and port in form ADDRESS:PORT. To bind to all addresses, leave
the ADDRESS empty, e.g. "--bind :8686". To bind to a specific
IPv6 address, enclose the address in "[]", e.g.
"--bind [::1]:8686"'''
)
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
args = parser.parse_args()
logger = logging.getLogger('hashserv')
level = getattr(logging, args.log.upper(), None)
if not isinstance(level, int):
raise ValueError('Invalid log level: %s' % args.log)
logger.setLevel(level)
console = logging.StreamHandler()
console.setLevel(level)
logger.addHandler(console)
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server.serve_forever()
return 0
if __name__ == '__main__':
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
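
A short sketch of the same create_server() call with the bind forms the epilog describes; the socket path, database file and port below are illustrative rather than taken from the script:

import hashserv

# 'unix://./hashserve.sock'  -> unix domain socket in the current directory
# ':8686'                    -> TCP on all interfaces, port 8686
# '[::1]:8686'               -> TCP on the IPv6 loopback only
server = hashserv.create_server('unix://./hashserve.sock', './hashserv.db',
                                upstream=None, read_only=True)
server.serve_forever()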


@@ -1,102 +1,595 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# See the help output for details on available commands.
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2011-2015 Intel Corporation
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Copyright (C) 2012 Intel Corporation
import cmd
import logging
import warnings
import os
import sys
import argparse
import warnings
warnings.simplefilter("default")
import fnmatch
from collections import defaultdict
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.tinfoil
import bb.msg
logger = bb.msg.logger_create('bitbake-layers', sys.stdout)
def main():
parser = argparse.ArgumentParser(
description="BitBake layers utility",
epilog="Use %(prog)s <subcommand> --help to get help on a specific command",
add_help=False)
parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
parser.add_argument('-F', '--force', help='Force add without recipe parse verification', action='store_true')
parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
global_args, unparsed_args = parser.parse_known_args()
# Help is added here rather than via add_help=True, as we don't want it to
# be handled by parse_known_args()
parser.add_argument('-h', '--help', action='help', default=argparse.SUPPRESS,
help='show this help message and exit')
subparsers = parser.add_subparsers(title='subcommands', metavar='<subcommand>')
subparsers.required = True
if global_args.debug:
logger.setLevel(logging.DEBUG)
elif global_args.quiet:
logger.setLevel(logging.ERROR)
# Need to re-run logger_create with color argument
# (will be the same logger since it has the same name)
bb.msg.logger_create('bitbake-layers', output=sys.stdout,
color=global_args.color,
level=logger.getEffectiveLevel())
plugins = []
tinfoil = bb.tinfoil.Tinfoil(tracking=True)
tinfoil.logger.setLevel(logger.getEffectiveLevel())
try:
tinfoil.prepare(True)
for path in ([topdir] +
tinfoil.config_data.getVar('BBPATH').split(':')):
pluginpath = os.path.join(path, 'lib', 'bblayers')
bb.utils.load_plugins(logger, plugins, pluginpath)
registered = False
for plugin in plugins:
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")
sys.exit(1)
args = parser.parse_args(unparsed_args, namespace=global_args)
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
tinfoil.parse_recipes()
tinfoil.config_data.enableTracking()
return args.func(args)
finally:
tinfoil.shutdown()
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2
if __name__ == "__main__":
try:
ret = main()
except bb.BBHandledException:
ret = 1
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
logger = logging.getLogger('BitBake')
warnings.filterwarnings("ignore", category=DeprecationWarning)
def main(args):
# Set up logging
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
cmds = Commands(initialenv)
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
cmds.onecmd(' '.join(args))
else:
cmds.do_help('')
return cmds.returncode
class Commands(cmd.Cmd):
def __init__(self, initialenv):
cmd.Cmd.__init__(self)
self.returncode = 0
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
self.bblayers = (self.config_data.getVar('BBLAYERS', True) or "").split()
def register_idle_function(self, function, data):
pass
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
self.cooker_data.appends = self.cooker.appendlist
def check_prepare_cooker(self):
if not self.cooker_data:
self.prepare_cooker()
def default(self, line):
"""Handle unrecognised commands"""
sys.stderr.write("Unrecognised command or option\n")
self.do_help('')
def do_help(self, topic):
"""display general help or help on a specified command"""
if topic:
sys.stdout.write('%s: ' % topic)
cmd.Cmd.do_help(self, topic.replace('-', '_'))
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = self.get_names()
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
doc = getattr(self, procname).__doc__
if doc:
sys.stdout.write(" %s\n" % doc.splitlines()[0])
def do_show_layers(self, args):
"""show current configured layers"""
self.check_prepare_cooker()
logger.plain('')
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))
def version_str(self, pe, pv, pr = None):
verstr = "%s" % pv
if pr:
verstr = "%s-%s" % (verstr, pr)
if pe:
verstr = "%s:%s" % (pe, verstr)
return verstr
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer that has a higher layer priority)
usage: show-overlayed [-f] [-s]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Note that skipped recipes that
are overlayed will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.check_prepare_cooker()
show_filenames = False
show_same_ver_only = False
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-s':
show_same_ver_only = True
else:
sys.stderr.write("show-overlayed: invalid option %s\n" % arg)
self.do_help('')
return
items_listed = self.list_recipes('Overlayed recipes', None, True, show_same_ver_only, show_filenames, True)
# Check for overlayed .bbclass files
classes = defaultdict(list)
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if os.path.exists(classdir):
for classfile in os.listdir(classdir):
if os.path.splitext(classfile)[1] == '.bbclass':
classes[classfile].append(classdir)
# Locating classes and other files is a bit more complicated than recipes -
# layer priority is not a factor; instead BitBake uses the first matching
# file in BBPATH, which is manipulated directly by each layer's
# conf/layer.conf in turn, thus the order of layers in bblayers.conf is a
# factor - however, each layer.conf is free to either prepend or append to
# BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
# not be exactly the order present in bblayers.conf either.
bbpath = str(self.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
if not overlayed_class_found:
logger.plain('=== Overlayed classes ===')
overlayed_class_found = True
mainfile = bb.utils.which(bbpath, os.path.join('classes', classfile))
if show_filenames:
logger.plain('%s' % mainfile)
else:
# We effectively have to guess the layer here
logger.plain('%s:' % classfile)
mainlayername = '?'
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if mainfile.startswith(classdir):
mainlayername = self.get_layer_name(layerdir)
logger.plain(' %s' % mainlayername)
for classdir in classdirs:
fullpath = os.path.join(classdir, classfile)
if fullpath != mainfile:
if show_filenames:
print(' %s' % fullpath)
else:
print(' %s' % self.get_layer_name(os.path.dirname(classdir)))
if overlayed_class_found:
items_listed = True
if not items_listed:
logger.plain('No overlayed files found.')
def do_show_recipes(self, args):
"""list available recipes, showing the layer they are provided by
usage: show-recipes [-f] [-m] [pnspec]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Optionally you may specify
pnspec to match a specified recipe name (supports wildcards). Note that
skipped recipes will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with other available recipes indented underneath
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.check_prepare_cooker()
show_filenames = False
show_multi_provider_only = False
pnspec = None
title = 'Available recipes:'
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-m':
show_multi_provider_only = True
elif not arg.startswith('-'):
pnspec = arg
title = 'Available recipes matching %s:' % pnspec
else:
sys.stderr.write("show-recipes: invalid option %s\n" % arg)
self.do_help('')
return
self.list_recipes(title, pnspec, False, False, show_filenames, show_multi_provider_only)
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.cooker.configuration.data, self.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.cooker.status)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.cooker.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
p = recipe_parts[0]
if len(recipe_parts) > 1:
ver = (None, recipe_parts[1], None)
else:
ver = (None, 'unknown', None)
allproviders[p].append((ver, fn))
if not p in pkg_pn:
pkg_pn[p] = 'dummy'
preferred_versions[p] = (ver, fn)
def print_item(f, pn, ver, layer, ispref):
if f in skiplist:
skipped = ' (skipped)'
else:
skipped = ''
if show_filenames:
if ispref:
logger.plain("%s%s", f, skipped)
else:
logger.plain(" %s%s", f, skipped)
else:
if ispref:
logger.plain("%s:", pn)
logger.plain(" %s %s%s", layer.ljust(20), ver, skipped)
preffiles = []
items_listed = False
for p in sorted(pkg_pn):
if pnspec:
if not fnmatch.fnmatch(p, pnspec):
continue
if len(allproviders[p]) > 1 or not show_multi_provider_only:
pref = preferred_versions[p]
preffile = bb.cache.Cache.virtualfn2realfn(pref[1])[0]
if preffile not in preffiles:
preflayer = self.get_file_layer(preffile)
multilayer = False
same_ver = True
provs = []
for prov in allproviders[p]:
provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
provlayer = self.get_file_layer(provfile)
provs.append((provfile, provlayer, prov[0]))
if provlayer != preflayer:
multilayer = True
if prov[0] != pref[0]:
same_ver = False
if (multilayer or not show_overlayed_only) and (same_ver or not show_same_ver_only):
if not items_listed:
logger.plain('=== %s ===' % title)
items_listed = True
print_item(preffile, p, self.version_str(pref[0][0], pref[0][1]), preflayer, True)
for (provfile, provlayer, provver) in provs:
if provfile != preffile:
print_item(provfile, p, self.version_str(provver[0], provver[1]), provlayer, False)
# Ensure we don't show two entries for BBCLASSEXTENDed recipes
preffiles.append(preffile)
return items_listed
def do_flatten(self, args):
"""flattens layer configuration into a separate output directory.
usage: flatten [layer1 layer2 [layer3]...] <outputdir>
Takes the specified layers (or all layers in the current layer
configuration if none are specified) and builds a "flattened" directory
containing the contents of all layers, with any overlayed recipes removed
and bbappends appended to the corresponding recipes. Note that some manual
cleanup may still be necessary afterwards, in particular:
* where non-recipe files (such as patches) are overwritten (the flatten
command will show a warning for these)
* where anything beyond the normal layer setup has been added to
layer.conf (only the lowest priority number layer's layer.conf is used)
* overridden/appended items from bbappends will need to be tidied up
* when the flattened layers do not have the same directory structure (the
flatten command should show a warning when this will cause a problem)
Warning: if you flatten several layers where another layer is intended to
be used "inbetween" them (in layer priority order) such that recipes /
bbappends in the layers interact, and then attempt to use the new output
layer together with that other layer, you may no longer get the same
build results (as the layer priority order has effectively changed).
"""
arglist = args.split()
if len(arglist) < 1:
logger.error('Please specify an output directory')
self.do_help('flatten')
return
if len(arglist) == 2:
logger.error('If you specify layers to flatten you must specify at least two')
self.do_help('flatten')
return
outputdir = arglist[-1]
if os.path.exists(outputdir) and os.listdir(outputdir):
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.check_prepare_cooker()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
found_layernames = []
found_layerdirs = []
for layerdir in layers:
layername = self.get_layer_name(layerdir)
if layername in layernames:
found_layerdirs.append(layerdir)
found_layernames.append(layername)
for layername in layernames:
if not layername in found_layernames:
logger.error('Unable to find layer %s in current configuration, please run "%s show-layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
return
layers = found_layerdirs
else:
layernames = []
# Ensure a specified path matches our list of layers
def layer_path_match(path):
for layerdir in layers:
if path.startswith(os.path.join(layerdir, '')):
return layerdir
return None
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.cooker.overlayed.iterkeys():
for of in self.cooker.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
logger.plain('Copying files from %s...' % layer )
for root, dirs, files in os.walk(layer):
for f1 in files:
f1full = os.sep.join([root, f1])
if f1full in overlayed:
logger.plain(' Skipping overlayed file %s' % f1full )
else:
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
logger.plain(' Skipping layer config file %s' % f1full )
continue
else:
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.cooker_data.appends:
appends = self.cooker_data.appends[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
if layer_path_match(appendname):
self.apply_append(appendname, fdest)
appended_recipes.append(f1)
# Take care of when some layers are excluded and yet we have included bbappends for those recipes
for recipename in self.cooker_data.appends.iterkeys():
if recipename not in appended_recipes:
appends = self.cooker_data.appends[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
if layer:
if first_append:
self.apply_append(appendname, first_append)
else:
fdest = appendname[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
bb.utils.copyfile(appendname, fdest)
first_append = fdest
# Get the regex for the first layer in our list (which is where the conf/layer.conf file will
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
newpath = os.path.join(outputdir, item[len(layerdir)+1:])
bbfiles_layer.append(newpath)
if bbfiles_layer:
# Check that all important layer files match BBFILES
for root, dirs, files in os.walk(outputdir):
for f1 in files:
ext = os.path.splitext(f1)[1]
if ext in ['.bb', '.bbappend']:
f1full = os.sep.join([root, f1])
entry_found = False
for item in bbfiles_layer:
if fnmatch.fnmatch(f1full, item):
entry_found = True
break
if not entry_found:
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')):
return self.get_layer_name(layerdir)
return "?"
def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
def apply_append(self, appendname, recipename):
appendfile = open(appendname, 'r')
recipefile = open(recipename, 'a')
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
recipefile.writelines(appendfile.readlines())
recipefile.close()
appendfile.close()
def do_show_appends(self, args):
"""list bbappend files and recipe files they apply to
usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.check_prepare_cooker()
if not self.cooker_data.appends:
logger.plain('No append files found')
return
logger.plain('State of append files:')
pnlist = list(self.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.cooker.configuration.data,
self.cooker_data,
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.plain('%s%s:', basename, name_suffix)
for append in appends:
logger.plain(' %s', append)
if best_filename:
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = [], []
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
continue
basename = os.path.basename(filename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.append((basename, list(appends)))
else:
notappended.append(basename)
return appended, notappended
class Config(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(Config, self).__getattribute__(attribute)
except AttributeError:
return None
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)
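
The modern main() above loads plugins from lib/bblayers/ directories on BBPATH and expects each to expose register_commands() and, optionally, tinfoil_init(); a hedged sketch of that plugin shape, with an invented show-bbpath subcommand:

tinfoil = None

def tinfoil_init(instance):
    # main() passes in the prepared tinfoil object before any command runs
    global tinfoil
    tinfoil = instance

def do_show_bbpath(args):
    # main() dispatches here via args.func(args)
    print(tinfoil.config_data.getVar('BBPATH'))

def register_commands(subparsers):
    parser = subparsers.add_parser('show-bbpath', help='Show the configured BBPATH')
    parser.set_defaults(func=do_show_bbpath)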


@@ -1,15 +1,7 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
#!/usr/bin/env python
import os
import sys,logging
import optparse
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
@@ -40,14 +32,12 @@ def main():
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
parser.add_option("-r", "--read-only", help="open database in read-only mode",
action="store_true")
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
@@ -60,6 +50,6 @@ if __name__ == "__main__":
except Exception:
ret = 1
import traceback
traceback.print_exc()
traceback.print_exc(5)
sys.exit(ret)
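
For reference, a sketch of the daemon helpers the option handling above dispatches to, using the variant whose start_daemon() takes the read-only flag; the database path, host, port, log file and log level are placeholders:

import prserv
import prserv.serv

prserv.init_logger('./prserv.log', 'INFO')
prserv.serv.start_daemon('./prserv.sqlite3', 'localhost', 8585, './prserv.log', False)
# ...and from a later invocation:
prserv.serv.stop_daemon('localhost', 8585)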

bitbake/bin/bitbake-runtask (new executable file, 119 lines)

@@ -0,0 +1,119 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
logger = logging.getLogger("BitBake")
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
class BBConfiguration(object):
"""
Manages build options and configurations for one run
"""
def __init__(self, **options):
self.data = {}
self.file = []
self.cmd = None
self.dump_signatures = True
self.prefile = []
self.postfile = []
self.parse_only = True
def __getattr__(self, attribute):
try:
return super(BBConfiguration, self).__getattribute__(attribute)
except AttributeError:
return None
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
"""Display python warning messages using bb.msg"""
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
s = s.split("\n")[0]
bb.msg.warn(None, s)
warnings.showwarning = _showwarning
warnings.simplefilter("ignore", DeprecationWarning)
import bb.event
import bb.cooker
buildfile = sys.argv[1]
taskname = sys.argv[2]
if len(sys.argv) >= 4:
dryrun = sys.argv[3]
else:
dryrun = False
if len(sys.argv) >= 5:
hashfile = sys.argv[4]
p = pickle.Unpickler(file(hashfile, "rb"))
hashdata = p.load()
else:
hashdata = None
handler = bb.event.LogHandler()
logger.addHandler(handler)
#An example to make debug log messages show up
#bb.msg.init_msgconfig(True, 3, [])
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
def worker_fire(event, d):
if isinstance(event, logging.LogRecord):
console.handle(event)
bb.event.worker_fire = worker_fire
bb.event.worker_pid = os.getpid()
initialenv = os.environ.copy()
config = BBConfiguration()
def register_idle_function(self, function, data):
pass
cooker = bb.cooker.BBCooker(config, register_idle_function, initialenv)
config_data = cooker.configuration.data
cooker.status = config_data
cooker.handleCollections(config_data.getVar("BBFILE_COLLECTIONS", 1))
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
buildfile = cooker.matchFile(fn)
fn = bb.cache.Cache.realfn2virtual(buildfile, cls)
cooker.buildSetVars()
# Load data into the cache for fn and parse the loaded cache data
the_data = bb.cache.Cache.loadDataFull(fn, cooker.get_file_appends(fn), cooker.configuration.data)
if taskname.endswith("_setscene"):
the_data.setVarFlag(taskname, "quieterrors", "1")
if hashdata:
bb.parse.siggen.set_taskdata(hashdata["hashes"], hashdata["deps"])
for h in hashdata["hashes"]:
the_data.setVar("BBHASH_%s" % h, hashdata["hashes"][h])
for h in hashdata["deps"]:
the_data.setVar("BBHASHDEPS_%s" % h, hashdata["deps"][h])
ret = 0
if dryrun != "True":
ret = bb.build.exec_task(fn, taskname, the_data)
sys.exit(ret)


@@ -1,77 +0,0 @@
#!/usr/bin/env python3
#
# Copyright (C) 2012 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys, logging
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
try:
import bb
import hashserv
import layerindexlib
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
"bb.tests.color",
"bb.tests.cooker",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.event",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.persist_data",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
"bb.tests.compression",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",
"layerindexlib.tests.cooker"]
for t in tests:
t = '.'.join(t.split('.')[:3])
__import__(t)
# Set-up logging
class StdoutStreamHandler(logging.StreamHandler):
"""Special handler so that unittest is able to capture stdout"""
def __init__(self):
# Override __init__() because we don't want to set self.stream here
logging.Handler.__init__(self)
@property
def stream(self):
# We want to dynamically write wherever sys.stdout is pointing to
return sys.stdout
handler = StdoutStreamHandler()
bb.logger.addHandler(handler)
bb.logger.setLevel(logging.DEBUG)
ENV_HELP = """\
Environment variables:
BB_SKIP_NETTESTS set to 'yes' in order to skip tests using network
connection
BB_TMPDIR_NOCLEAN set to 'yes' to preserve test tmp directories
"""
class main(unittest.main):
def _print_help(self, *args, **kwargs):
super(main, self)._print_help(*args, **kwargs)
print(ENV_HELP)
if __name__ == '__main__':
main(defaultTest=tests, buffer=True)


@@ -1,53 +0,0 @@
#!/usr/bin/env python3
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Copyright (C) 2020 Richard Purdie
#
import os
import sys
import warnings
warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
print("bitbake-server is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
import bb.server.process
lockfd = int(sys.argv[2])
readypipeinfd = int(sys.argv[3])
logfile = sys.argv[4]
lockname = sys.argv[5]
sockname = sys.argv[6]
timeout = float(sys.argv[7])
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
so = open(logfile, 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
logger = logging.getLogger("BitBake")
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
bb.server.process.execServer(lockfd, readypipeinfd, lockname, sockname, timeout, xmlrpcinterface)


@@ -1,553 +0,0 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
import bb
import select
import errno
import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
print("bitbake-worker is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
profiling = False
if sys.argv[1].startswith("decafbadbad"):
profiling = True
try:
import cProfile as profile
except:
import profile
# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
import fcntl
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
#sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
logger = logging.getLogger("BitBake")
worker_pipe = sys.stdout.fileno()
bb.utils.nonblockingfd(worker_pipe)
# Need to guard against multiprocessing being used in child processes
# and multiple processes trying to write to the parent at the same time
worker_pipe_lock = None
handler = bb.event.LogHandler()
logger.addHandler(handler)
if 0:
# Code to write out a log file of all events passing through the worker
logfilename = "/tmp/workerlogfile"
format_str = "%(levelname)s: %(message)s"
conlogformat = bb.msg.BBLogFormatter(format_str)
consolelog = logging.FileHandler(logfilename)
consolelog.setFormatter(conlogformat)
logger.addHandler(consolelog)
worker_queue = queue.Queue()
def worker_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
worker_fire_prepickled(data)
def worker_fire_prepickled(event):
global worker_queue
worker_queue.put(event)
#
# We can end up with write contention with the cooker, it can be trying to send commands
# and we can be trying to send event data back. Therefore use a separate thread for writing
# back data to cooker.
#
worker_thread_exit = False
def worker_flush(worker_queue):
worker_queue_int = b""
global worker_pipe, worker_thread_exit
while True:
try:
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
raise
if worker_thread_exit and worker_queue.empty() and not worker_queue_int:
return
worker_thread = Thread(target=worker_flush, args=(worker_queue,))
worker_thread.start()
def worker_child_fire(event, d):
global worker_pipe
global worker_pipe_lock
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
while(len(data)):
written = worker_pipe.write(data)
data = data[written:]
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
bb.event.worker_fire = worker_fire
lf = None
#lf = open("/tmp/workercommandlog", "w+")
def workerlog_write(msg):
if lf:
lf.write(msg)
lf.flush()
def sigterm_handler(signum, frame):
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
envbackup = {}
fakeroot = False
fakeenv = {}
umask = None
uid = os.getuid()
gid = os.getgid()
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
elif workerdata["umask"]:
umask = workerdata["umask"]
if umask:
# umask might come in as a number or text string..
try:
umask = int(umask, 8)
except TypeError:
pass
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (workerdata["fakerootnoenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
sys.stdout.flush()
sys.stderr.flush()
try:
pipein, pipeout = os.pipe()
pipein = os.fdopen(pipein, 'rb', 4096)
pipeout = os.fdopen(pipeout, 'wb', 0)
pid = os.fork()
except OSError as e:
logger.critical("fork failed: %d (%s)" % (e.errno, e.strerror))
sys.exit(1)
if pid == 0:
def child():
global worker_pipe
global worker_pipe_lock
pipein.close()
bb.utils.signal_on_parent_exit("SIGTERM")
# Save out the PID so that the event handling code can include it in
# the events
bb.event.worker_pid = os.getpid()
bb.event.worker_fire = worker_child_fire
worker_pipe = pipeout
worker_pipe_lock = Lock()
# Make the child the process group leader and ensure no
# child process will be controlled by the current terminal
# This ensures signals sent to the controlling terminal like Ctrl+C
# don't stop the child processes.
os.setsid()
signal.signal(signal.SIGTERM, sigterm_handler)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if umask:
os.umask(umask)
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
the_data.setVar("DATE", workerdata["date"])
the_data.setVar("TIME", workerdata["time"])
for varname, value in extraconfigdata.items():
the_data.setVar(varname, value)
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
if "newhashes" in workerdata:
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
ret = 0
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', taskhash)
the_data.setVar('BB_UNIHASH', unihash)
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
if not the_data.getVarFlag(taskname, 'network', False):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
else:
logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
bb.utils.empty_environment()
for e, v in exports:
os.environ[e] = v
for e in fakeenv:
os.environ[e] = fakeenv[e]
the_data.setVar(e, fakeenv[e])
the_data.setVarFlag(e, 'export', "1")
task_exports = the_data.getVarFlag(taskname, 'exports')
if task_exports:
for e in task_exports.split():
the_data.setVarFlag(e, 'export', '1')
v = the_data.getVar(e)
if v is not None:
os.environ[e] = v
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
except Exception:
if not quieterrors:
logger.critical(traceback.format_exc())
os._exit(1)
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
except:
os._exit(1)
if not profiling:
os._exit(child())
else:
profname = "profile-%s.log" % (fn.replace("/", "-") + "-" + taskname)
prof = profile.Profile()
try:
ret = profile.Profile.runcall(prof, child)
finally:
prof.dump_stats(profname)
bb.utils.process_profilelog(profname)
os._exit(ret)
else:
for key, value in iter(envbackup.items()):
if value is None:
del os.environ[key]
else:
os.environ[key] = value
return pid, pipein, pipeout
class runQueueWorkerPipe():
"""
Abstraction for a pipe between a worker thread and the worker server
"""
def __init__(self, pipein, pipeout):
self.input = pipein
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = b""
def read(self):
start = len(self.queue)
try:
self.queue = self.queue + (self.input.read(102400) or b"")
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
end = len(self.queue)
index = self.queue.find(b"</event>")
while index != -1:
msg = self.queue[:index+8]
assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
worker_fire_prepickled(msg)
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
return (end > start)
def close(self):
while self.read():
continue
if len(self.queue) > 0:
print("Warning, worker child left partial message: %s" % self.queue)
self.input.close()
normalexit = False
class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.cookercfg = None
self.databuilder = None
self.data = None
self.extraconfigdata = None
self.build_pids = {}
self.build_pipes = {}
signal.signal(signal.SIGTERM, self.sigterm_exception)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, self.sigterm_exception)
if "beef" in sys.argv[1]:
bb.utils.set_process_name("Worker (Fakeroot)")
else:
bb.utils.set_process_name("Worker")
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Worker received SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Worker received SIGHUP, shutting down...")
self.handle_finishnow(None)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.kill(os.getpid(), signal.SIGTERM)
def serve(self):
while True:
(ready, _, _) = select.select([self.input] + [i.input for i in self.build_pipes.values()], [] , [], 1)
if self.input in ready:
try:
r = self.input.read()
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue = self.queue + r
except (OSError, IOError):
pass
if len(self.queue):
self.handle_item(b"cookerconfig", self.handle_cookercfg)
self.handle_item(b"extraconfigdata", self.handle_extraconfigdata)
self.handle_item(b"workerdata", self.handle_workerdata)
self.handle_item(b"newtaskhashes", self.handle_newtaskhashes)
self.handle_item(b"runtask", self.handle_runtask)
self.handle_item(b"finishnow", self.handle_finishnow)
self.handle_item(b"ping", self.handle_ping)
self.handle_item(b"quit", self.handle_quit)
for pipe in self.build_pipes:
if self.build_pipes[pipe].input in ready:
self.build_pipes[pipe].read()
if len(self.build_pids):
while self.process_waitpid():
continue
def handle_item(self, item, func):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
try:
func(self.queue[(len(item) + 2):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
self.databuilder.parseBaseConfiguration(worker=True)
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
self.extraconfigdata = pickle.loads(data)
def handle_workerdata(self, data):
self.workerdata = pickle.loads(data)
bb.build.verboseShellLogging = self.workerdata["build_verbose_shell"]
bb.build.verboseStdoutLogging = self.workerdata["build_verbose_stdout"]
bb.msg.loggerDefaultLogLevel = self.workerdata["logdefaultlevel"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
def handle_newtaskhashes(self, data):
self.workerdata["newhashes"] = pickle.loads(data)
def handle_ping(self, _):
workerlog_write("Handling ping\n")
logger.warning("Pong from bitbake-worker!")
def handle_quit(self, data):
workerlog_write("Handling quit\n")
global normalexit
normalexit = True
sys.exit(0)
def handle_runtask(self, data):
fn, task, taskname, taskhash, unihash, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, taskhash, unihash, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
def process_waitpid(self):
"""
Return False if there are no processes awaiting result collection; otherwise
collect the process exit code, close the information pipe and return True.
"""
try:
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
return False
except OSError:
return False
workerlog_write("Exit code of %s for pid %s\n" % (status, pid))
if os.WIFEXITED(status):
status = os.WEXITSTATUS(status)
elif os.WIFSIGNALED(status):
# Per shell conventions for $?, when a process exits due to
# a signal, we return an exit code of 128 + SIGNUM
status = 128 + os.WTERMSIG(status)
task = self.build_pids[pid]
del self.build_pids[pid]
self.build_pipes[pid].close()
del self.build_pipes[pid]
worker_fire_prepickled(b"<exitcode>" + pickle.dumps((task, status)) + b"</exitcode>")
return True
def handle_finishnow(self, _):
if self.build_pids:
logger.info("Sending SIGTERM to remaining %s tasks", len(self.build_pids))
for k, v in iter(self.build_pids.items()):
try:
os.kill(-k, signal.SIGTERM)
os.waitpid(-1, 0)
except:
pass
for pipe in self.build_pipes:
self.build_pipes[pipe].read()
try:
worker = BitbakeWorker(os.fdopen(sys.stdin.fileno(), 'rb'))
if not profiling:
worker.serve()
else:
profname = "profile-worker.log"
prof = profile.Profile()
try:
profile.Profile.runcall(prof, worker.serve)
finally:
prof.dump_stats(profname)
bb.utils.process_profilelog(profname)
except BaseException as e:
if not normalexit:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()
workerlog_write("exiting")
if not normalexit:
sys.exit(1)
sys.exit(0)
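
The worker above exchanges messages with the cooker as pickled payloads wrapped in item-specific byte tags, scanned out of a running byte queue by handle_item() and worker_fire(); an illustrative, self-contained sketch of that framing (not taken from the source) using only the standard library:

import pickle

def frame(item, payload):
    # wrap a pickled payload in <item>...</item> byte tags
    return b"<" + item + b">" + pickle.dumps(payload) + b"</" + item + b">"

def parse_one(queue, item):
    """Return (payload, remaining queue), or (None, queue) if no complete message."""
    start, end = b"<" + item + b">", b"</" + item + b">"
    if not queue.startswith(start):
        return None, queue
    index = queue.find(end)
    if index == -1:
        return None, queue
    return pickle.loads(queue[len(start):index]), queue[index + len(end):]

buf = frame(b"ping", {"seq": 1}) + frame(b"quit", None)
payload, buf = parse_one(buf, b"ping")
print(payload)  # {'seq': 1}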

bitbake/bin/bitdoc (new executable file, 531 lines)

@@ -0,0 +1,531 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2005 Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import optparse, os, sys
# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import bb
import bb.parse
from string import split, join
__version__ = "0.0.2"
class HTMLFormatter:
"""
Simple class to help to generate some sort of HTML files. It is
quite inferior solution compared to docbook, gtkdoc, doxygen but it
should work for now.
We've a global introduction site (index.html) and then one site for
the list of keys (alphabetical sorted) and one for the list of groups,
one site for each key with links to the relations and groups.
index.html
all_keys.html
all_groups.html
groupNAME.html
keyNAME.html
"""
def replace(self, text, *pairs):
"""
From pydoc... almost identical at least
"""
while pairs:
(a, b) = pairs[0]
text = join(split(text, a), b)
pairs = pairs[1:]
return text
def escape(self, text):
"""
Escape string to be conform HTML
"""
return self.replace(text,
('&', '&amp;'),
('<', '&lt;' ),
('>', '&gt;' ) )
def createNavigator(self):
"""
Create the navigator
"""
return """<table class="navigation" width="100%" summary="Navigation header" cellpadding="2" cellspacing="2">
<tr valign="middle">
<td><a accesskey="g" href="index.html">Home</a></td>
<td><a accesskey="n" href="all_groups.html">Groups</a></td>
<td><a accesskey="u" href="all_keys.html">Keys</a></td>
</tr></table>
"""
def relatedKeys(self, item):
"""
Create HTML to link to foreign keys
"""
if len(item.related()) == 0:
return ""
txt = "<p><b>See also:</b><br>"
txts = []
for it in item.related():
txts.append("""<a href="key%(it)s.html">%(it)s</a>""" % vars() )
return txt + ",".join(txts)
def groups(self, item):
"""
Create HTML to link to related groups
"""
if len(item.groups()) == 0:
return ""
txt = "<p><b>See also:</b><br>"
txts = []
for group in item.groups():
txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )
return txt + ",".join(txts)
def createKeySite(self, item):
"""
Create a site for a key. It contains the header/navigator, a heading,
the description, links to related keys and to the groups.
"""
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2><span class="refentrytitle">%s</span></h2>
<div class="refsynopsisdiv">
<h2>Synopsis</h2>
<p>
%s
</p>
</div>
<div class="refsynopsisdiv">
<h2>Related Keys</h2>
<p>
%s
</p>
</div>
<div class="refsynopsisdiv">
<h2>Groups</h2>
<p>
%s
</p>
</div>
</body>
""" % (item.name(), self.createNavigator(), item.name(),
self.escape(item.description()), self.relatedKeys(item), self.groups(item))
def createGroupsSite(self, doc):
"""
Create the Group Overview site
"""
groups = ""
sorted_groups = sorted(doc.groups())
for group in sorted_groups:
groups += """<a href="group%s.html">%s</a><br>""" % (group, group)
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Groups</h2>
%s
</body>
""" % (self.createNavigator(), groups)
def createIndex(self):
"""
Create the index file
"""
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Bitbake Documentation</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Documentation Entrance</h2>
<a href="all_groups.html">All available groups</a><br>
<a href="all_keys.html">All available keys</a><br>
</body>
""" % self.createNavigator()
def createKeysSite(self, doc):
"""
Create an overview of all available keys
"""
keys = ""
sorted_keys = sorted(doc.doc_keys())
for key in sorted_keys:
keys += """<a href="key%s.html">%s</a><br>""" % (key, key)
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Keys</h2>
%s
</body>
""" % (self.createNavigator(), keys)
def createGroupSite(self, gr, items, _description = None):
"""
Create a site for a group:
Group the name of the group, items contain the name of the keys
inside this group
"""
groups = ""
description = ""
# create a section with the group descriptions
if _description:
description += "<h2 Description of Grozp %s</h2>" % gr
description += _description
items.sort(lambda x, y:cmp(x.name(), y.name()))
for group in items:
groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())
return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
%s
<div class="refsynopsisdiv">
<h2>Keys in Group %s</h2>
<pre class="synopsis">
%s
</pre>
</div>
</body>
""" % (gr, self.createNavigator(), description, gr, groups)
def createCSS(self):
"""
Create the CSS file
"""
return """.synopsis, .classsynopsis
{
background: #eeeeee;
border: solid 1px #aaaaaa;
padding: 0.5em;
}
.programlisting
{
background: #eeeeff;
border: solid 1px #aaaaff;
padding: 0.5em;
}
.variablelist
{
padding: 4px;
margin-left: 3em;
}
.variablelist td:first-child
{
vertical-align: top;
}
table.navigation
{
background: #ffeeee;
border: solid 1px #ffaaaa;
margin-top: 0.5em;
margin-bottom: 0.5em;
}
.navigation a
{
color: #770000;
}
.navigation a:visited
{
color: #550000;
}
.navigation .title
{
font-size: 200%;
}
div.refnamediv
{
margin-top: 2em;
}
div.gallery-float
{
float: left;
padding: 10px;
}
div.gallery-float img
{
border-style: none;
}
div.gallery-spacer
{
clear: both;
}
a
{
text-decoration: none;
}
a:hover
{
text-decoration: underline;
color: #FF0000;
}
"""
class DocumentationItem:
"""
A class to hold information about a configuration
item. It contains the key name, description, a list of related names,
and the group this item is contained in.
"""
def __init__(self):
self._groups = []
self._related = []
self._name = ""
self._desc = ""
def groups(self):
return self._groups
def name(self):
return self._name
def description(self):
return self._desc
def related(self):
return self._related
def setName(self, name):
self._name = name
def setDescription(self, desc):
self._desc = desc
def addGroup(self, group):
self._groups.append(group)
def addRelation(self, relation):
self._related.append(relation)
def sort(self):
self._related.sort()
self._groups.sort()
class Documentation:
"""
Holds the documentation... with mappings from key to items...
"""
def __init__(self):
self.__keys = {}
self.__groups = {}
def insert_doc_item(self, item):
"""
Insert the doc item into the internal representation
"""
item.sort()
self.__keys[item.name()] = item
for group in item.groups():
if not group in self.__groups:
self.__groups[group] = []
self.__groups[group].append(item)
self.__groups[group].sort()
def doc_item(self, key):
"""
Return the DocumentationItem describing the key
"""
try:
return self.__keys[key]
except KeyError:
return None
def doc_keys(self):
"""
Return the documented KEYS (names)
"""
return self.__keys.keys()
def groups(self):
"""
Return the names of available groups
"""
return self.__groups.keys()
def group_content(self, group_name):
"""
Return a list of keys/names that are in a specific
group, or the empty list
"""
try:
return self.__groups[group_name]
except KeyError:
return []
def parse_cmdline(args):
"""
Parse the command line and return the result as an n-tuple
"""
parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
usage = """%prog [options]
Create a set of HTML pages (documentation) for a bitbake.conf file.
"""
# Add the needed options
parser.add_option( "-c", "--config", help = "Use the specified configuration file as source",
action = "store", dest = "config", default = os.path.join("conf", "documentation.conf") )
parser.add_option( "-o", "--output", help = "Output directory for html files",
action = "store", dest = "output", default = "html/" )
parser.add_option( "-D", "--debug", help = "Increase the debug level",
action = "count", dest = "debug", default = 0 )
parser.add_option( "-v", "--verbose", help = "output more chit-char to the terminal",
action = "store_true", dest = "verbose", default = False )
options, args = parser.parse_args( sys.argv )
bb.msg.init_msgconfig(options.verbose, options.debug)
return options.config, options.output
def main():
"""
The main Method
"""
(config_file, output_dir) = parse_cmdline( sys.argv )
# right, let us load the file now
try:
documentation = bb.parse.handle( config_file, bb.data.init() )
except IOError:
bb.fatal( "Unable to open %s" % config_file )
except bb.parse.ParseError:
bb.fatal( "Unable to parse %s" % config_file )
if isinstance(documentation, dict):
documentation = documentation[""]
# Assuming we have the file loaded now, we will initialize the 'tree'
doc = Documentation()
# defined states
state_begin = 0
state_see = 1
state_group = 2
for key in bb.data.keys(documentation):
data = documentation.getVarFlag(key, "doc")
if not data:
continue
# The Documentation now starts
doc_ins = DocumentationItem()
doc_ins.setName(key)
tokens = data.split(' ')
state = state_begin
string= ""
for token in tokens:
token = token.strip(',')
if not state == state_see and token == "@see":
state = state_see
continue
elif not state == state_group and token == "@group":
state = state_group
continue
if state == state_begin:
string += " %s" % token
elif state == state_see:
doc_ins.addRelation(token)
elif state == state_group:
doc_ins.addGroup(token)
# set the description
doc_ins.setDescription(string)
doc.insert_doc_item(doc_ins)
# let us create the HTML now
bb.utils.mkdirhier(output_dir)
os.chdir(output_dir)
# Let us create the sites now. We do it in the following order
# Start with the index.html. It will point to sites explaining all
# keys and groups
html_slave = HTMLFormatter()
f = file('style.css', 'w')
print >> f, html_slave.createCSS()
f = file('index.html', 'w')
print >> f, html_slave.createIndex()
f = file('all_groups.html', 'w')
print >> f, html_slave.createGroupsSite(doc)
f = file('all_keys.html', 'w')
print >> f, html_slave.createKeysSite(doc)
# now for each group create the site
for group in doc.groups():
f = file('group%s.html' % group, 'w')
print >> f, html_slave.createGroupSite(group, doc.group_content(group))
# now for the keys
for key in doc.doc_keys():
f = file('key%s.html' % doc.doc_item(key).name(), 'w')
print >> f, html_slave.createKeySite(doc.doc_item(key))
if __name__ == "__main__":
main()
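
For reference, the token loop above consumes "doc" variable flags whose free
text can carry @see and @group markers. A documentation.conf entry of roughly
the following shape would be picked up (the variable name and wording are
purely illustrative):

    CACHE[doc] = "The directory holding the parser cache @group general @see BBPATH"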


@@ -1,173 +0,0 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""git-make-shallow: make the current git repository shallow
Remove the history of the specified revisions, then optionally filter the
available refs to those specified.
"""
import argparse
import collections
import errno
import itertools
import os
import subprocess
import sys
import warnings
warnings.simplefilter("default")
version = 1.0
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
args = process_args()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(['git', 'fsck', '--unreachable'])
def process_args():
# TODO: add argument to automatically keep local-only refs, since they
# can't be easily restored with a git fetch.
parser = argparse.ArgumentParser(description='Remove the history of the specified revisions, then optionally filter the available refs to those specified.')
parser.add_argument('--ref', '-r', metavar='REF', action='append', dest='refs', help='remove all but the specified refs (cumulative)')
parser.add_argument('--shrink', '-s', action='store_true', help='shrink the git repository by repacking and pruning')
parser.add_argument('revisions', metavar='REVISION', nargs='+', help='a git revision/commit')
if len(sys.argv) < 2:
parser.print_help()
sys.exit(2)
args = parser.parse_args()
if args.refs:
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
def check_output(cmd, input=None):
return subprocess.check_output(cmd, universal_newlines=True, input=input)
def make_shallow(shallow_file, revisions, refs):
"""Remove the history of the specified revisions."""
for rev in follow_history_intersections(revisions, refs):
print("Processing %s" % rev)
with open(shallow_file, 'a') as f:
f.write(rev + '\n')
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
refs = [r[0] for r in ref_split]
return refs
def iter_extend(iterable, length, obj=None):
"""Ensure that iterable is the specified length by extending with obj."""
return itertools.islice(itertools.chain(iterable, itertools.repeat(obj)), length)
def filter_refs(refs):
"""Remove all but the specified refs from the git repository."""
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
def follow_history_intersections(revisions, refs):
"""Determine all the points where the history of the specified revisions intersects the specified refs."""
queue = collections.deque(revisions)
seen = set()
for rev in iter_except(queue.popleft, IndexError):
if rev in seen:
continue
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
if not parents:
continue
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
queue.append(merge_base)
def iter_except(func, exception, start=None):
"""Yield a function repeatedly until it raises an exception."""
try:
if start is not None:
yield start()
while True:
yield func()
except exception:
pass
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':
main()
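
As a usage sketch (assuming the script is available on PATH as
git-make-shallow; the revision and ref below are illustrative), the current
repository can be trimmed to recent history while keeping only the master
branch:

    $ git make-shallow --shrink --ref refs/heads/master HEAD~20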


@@ -1,324 +0,0 @@
#!/bin/echo ERROR: This script needs to be sourced. Please run as .
# toaster - shell script to start Toaster
# Copyright (C) 2013-2015 Intel Corp.
#
# SPDX-License-Identifier: GPL-2.0-or-later
#
HELP="
Usage 1: source toaster start|stop [webport=<address:port>] [noweb] [nobuild] [toasterdir]
Optional arguments:
[nobuild] Set up the environment for capturing builds with toaster but disable managed builds
[noweb] Set up the environment for capturing builds with toaster but don't start the web server
[webport] Set the development server address and port (default: localhost:8000)
[toasterdir] Set the absolute path to be used as TOASTER_DIR (default: BUILDDIR/../)
Usage 2: source toaster manage [createsuperuser|lsupdates|migrate|makemigrations|checksettings|collectstatic|...]
"
custom_extention()
{
custom_extension=$BBBASEDIR/lib/toaster/orm/fixtures/custom_toaster_append.sh
if [ -f $custom_extension ] ; then
$custom_extension $*
fi
}
databaseCheck()
{
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, halting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
export TEMPLATECONF
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; exiting\n"
return $retval
fi
return $retval
}
webserverKillAll()
{
local pidfile
if [ -f ${BUILDDIR}/.toastermain.pid ] ; then
custom_extention web_stop_postpend
else
custom_extention noweb_stop_postpend
fi
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM $pid 2>/dev/null
sleep 1
done
rm ${pidfile}
fi
done
}
webserverStartAll()
{
# do not start if toastermain points to a valid process
if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
fi
retval=0
# check the database
databaseCheck || return 1
echo "Starting webserver..."
$MANAGE runserver --noreload "$ADDR_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
custom_extention web_start_postpend $ADDR_PORT
fi
return $retval
}
INSTOPSYSTEM=0
# define the stop command
stop_system()
{
# prevent reentry
if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
INSTOPSYSTEM=1
webserverKillAll
# unset exported variables
unset TOASTER_DIR
unset BITBAKE_UI
unset BBBASEDIR
trap - SIGHUP
#trap - SIGCHLD
INSTOPSYSTEM=0
}
verify_prereq() {
# Verify Django version
reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
# expand version parts to 2 digits to support 1.10.x > 1.8
# (note: helper functions are hard to insert in-line)
exp=$exp'import sys,django;'
exp=$exp'version=["%02d" % int(n) for n in django.get_version().split(".")];'
exp=$exp'vmin=["%02d" % int(n) for n in "\2".split(".")];'
exp=$exp'vmax=["%02d" % int(n) for n in "\4".split(".")];'
exp=$exp'sys.exit(not (version \1 vmin and version \3 vmax))'
exp=$exp'/p'
if ! sed -n "$exp" $reqfile | python3 - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
echo "Please install with pip3 install -r $reqfile"
return 2
fi
return 0
}
# read command line parameters
if [ -n "$BASH_SOURCE" ] ; then
TOASTER=${BASH_SOURCE}
elif [ -n "$ZSH_NAME" ] ; then
TOASTER=${(%):-%x}
else
TOASTER=$0
fi
export BBBASEDIR=`dirname $TOASTER`/..
MANAGE="python3 $BBBASEDIR/lib/toaster/manage.py"
if [ -z "$OE_ROOT" ]; then
OE_ROOT=`dirname $TOASTER`/../..
fi
# this is the configuration file we are using for toaster
# we are using the same logic that oe-setup-builddir uses
# (based on TEMPLATECONF and .templateconf) to determine
# which toasterconf.json to use.
# note: There are a number of relative path assumptions
# in the local layers that currently make using an arbitrary
# toasterconf.json difficult.
. $OE_ROOT/.templateconf
if [ -n "$TEMPLATECONF" ]; then
if [ ! -d "$TEMPLATECONF" ]; then
# Allow TEMPLATECONF=meta-xyz/conf as a shortcut
if [ -d "$OE_ROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OE_ROOT/$TEMPLATECONF"
fi
fi
fi
unset OE_ROOT
WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
TOASTERDIR=`dirname $BUILDDIR`
unset CMD
for param in $*; do
case $param in
noweb )
WEBSERVER=0
;;
nobuild )
TOASTER_BUILDSERVER=0
;;
start )
CMD=$param
;;
stop )
CMD=$param
;;
webport=*)
ADDR_PORT="${param#*=}"
# Split the addr:port string
ADDR=`echo $ADDR_PORT | cut -f 1 -d ':'`
PORT=`echo $ADDR_PORT | cut -f 2 -d ':'`
# If only a port has been specified then set the address to localhost.
if [ $ADDR = $PORT ] ; then
ADDR_PORT="localhost:$PORT"
fi
;;
toasterdir=*)
TOASTERDIR="${param#*=}"
;;
manage )
CMD=$param
manage_cmd=""
;;
--help)
echo "$HELP"
return 0
;;
*)
if [ "manage" == "$CMD" ] ; then
manage_cmd="$manage_cmd $param"
else
echo "$HELP"
exit 1
fi
;;
esac
done
if [ `basename \"$0\"` = `basename \"${TOASTER}\"` ]; then
echo "Error: This script needs to be sourced. Please run as . $TOASTER"
return 1
fi
verify_prereq || return 1
# We make sure we're running in the current shell and in a good environment
if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
echo "Error: Build environment is not setup or bitbake is not in path." 1>&2
return 2
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
if [ -n "$BBSERVER" ]; then
echo " Toaster is already running. Exiting..."
return 1
fi
elif [ "$CMD" = "" ]; then
echo "No command specified"
echo "$HELP"
return 1
fi
echo "The system will $CMD."
# Execute the commands
custom_extention toaster_prepend $CMD $ADDR_PORT
case $CMD in
start )
# check if addr:port is not in use
if [ "$CMD" == 'start' ]; then
if [ $WEBSERVER -gt 0 ]; then
$MANAGE checksocket "$ADDR_PORT" || return 1
fi
fi
# Create configuration file
conf=${BUILDDIR}/conf/local.conf
line='INHERIT+="toaster buildhistory"'
grep -q "$line" $conf || echo $line >> $conf
if [ $WEBSERVER -eq 0 ] ; then
# Do not update the database for "noweb" unless
# it does not yet exist
if [ ! -f "$TOASTER_DIR/toaster.sqlite" ] ; then
if ! databaseCheck; then
echo "Failed ${CMD}."
return 4
fi
fi
custom_extention noweb_start_postpend $ADDR_PORT
fi
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
$MANAGE runbuilds \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
else
echo "Toaster build server not started."
fi
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
custom_extention toaster_postpend $CMD $ADDR_PORT
return 0
;;
stop )
stop_system
echo "Successful ${CMD}."
;;
manage )
cd $BBBASEDIR/lib/toaster
$MANAGE $manage_cmd
;;
esac
custom_extention toaster_postpend $CMD $ADDR_PORT


@@ -1,115 +0,0 @@
#!/usr/bin/env python3
#
# Copyright (C) 2014 Alex Damian
#
# SPDX-License-Identifier: GPL-2.0-only
#
# This file re-uses code spread throughout other Bitbake source files.
# As such, all other copyrights belong to their own right holders.
#
"""
This command takes a filename as a single parameter. The filename is read
as a build eventlog, and the ToasterUI is used to process events in the file
and log data in the database
"""
import os
import sys
import json
import pickle
import codecs
import warnings
warnings.simplefilter("default")
from collections import namedtuple
# mangle syspath to allow easy import of modules
from os.path import join, dirname, abspath
sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
import bb.cooker
from bb.ui import toasterui
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = json.loads(eventfile.readline().strip())['allvariables']
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = EventPlayer(eventfile, variables)
return toasterui.main(player, player, params)
# run toaster ui on our mock bitbake class
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <event file>" % os.path.basename(sys.argv[0]))
sys.exit(1)
sys.exit(main(sys.argv))


@@ -1,13 +0,0 @@
{
"version": 1,
"loggers": {
"BitBake.SigGen.HashEquiv": {
"level": "VERBOSE",
"handlers": ["BitBake.verbconsole"]
},
"BitBake.RunQueue.HashEquiv": {
"level": "VERBOSE",
"handlers": ["BitBake.verbconsole"]
}
}
}


@@ -1,89 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2020 Joshua Watt <JPEWhacker@gmail.com>
#
# SPDX-License-Identifier: MIT
import argparse
import os
import random
import shutil
import signal
import subprocess
import sys
import time
def try_unlink(path):
try:
os.unlink(path)
except:
pass
def main():
def cleanup():
shutil.rmtree("tmp/cache", ignore_errors=True)
try_unlink("bitbake-cookerdaemon.log")
try_unlink("bitbake.sock")
try_unlink("bitbake.lock")
parser = argparse.ArgumentParser(
description="Bitbake parser torture test",
epilog="""
A torture test for bitbake's parser. Repeatedly interrupts parsing until
bitbake decides to deadlock.
""",
)
args = parser.parse_args()
if not "BUILDDIR" in os.environ:
print(
"'BUILDDIR' not found in the environment. Did you initialize the build environment?"
)
return 1
os.chdir(os.environ["BUILDDIR"])
run_num = 0
while True:
if run_num % 100 == 0:
print("Calibrating wait time...")
cleanup()
start_time = time.monotonic()
r = subprocess.run(["bitbake", "-p"])
max_wait_time = time.monotonic() - start_time
if r.returncode != 0:
print("Calibration run exited with %d" % r.returncode)
return 1
print("Maximum wait time is %f seconds" % max_wait_time)
run_num += 1
wait_time = random.random() * max_wait_time
print("Run #%d" % run_num)
print("Will sleep for %f seconds" % wait_time)
cleanup()
with subprocess.Popen(["bitbake", "-p"]) as proc:
time.sleep(wait_time)
proc.send_signal(signal.SIGINT)
try:
proc.wait(45)
except subprocess.TimeoutExpired:
print("Run #%d: Waited too long. Possible deadlock!" % run_num)
proc.wait()
return 1
if proc.returncode == 0:
print("Exited successfully. Timeout too long?")
else:
print("Exited with %d" % proc.returncode)
if __name__ == "__main__":
sys.exit(main())


@@ -1,83 +0,0 @@
#!/usr/bin/env python3
#
# Copyright (C) 2012, 2018 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Used for dumping the bb_cache.dat
#
import os
import sys
import argparse
# For importing bb.cache
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import pickle
class DumpCache(object):
def __init__(self):
parser = argparse.ArgumentParser(
description="bb_cache.dat's dumper",
epilog="Use %(prog)s --help to get help")
parser.add_argument("-r", "--recipe",
help="specify the recipe, default: all recipes", action="store")
parser.add_argument("-m", "--members",
help = "specify the member, use comma as separator for multiple ones, default: all members", action="store", default="")
parser.add_argument("-s", "--skip",
help = "skip skipped recipes", action="store_true")
parser.add_argument("cachefile",
help = "specify bb_cache.dat", nargs = 1, action="store", default="")
self.args = parser.parse_args()
def main(self):
with open(self.args.cachefile[0], "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while True:
try:
key = pickled.load()
val = pickled.load()
except Exception:
break
if isinstance(val, CoreRecipeInfo):
pn = val.pn
if self.args.recipe and self.args.recipe != pn:
continue
if self.args.skip and val.skipped:
continue
if self.args.members:
out = key
for member in self.args.members.split(','):
out += ": %s" % val.__dict__.get(member)
print("%s" % out)
else:
print("%s: %s" % (key, val.__dict__))
elif not self.args.recipe:
print("%s %s" % (key, val))
if __name__ == "__main__":
try:
dump = DumpCache()
ret = dump.main()
except Exception as esc:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
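
A usage sketch (assuming the script is saved as dump_cache.py; the recipe
name, members and cache path are illustrative):

    $ ./dump_cache.py -r busybox -m pn,pv tmp/cache/bb_cache.dat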


@@ -1,23 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/hashserv/Dockerfile .
#
FROM alpine:3.13.1
RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
COPY lib/bb /opt/bbhashserv/lib/bb/
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
COPY lib/ply /opt/bbhashserv/lib/ply/
COPY lib/bs4 /opt/bbhashserv/lib/bs4/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]


@@ -1,62 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
#
# Dockerfile to build a bitbake PR service container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/prserv/Dockerfile . -t prserv
#
# Running examples:
#
# 1. PR Service in RW mode, port 18585:
#
# docker run --detach --tty \
# --env PORT=18585 \
# --publish 18585:18585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
#
# docker run --detach --tty \
# --env DBMODE="--read-only" \
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
# --publish 8585:8585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
FROM alpine:3.14.4
RUN apk add --no-cache python3
COPY bin/bitbake-prserv /opt/bbprserv/bin/
COPY lib/prserv /opt/bbprserv/lib/prserv/
COPY lib/bb /opt/bbprserv/lib/bb/
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
COPY lib/ply /opt/bbprserv/lib/ply/
COPY lib/bs4 /opt/bbprserv/lib/bs4/
ENV PATH=$PATH:/opt/bbprserv/bin
RUN mkdir -p /var/lib/bbprserv
ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
LOGFILE=/var/lib/bbprserv/prserv.log \
LOGLEVEL=debug \
HOST=0.0.0.0 \
PORT=8585 \
DBMODE=""
ENTRYPOINT [ "/bin/sh", "-c", \
"bitbake-prserv \
--file=$DBFILE \
--log=$LOGFILE \
--loglevel=$LOGLEVEL \
--start \
--host=$HOST \
--port=$PORT \
$DBMODE \
&& tail -f $LOGFILE"]


@@ -1,18 +0,0 @@
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -6,15 +6,15 @@
"
" This sets up the syntax highlighting for BitBake files, like .bb, .bbclass and .inc
if &compatible || version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || version < 600
finish
endif
" .bb, .bbappend and .bbclass
au BufNewFile,BufRead *.{bb,bbappend,bbclass} set filetype=bitbake
" .bb and .bbclass
au BufNewFile,BufRead *.b{b,bclass} set filetype=bitbake
" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake
au BufNewFile,BufRead *.inc set filetype=bitbake
" .conf
au BufNewFile,BufRead *.conf


@@ -1,13 +1 @@
" Only do this when not done yet for this buffer
if exists("b:did_ftplugin")
finish
endif
" Don't load another plugin for this buffer
let b:did_ftplugin = 1
let b:undo_ftplugin = "setl cms< sts< sw< et< sua<"
setlocal commentstring=#\ %s
setlocal softtabstop=4 shiftwidth=4 expandtab
setlocal suffixesadd+=.bb,.bbclass
set sts=4 sw=4 et


@@ -1,343 +0,0 @@
" Vim indent file
" Language: BitBake
" Copyright: Copyright (C) 2019 Agilent Technologies, Inc.
" Maintainer: Chris Laplante <chris.laplante@agilent.com>
" License: You may redistribute this under the same terms as Vim itself
if exists("b:did_indent")
finish
endif
if exists("*BitbakeIndent")
finish
endif
runtime! indent/sh.vim
unlet b:did_indent
setlocal indentexpr=BitbakeIndent(v:lnum)
setlocal autoindent nolisp
function s:is_bb_python_func_def(lnum)
let stack = synstack(a:lnum, 1)
if len(stack) == 0
return 0
endif
let top = synIDattr(stack[0], "name")
echo top
return synIDattr(stack[0], "name") == "bbPyFuncDef"
endfunction
"""" begin modified from indent/python.vim, upstream commit 7a9bd7c1e0ce1baf5a02daf36eeae3638aa315c7
"""" This copied code is licensed the same as Vim itself.
setlocal indentkeys+=<:>,=elif,=except
let s:keepcpo= &cpo
set cpo&vim
let s:maxoff = 50 " maximum number of lines to look backwards for ()
function GetPythonIndent(lnum)
" If this line is explicitly joined: If the previous line was also joined,
" line it up with that one, otherwise add two 'shiftwidth'
if getline(a:lnum - 1) =~ '\\$'
if a:lnum > 1 && getline(a:lnum - 2) =~ '\\$'
return indent(a:lnum - 1)
endif
return indent(a:lnum - 1) + (exists("g:pyindent_continue") ? eval(g:pyindent_continue) : (shiftwidth() * 2))
endif
" If the start of the line is in a string don't change the indent.
if has('syntax_items')
\ && synIDattr(synID(a:lnum, 1, 1), "name") =~ "String$"
return -1
endif
" Search backwards for the previous non-empty line.
let plnum = prevnonblank(v:lnum - 1)
if plnum == 0
" This is the first non-empty line, use zero indent.
return 0
endif
call cursor(plnum, 1)
" Identing inside parentheses can be very slow, regardless of the searchpair()
" timeout, so let the user disable this feature if he doesn't need it
let disable_parentheses_indenting = get(g:, "pyindent_disable_parentheses_indenting", 0)
if disable_parentheses_indenting == 1
let plindent = indent(plnum)
let plnumstart = plnum
else
" searchpair() can be slow sometimes, limit the time to 150 msec or what is
" put in g:pyindent_searchpair_timeout
let searchpair_stopline = 0
let searchpair_timeout = get(g:, 'pyindent_searchpair_timeout', 150)
" If the previous line is inside parenthesis, use the indent of the starting
" line.
" Trick: use the non-existing "dummy" variable to break out of the loop when
" going too far back.
let parlnum = searchpair('(\|{\|\[', '', ')\|}\|\]', 'nbW',
\ "line('.') < " . (plnum - s:maxoff) . " ? dummy :"
\ . " synIDattr(synID(line('.'), col('.'), 1), 'name')"
\ . " =~ '\\(Comment\\|Todo\\|String\\)$'",
\ searchpair_stopline, searchpair_timeout)
if parlnum > 0
" We may have found the opening brace of a BitBake Python task, e.g. 'python do_task {'
" If so, ignore it here - it will be handled later.
if s:is_bb_python_func_def(parlnum)
let parlnum = 0
let plindent = indent(plnum)
let plnumstart = plnum
else
let plindent = indent(parlnum)
let plnumstart = parlnum
endif
else
let plindent = indent(plnum)
let plnumstart = plnum
endif
" When inside parenthesis: If at the first line below the parenthesis add
" two 'shiftwidth', otherwise same as previous line.
" i = (a
" + b
" + c)
call cursor(a:lnum, 1)
let p = searchpair('(\|{\|\[', '', ')\|}\|\]', 'bW',
\ "line('.') < " . (a:lnum - s:maxoff) . " ? dummy :"
\ . " synIDattr(synID(line('.'), col('.'), 1), 'name')"
\ . " =~ '\\(Comment\\|Todo\\|String\\)$'",
\ searchpair_stopline, searchpair_timeout)
if p > 0
if s:is_bb_python_func_def(p)
" Handle first non-empty line inside a BB Python task
if p == plnum
return shiftwidth()
endif
" Handle the user actually trying to close a BitBake Python task
let line = getline(a:lnum)
if line =~ '^\s*}'
return -2
endif
" Otherwise ignore the brace
let p = 0
else
if p == plnum
" When the start is inside parenthesis, only indent one 'shiftwidth'.
let pp = searchpair('(\|{\|\[', '', ')\|}\|\]', 'bW',
\ "line('.') < " . (a:lnum - s:maxoff) . " ? dummy :"
\ . " synIDattr(synID(line('.'), col('.'), 1), 'name')"
\ . " =~ '\\(Comment\\|Todo\\|String\\)$'",
\ searchpair_stopline, searchpair_timeout)
if pp > 0
return indent(plnum) + (exists("g:pyindent_nested_paren") ? eval(g:pyindent_nested_paren) : shiftwidth())
endif
return indent(plnum) + (exists("g:pyindent_open_paren") ? eval(g:pyindent_open_paren) : (shiftwidth() * 2))
endif
if plnumstart == p
return indent(plnum)
endif
return plindent
endif
endif
endif
" Get the line and remove a trailing comment.
" Use syntax highlighting attributes when possible.
let pline = getline(plnum)
let pline_len = strlen(pline)
if has('syntax_items')
" If the last character in the line is a comment, do a binary search for
" the start of the comment. synID() is slow, a linear search would take
" too long on a long line.
if synIDattr(synID(plnum, pline_len, 1), "name") =~ "\\(Comment\\|Todo\\)$"
let min = 1
let max = pline_len
while min < max
let col = (min + max) / 2
if synIDattr(synID(plnum, col, 1), "name") =~ "\\(Comment\\|Todo\\)$"
let max = col
else
let min = col + 1
endif
endwhile
let pline = strpart(pline, 0, min - 1)
endif
else
let col = 0
while col < pline_len
if pline[col] == '#'
let pline = strpart(pline, 0, col)
break
endif
let col = col + 1
endwhile
endif
" If the previous line ended with a colon, indent this line
if pline =~ ':\s*$'
return plindent + shiftwidth()
endif
" If the previous line was a stop-execution statement...
" TODO: utilize this logic to deindent when ending a bbPyDefRegion
if getline(plnum) =~ '^\s*\(break\|continue\|raise\|return\|pass\|bb\.fatal\)\>'
" See if the user has already dedented
if indent(a:lnum) > indent(plnum) - shiftwidth()
" If not, recommend one dedent
return indent(plnum) - shiftwidth()
endif
" Otherwise, trust the user
return -1
endif
" If the current line begins with a keyword that lines up with "try"
if getline(a:lnum) =~ '^\s*\(except\|finally\)\>'
let lnum = a:lnum - 1
while lnum >= 1
if getline(lnum) =~ '^\s*\(try\|except\)\>'
let ind = indent(lnum)
if ind >= indent(a:lnum)
return -1 " indent is already less than this
endif
return ind " line up with previous try or except
endif
let lnum = lnum - 1
endwhile
return -1 " no matching "try"!
endif
" If the current line begins with a header keyword, dedent
if getline(a:lnum) =~ '^\s*\(elif\|else\)\>'
" Unless the previous line was a one-liner
if getline(plnumstart) =~ '^\s*\(for\|if\|try\)\>'
return plindent
endif
" Or the user has already dedented
if indent(a:lnum) <= plindent - shiftwidth()
return -1
endif
return plindent - shiftwidth()
endif
" When after a () construct we probably want to go back to the start line.
" a = (b
" + c)
" here
if parlnum > 0
return plindent
endif
return -1
endfunction
let &cpo = s:keepcpo
unlet s:keepcpo
""" end of stuff from indent/python.vim
let b:did_indent = 1
setlocal indentkeys+=0\"
function BitbakeIndent(lnum)
if !has('syntax_items')
return -1
endif
let stack = synstack(a:lnum, 1)
if len(stack) == 0
return -1
endif
let name = synIDattr(stack[0], "name")
" TODO: support different styles of indentation for assignments. For now,
" we only support like this:
" VAR = " \
" value1 \
" value2 \
" "
"
" i.e. each value indented by shiftwidth(), with the final quote " completely unindented.
if name == "bbVarValue"
" Quote handling is tricky. kernel.bbclass has this line for instance:
" EXTRA_OEMAKE = " HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" " HOSTCPP="${BUILD_CPP}""
" Instead of trying to handle crazy cases like that, just assume that a
" double-quote on a line by itself (following an assignment) means the
" user is closing the assignment, and de-dent.
if getline(a:lnum) =~ '^\s*"$'
return 0
endif
let prevstack = synstack(a:lnum - 1, 1)
if len(prevstack) == 0
return -1
endif
let prevname = synIDattr(prevstack[0], "name")
" Only indent if there was actually a continuation character on
" the previous line, to avoid misleading indentation.
let prevlinelastchar = synIDattr(synID(a:lnum - 1, col([a:lnum - 1, "$"]) - 1, 1), "name")
let prev_continued = prevlinelastchar == "bbContinue"
" Did the previous line introduce an assignment?
if index(["bbVarDef", "bbVarFlagDef"], prevname) != -1
if prev_continued
return shiftwidth()
endif
endif
if !prev_continued
return 0
endif
" Autoindent can take it from here
return -1
endif
if index(["bbPyDefRegion", "bbPyFuncRegion"], name) != -1
let ret = GetPythonIndent(a:lnum)
" Should normally always be indented by at least one shiftwidth; but allow
" return of -1 (defer to autoindent) or -2 (force indent to 0)
if ret == 0
return shiftwidth()
elseif ret == -2
return 0
endif
return ret
endif
" TODO: GetShIndent doesn't detect tasks prepended with 'fakeroot'
" Need to submit a patch upstream to Vim to provide an extension point.
" Unlike the Python indenter, the Sh indenter is way too large to copy and
" modify here.
if name == "bbShFuncRegion"
return GetShIndent()
endif
" TODO:
" + heuristics for de-denting out of a bbPyDefRegion? e.g. when the user
" types an obvious BB keyword like addhandler or addtask, or starts
" writing a shell task. Maybe too hard to implement...
return -1
endfunction

bitbake/contrib/vim/plugin/newbb.vim Normal file → Executable file

@@ -10,22 +10,22 @@
"
" Will try to use git to find the user name and email
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || v:version < 600
finish
endif
fun! <SID>GetUserName()
let l:user_name = system("git config --get user.name")
let l:user_name = system("git-config --get user.name")
if v:shell_error
return "Unknown User"
return "Unknow User"
else
return substitute(l:user_name, "\n", "", "")
endfun
fun! <SID>GetUserEmail()
let l:user_email = system("git config --get user.email")
let l:user_email = system("git-config --get user.email")
if v:shell_error
return "unknown@user.org"
return "unknow@user.org"
else
return substitute(l:user_email, "\n", "", "")
endfun
@@ -41,10 +41,6 @@ fun! BBHeader()
endfun
fun! NewBBTemplate()
if line2byte(line('$') + 1) != -1
return
endif
let l:paste = &paste
set nopaste
@@ -52,17 +48,18 @@ fun! NewBBTemplate()
call BBHeader()
" New the bb template
put ='SUMMARY = \"\"'
put ='DESCRIPTION = \"\"'
put ='HOMEPAGE = \"\"'
put ='LICENSE = \"\"'
put ='SECTION = \"\"'
put ='DEPENDS = \"\"'
put ='PR = \"r0\"'
put =''
put ='SRC_URI = \"\"'
" Go to the first place to edit
0
/^SUMMARY =/
/^DESCRIPTION =/
exec "normal 2f\""
if paste == 1
@@ -80,7 +77,7 @@ if v:progname =~ "vimdiff"
endif
augroup NewBB
au BufNewFile,BufReadPost *.bb
au BufNewFile *.bb
\ if g:bb_create_on_empty |
\ call NewBBTemplate() |
\ endif


@@ -1,46 +0,0 @@
" Vim plugin file
" Purpose: Create a template for new bbappend file
" Author: Joshua Watt <JPEWhacker@gmail.com>
" Copyright: Copyright (C) 2017 Joshua Watt <JPEWhacker@gmail.com>
"
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
finish
endif
fun! NewBBAppendTemplate()
if line2byte(line('$') + 1) != -1
return
endif
let l:paste = &paste
set nopaste
" New bbappend template
0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
2
if paste == 1
set paste
endif
endfun
if !exists("g:bb_create_on_empty")
let g:bb_create_on_empty = 1
endif
" disable in case of vimdiff
if v:progname =~ "vimdiff"
let g:bb_create_on_empty = 0
endif
augroup NewBBAppend
au BufNewFile,BufReadPost *.bbappend
\ if g:bb_create_on_empty |
\ call NewBBAppendTemplate() |
\ endif
augroup END


@@ -12,7 +12,7 @@
"
" It's an entirely new type, just has specific syntax in shell and python code
if &compatible || v:version < 600 || exists("b:loaded_bitbake_plugin")
if &compatible || v:version < 600
finish
endif
if exists("b:current_syntax")
@@ -44,22 +44,22 @@ syn match bbArrayBrackets "[\[\]]" contained
" BitBake strings
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=+"+ skip=+\\$+ end=+"+ contained contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+ contained contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+"+ skip=+\\$+ excludenl end=+"+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ excludenl end=+'+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
" Vars definition
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
syn region bbVarPyValue start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python
" Vars metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.+]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(:=\|=\|.=\|=.|+=\|=+\|?=\)\@=" contained contains=bbIdentifier nextgroup=bbVarEq
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
" Includes and requires
syn keyword bbInclude inherit include require contained
@@ -67,17 +67,15 @@ syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
syn keyword bbStatement addtask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementLine "^\(addtask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
syn keyword bbOverrideOperator append prepend remove contained
syn match bbFunction "\h[0-9A-Za-z_-]*" display contained contains=bbOEFunctions
" BitBake shell metadata
syn include @shell syntax/sh.vim
@@ -85,16 +83,13 @@ if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
" Python value inside shell functions
syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([0-9A-Za-z_-]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
syn match bbPyFuncDef "^\(python\s\+\)\([0-9A-Za-z_-]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPyFlag,bbFunction,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
" BitBake 'def'd python functions
syn keyword bbPyDef def contained
@@ -124,6 +119,5 @@ hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
hi def link bbOverrideOperator Operator
let b:current_syntax = "bb"


@@ -1 +0,0 @@
_build/


@@ -1,35 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -W --keep-going -j auto
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final
ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile clean publish
publish: Makefile html singlehtml
rm -rf $(BUILDDIR)/$(DESTDIR)/
mkdir -p $(BUILDDIR)/$(DESTDIR)/
cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html
clean:
@rm -rf $(BUILDDIR)
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


@@ -1,55 +0,0 @@
Documentation
=============
This is the directory that contains the BitBake documentation.
Manual Organization
===================
Folders exist for individual manuals as follows:
* bitbake-user-manual --- The BitBake User Manual
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
go to https://www.openembedded.org/wiki/Documentation.
Sphinx
======
The BitBake documentation was migrated from the original DocBook
format to Sphinx based documentation for the Yocto Project 3.2
release.
Additional information related to the Sphinx migration, and guidelines
for developers willing to contribute to the BitBake documentation can
be found in the Yocto Project Documentation README file:
https://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/tree/documentation/README
How to build the Yocto Project documentation
============================================
Sphinx is written in Python. While it might work with Python2, for
obvious reasons, we will only support building the BitBake
documentation with Python3.
Sphinx might be available in your Linux distro's package repositories,
however it is not recommended to use distro packages, as they might be
old versions, especially if you are using an LTS version of your
distro. The recommended method to install Sphinx and all required
dependencies is to use the Python Package Index (pip).
To install all required packages run:
$ pip3 install sphinx sphinx_rtd_theme pyyaml
To build the documentation locally, run:
$ cd documentation
$ make -f Makefile.sphinx html
The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with
your browser.


@@ -1,14 +0,0 @@
{% extends "!breadcrumbs.html" %}
{% block breadcrumbs %}
<li>
<span class="doctype_switcher_placeholder">{{ doctype or 'single' }}</span>
<span class="version_switcher_placeholder">{{ release }}</span>
</li>
<li> &raquo;</li>
{% for doc in parents %}
<li><a href="{{ doc.link|e }}">{{ doc.title }}</a> &raquo;</li>
{% endfor %}
<li>{{ title }}</li>
{% endblock %}


@@ -1,7 +0,0 @@
{% extends "!layout.html" %}
{% block extrabody %}
<div id="outdated-warning" style="text-align: center; background-color: #FFBABA; color: #6A0E0E;">
</div>
{% endblock %}


@@ -1,724 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
=========
Execution
=========
|
The primary purpose for running BitBake is to produce some kind of
output such as a single installable package, a kernel, a software
development kit, or even a full, board-specific bootable Linux image,
complete with bootloader, kernel, and root filesystem. Of course, you
can execute the ``bitbake`` command with options that cause it to
execute single tasks, compile single recipe files, capture or clear
data, or simply return information about the execution environment.
This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
using the following command form::
$ bitbake target
For information on
the BitBake command and its options, see ":ref:`The BitBake Command
<bitbake-user-manual-command>`" section.
.. note::
Prior to executing BitBake, you should take advantage of available
parallel thread execution on your build host by setting the
:term:`BB_NUMBER_THREADS` variable in
your project's ``local.conf`` configuration file.
A common method to determine this value for your build host is to run
the following::
$ grep processor /proc/cpuinfo
This command returns
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
:term:`BB_NUMBER_THREADS`.
A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
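Once you have settled on a value, it is typically set in your project's
``local.conf`` (the value shown here is only an example)::
BB_NUMBER_THREADS = "8"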
Parsing the Base Configuration Metadata
=======================================
The first thing BitBake does is parse base configuration metadata. Base
configuration metadata consists of your project's ``bblayers.conf`` file
to determine what layers BitBake needs to recognize, all necessary
``layer.conf`` files (one from each layer), and ``bitbake.conf``. The
data itself is of various types:
- **Recipes:** Details about particular pieces of software.
- **Class Data:** An abstraction of common build information (e.g. how to
build a Linux kernel).
- **Configuration Data:** Machine-specific settings, policy decisions,
and so forth. Configuration data acts as the glue to bind everything
together.
The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
:term:`BBPATH` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
- :term:`BB_ENV_PASSTHROUGH`
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
The first four variables in this list relate to how BitBake treats shell
environment variables during task execution. By default, BitBake cleans
the environment variables and provides tight control over the shell
execution environment. However, through the use of these first four
variables, you can apply your control regarding the environment
variables allowed to be used by BitBake in the shell during execution of
tasks. See the
":ref:`bitbake-user-manual/bitbake-user-manual-metadata:Passing Information Into the Build Task Environment`"
section and the information about these variables in the variable
glossary for more information on how they work and on how to use them.
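For example, a hypothetical ``MY_PROXY`` variable (the name is illustrative)
can be allowed through from your shell into the BitBake environment as
follows::
$ export MY_PROXY="http://proxy.example.com:8080"
$ export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS MY_PROXY"
$ bitbake target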
The base configuration metadata is global and therefore affects all
recipes and tasks that are executed.
BitBake first searches the current working directory for an optional
``conf/bblayers.conf`` configuration file. This file is expected to
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
environment.
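A minimal ``bblayers.conf`` is essentially just that list (the paths are
illustrative)::
BBLAYERS ?= " \
    /home/user/layers/meta-core \
    /home/user/layers/meta-custom \
    "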
For each directory (layer) in this list, a ``conf/layer.conf`` file is
located and parsed with the :term:`LAYERDIR` variable
being set to the directory where the layer was found. The idea is these
files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.
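A typical ``conf/layer.conf`` does this by appending to those variables (a
simplified sketch; the collection name "mylayer" is illustrative)::
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer = "^${LAYERDIR}/"
BBFILE_PRIORITY_mylayer = "6"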
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
the user-specified :term:`BBPATH`. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.
Only variable definitions and include directives are allowed in BitBake
``.conf`` files. Some variables directly influence BitBake's behavior.
These variables might have been set from the environment depending on
the environment variables previously mentioned or set in the
configuration files. The ":ref:`bitbake-user-manual/bitbake-user-manual-ref-variables:Variables Glossary`"
chapter presents a full list of
variables.
After parsing configuration files, BitBake uses its rudimentary
inheritance mechanism, which is through class files, to inherit some
standard classes. BitBake parses a class when the inherit directive
responsible for getting that class is encountered.
The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
in :term:`BBPATH` in the same way as configuration files.
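For instance, a configuration file such as ``local.conf`` can pull in classes
globally (the class names are illustrative)::
INHERIT += "buildstats"
while a recipe inherits a class directly::
inherit autotools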
A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
command::
$ bitbake -e > mybb.log
Examining the top of the ``mybb.log``
shows you the many configuration files and class files used in your
execution environment.
.. note::
You need to be aware of how BitBake parses curly braces. If a recipe
uses a closing curly brace within the function and the character has
no leading spaces, BitBake produces a parsing error. If you use a
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.
Here is an example that causes BitBake to produce a parsing error::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
###### The following "}" at the start of the line causes a parsing error ######
}
EOF
}
Writing the recipe this way avoids the error::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
###### The following "}" with a leading space at the start of the line avoids the error ######
}
EOF
}
Locating and Parsing Recipes
============================
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
apply. :term:`BBFILES` is a space-separated list of available files and
supports wildcards. An example would be::
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
BitBake parses each
recipe and append file located with :term:`BBFILES` and stores the values of
various variables into the datastore.
.. note::
Append files are applied in the order they are encountered in BBFILES.
For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
parses in order any append files found in :term:`BBFILES`.
One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
:term:`PV`::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
In this example, a recipe called "something_1.2.3.bb" would set
:term:`PN` to "something" and :term:`PV` to "1.2.3".
By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
values as well as dependency information about the tasks.
BitBake does not need all of this information. It only needs a small
subset of the information to make decisions about the recipe.
Consequently, BitBake caches the values in which it is interested and
does not store the rest of the information. Experience has shown it is
faster to re-parse the metadata than to try and write it out to the disk
and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
the recipe instead of reparsing it from scratch.
Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
want upstream. Here is an example::
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"
.. note::
The layers mechanism is now the preferred method of collecting code.
While the collections code remains, its main use is to set layer
priorities and to deal with overlap (conflicts) between layers.
.. _bb-bitbake-providers:
Providers
=========
Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
build the target. BitBake looks through the :term:`PROVIDES` list for each
of the recipes. A :term:`PROVIDES` list is the list of names by which the
recipe can be known. Each recipe's :term:`PROVIDES` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.
When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
found under an alternative name or names other than the implicit :term:`PN`
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
contained the following::
PROVIDES += "fullkeyboard"
The :term:`PROVIDES`
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
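As a hedged illustration of how such an alternative name might then be
consumed (the second recipe and the chosen provider here are hypothetical)::
# In another recipe, depend on the alternative name:
DEPENDS = "fullkeyboard"
# In a configuration file, if several recipes provide "fullkeyboard",
# select which one satisfies it:
PREFERRED_PROVIDER_fullkeyboard = "keyboard"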
.. _bb-bitbake-preferences:
Preferences
===========
The :term:`PROVIDES` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.
A common example in which a target has multiple providers is
"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file::
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
The default :term:`PREFERRED_PROVIDER` is the provider
with the same name as the target. BitBake iterates through each target
it needs to build and resolves them and their dependencies using this
process.
Understanding how providers are chosen is made complicated by the fact
that multiple versions might exist for a given provider. BitBake
defaults to the highest version of a provider. Version comparisons are
made using the same method as Debian. You can use the
:term:`PREFERRED_VERSION` variable to
specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.
By default, files have a preference of "0". Setting
:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.
When there are multiple "versions" of a given recipe, BitBake defaults
to selecting the most recent version, unless otherwise specified. If the
recipe in question has a
:term:`DEFAULT_PREFERENCE` set lower than
the other recipes (default is 0), then it will not be selected. This
allows the person or persons maintaining the repository of recipe files
to specify their preference for the default selected version.
Additionally, the user can specify their preferred version.
If the first recipe is named ``a_1.1.bb``, then the
:term:`PN` variable will be set to "a", and the
:term:`PV` variable will be set to 1.1.
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
file that BitBake parses, you can change that preference::
PREFERRED_VERSION_a = "1.1"
.. note::
It is common for a recipe to provide two versions -- a stable,
numbered (and preferred) version, and a version that is automatically
checked out from a source code repository that is considered more
"bleeding edge" but can be selected only explicitly.
For example, in the OpenEmbedded codebase, there is a standard,
versioned recipe file for BusyBox, ``busybox_1.22.1.bb``, but there
is also a Git-based version, ``busybox_git.bb``, which explicitly
contains the line ::
DEFAULT_PREFERENCE = "-1"
to ensure that the
numbered, stable version is always preferred unless the developer
selects otherwise.
.. _bb-bitbake-dependencies:
Dependencies
============
Each target BitBake builds consists of multiple tasks such as ``fetch``,
``unpack``, ``patch``, ``configure``, and ``compile``. For best
performance on multi-core systems, BitBake considers each task as an
independent entity with its own set of dependencies.
Dependencies are defined through several variables. You can find
information about variables BitBake uses in the
:doc:`bitbake-user-manual-ref-variables` near the end of this manual. At a
basic level, it is sufficient to know that BitBake uses the
:term:`DEPENDS` and
:term:`RDEPENDS` variables when calculating
dependencies.
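As a minimal, hypothetical sketch, a recipe might declare build-time and
runtime dependencies as follows (the package names are placeholders, and
``:${PN}`` uses the current BitBake override syntax)::
DEPENDS = "libexample"
RDEPENDS:${PN} = "example-data"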
For more information on how BitBake handles dependencies, see the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:Dependencies`
section.
.. _ref-bitbake-tasklist:
The Task List
=============
Based on the generated list of providers and the dependency information,
BitBake can now calculate exactly what tasks it needs to run and in what
order it needs to run them. The
:ref:`bitbake-user-manual/bitbake-user-manual-execution:executing tasks`
section has more information on how BitBake chooses which task to
execute next.
The build now starts with BitBake forking off threads up to the limit
set in the :term:`BB_NUMBER_THREADS`
variable. BitBake continues to fork threads as long as there are tasks
ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.
It is worth noting that you can greatly speed up the build time by
properly setting the :term:`BB_NUMBER_THREADS` variable.
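For example, a configuration file might cap the number of concurrently
executing tasks as follows; the value shown is only an illustration, and a
common choice is the number of CPU cores on the build host::
BB_NUMBER_THREADS = "8"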
As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
runs, BitBake looks in the build directory within ``tmp/stamps`` and
does not rerun tasks that are already completed unless a timestamp is
found to be invalid. Currently, invalid timestamps are only considered
on a per recipe file basis. So, for example, if the configure stamp has
a timestamp greater than the compile timestamp for a given target, then
the compile task would rerun. Running the compile task again, however,
has no effect on other providers that depend on that target.
The exact format of the stamps is partly configurable. In modern
versions of BitBake, a hash is appended to the stamp so that if the
configuration changes, the stamp becomes invalid and the task is
automatically rerun. This hash, or signature, is governed by the
signature policy that is configured (see the
:ref:`bitbake-user-manual/bitbake-user-manual-execution:checksums (signatures)`
section for information). It is also
possible to append extra metadata to the stamp using the
``[stamp-extra-info]`` task flag. For example, OpenEmbedded uses this
flag to make some tasks machine-specific.
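As a hedged sketch of the flag syntax, where the task name is hypothetical
and ``MACHINE`` is simply an example of extra data to append to the stamp::
do_deploy[stamp-extra-info] = "${MACHINE}"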
.. note::
Some tasks are marked as "nostamp" tasks. No timestamp file is
created when these tasks are run. Consequently, "nostamp" tasks are
always rerun.
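For example, a task can be marked this way with the following flag setting,
where ``do_mytask`` is a hypothetical task name::
do_mytask[nostamp] = "1"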
For more information on tasks, see the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:tasks` section.
Executing Tasks
===============
Tasks can be either a shell task or a Python task. For shell tasks,
BitBake writes a shell script to
``${``\ :term:`T`\ ``}/run.do_taskname.pid`` and then
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
For Python tasks, BitBake executes the task internally and logs
information to the controlling terminal. Future versions of BitBake will
write the functions to files similar to the way shell tasks are handled.
Logging will be handled in a way similar to shell tasks as well.
The order in which BitBake runs the tasks is controlled by its task
scheduler. It is possible to configure the scheduler and define custom
implementations for specific use cases. For more information, see these
variables that control the behavior:
- :term:`BB_SCHEDULER`
- :term:`BB_SCHEDULERS`
It is possible to have functions run before and after a task's main
function. This is done using the ``[prefuncs]`` and ``[postfuncs]``
flags of the task, which list the functions to run.
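Here is a minimal sketch of that mechanism; the function names and
``bb.note()`` messages are purely illustrative::
python check_environment() {
    bb.note("Running before do_compile")
}

python report_result() {
    bb.note("Running after do_compile")
}

do_compile[prefuncs] += "check_environment"
do_compile[postfuncs] += "report_result"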
.. _checksums:
Checksums (Signatures)
======================
A checksum is a unique signature of a task's inputs. The signature of a
task can be used to determine if a task needs to be run. Because it is a
change in a task's inputs that triggers running the task, BitBake needs
to detect all the inputs to a given task. For shell tasks, this turns
out to be fairly easy because BitBake generates a "run" shell script for
each task and it is possible to create a checksum that gives you a good
idea of when the task's data changes.
To complicate the problem, some things should not be included in the
checksum. First, there is the actual specific build path of a given task
- the working directory. It does not matter if the working directory
changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_BASEHASH_IGNORE_VARS` variable
to define a list of variables that should never be included when
generating the signatures.
Another problem results from the "run" scripts containing functions that
might or might not get called. The incremental build solution contains
code that figures out dependencies between shell functions. This code is
used to prune the "run" scripts down to the minimum set, thereby
alleviating this problem and making the "run" scripts much more readable
as a bonus.
So far we have solutions for shell scripts. What about Python tasks? The
same approach applies even though these tasks are more difficult. The
process needs to figure out what variables a Python function accesses
and what functions it calls. Again, the incremental build solution
contains code that first figures out the variable and function
dependencies, and then creates a checksum for the data used as the input
to the task.
Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
to ignore a dependency by using a line like the following::
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
This example ensures that the
``PACKAGE_ARCHS`` variable does not depend on the value of ``MACHINE``,
even if it does reference it.
Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
following::
PACKAGE_ARCHS[vardeps] = "MACHINE"
This example explicitly
adds the ``MACHINE`` variable as a dependency for ``PACKAGE_ARCHS``.
Consider a case with in-line Python, for example, where BitBake is not
able to figure out dependencies. When running in debug mode (i.e. using
``-DDD``), BitBake produces output when it discovers something for which
it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs --- the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
effect is to generate a master checksum that combines the basehash and
the hashes of the task's dependencies.
At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes --- variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept::
BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX"
The previous example excludes the work directory, which is part of
``TMPDIR``.
The rules for deciding which hashes of dependent tasks to include
through dependency chains are more complex and are generally
accomplished with a Python function. The code in
``meta/lib/oe/sstatesig.py`` shows two examples of this and also
illustrates how you can insert your own policy into the system if so
desired. This file defines the two basic signature generators
OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
``bitbake.conf`` file::
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
version but adds the task hash to the stamp files. As a result, any
metadata change that alters the task hash automatically causes the
task to be run again. This removes the need to bump
:term:`PR` values, and changes to metadata automatically
ripple across the build.
It is also worth noting that the end result of these signature
generators is to make some dependency and hash information available to
the build. This information includes:
- ``BB_BASEHASH_task-``\ *taskname*: The base hashes for each task in the
recipe.
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
dependent task.
- :term:`BB_TASKHASH`: The hash of the currently running task.
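For instance, a recipe could log the signature of one of its own tasks by
reading :term:`BB_TASKHASH`; this is only a sketch, and the added task name
is hypothetical::
python do_show_taskhash() {
    bb.note("Signature of this task: %s" % d.getVar("BB_TASKHASH"))
}

addtask show_taskhash after do_configure before do_compile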
It is worth noting that BitBake's "-S" option lets you debug BitBake's
processing of signatures. The options passed to -S allow different
debugging modes to be used, either using BitBake's own debug functions
or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then run
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
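For example, using a hypothetical target name::
$ bitbake -S none mytarget
$ bitbake -S printdiff mytarget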
.. note::
It is likely that future versions of BitBake will provide other
signature handlers triggered through additional "-S" parameters.
You can find more information on checksum metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
Setscene
========
The setscene process enables BitBake to handle "pre-built" artifacts.
The ability to handle and reuse these artifacts allows BitBake the
luxury of not having to build something from scratch every time.
Instead, BitBake can use, when possible, existing build artifacts.
BitBake needs to have reliable data indicating whether or not an
artifact is compatible. Signatures, described in the previous section,
provide an ideal way of representing whether an artifact is compatible.
If a signature is the same, an object can be reused.
If an object can be reused, the problem then becomes how to replace a
given task or set of tasks with the pre-built artifact. BitBake solves
the problem with the "setscene" process.
When BitBake is asked to build a given target, before building anything,
it first asks whether cached information is available for any of the
targets it's building, or any of the intermediate targets. If cached
information is available, BitBake uses this information instead of
running the main tasks.
BitBake first calls the function defined by the
:term:`BB_HASHCHECK_FUNCTION` variable
with a list of tasks and corresponding hashes it wants to build. This
function is designed to be fast and returns a list of the tasks for
which it believes it can obtain artifacts.
Next, for each of the tasks that were returned as possibilities, BitBake
executes a setscene version of the task that the possible artifact
covers. Setscene versions of a task have the string "_setscene" appended
to the task name. So, for example, the task with the name ``xxx`` has a
setscene task named ``xxx_setscene``. The setscene version of the task
executes and provides the necessary artifacts returning either success
or failure.
As previously mentioned, an artifact can cover more than one task. For
example, it is pointless to obtain a compiler if you already have the
compiled binary. To handle this, BitBake calls the
:term:`BB_SETSCENE_DEPVALID` function for
each successful setscene task to know whether or not it needs to obtain
the dependencies of that task.
You can find more information on setscene metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section.
Logging
=======
In addition to the standard command line option that controls how verbose
builds are when executed, BitBake also supports user-defined
configuration of the `Python
logging <https://docs.python.org/3/library/logging.html>`__ facilities
through the :term:`BB_LOGCONFIG` variable. This
variable defines a JSON or YAML `logging
configuration <https://docs.python.org/3/library/logging.config.html>`__
that will be intelligently merged into the default configuration. The
logging configuration is merged using the following rules:
- The user defined configuration will completely replace the default
configuration if top level key ``bitbake_merge`` is set to the value
``False``. In this case, all other rules are ignored.
- The user configuration must have a top level ``version`` which must
match the value of the default configuration.
- Any keys defined in the ``handlers``, ``formatters``, or ``filters``
sections will be merged into the same section in the default
configuration, with user-specified keys replacing a default one if
there is a conflict. In practice, this means that if both the default
configuration and user configuration specify a handler named
``myhandler``, the user-defined one will replace the default. To
prevent the user from inadvertently replacing a default handler,
formatter, or filter, all of the default ones are named with a prefix
of "``BitBake.``"
- If a logger is defined by the user with the key ``bitbake_merge`` set
to ``False``, that logger will be completely replaced by user
configuration. In this case, no other rules will apply to that
logger.
- All user-defined ``filters`` and ``handlers`` properties for a given
logger will be merged with the corresponding properties from the
default logger. For example, if the user configuration adds a filter
called ``myFilter`` to the ``BitBake.SigGen`` logger, and the default
configuration adds a filter called ``BitBake.defaultFilter``, both
filters will be applied to the logger.
As an example, consider the following user logging configuration file
which logs all Hash Equivalence related messages of VERBOSE or higher to
a file called ``hashequiv.log`` ::
{
"version": 1,
"handlers": {
"autobuilderlog": {
"class": "logging.FileHandler",
"formatter": "logfileFormatter",
"level": "DEBUG",
"filename": "hashequiv.log",
"mode": "w"
}
},
"formatters": {
"logfileFormatter": {
"format": "%(name)s: %(levelname)s: %(message)s"
}
},
"loggers": {
"BitBake.SigGen.HashEquiv": {
"level": "VERBOSE",
"handlers": ["autobuilderlog"]
},
"BitBake.RunQueue.HashEquiv": {
"level": "VERBOSE",
"handlers": ["autobuilderlog"]
}
}
}
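Assuming the configuration above has been saved to a file (the path below is
a placeholder), :term:`BB_LOGCONFIG` can then be pointed at it, for example
from the environment, before starting the build::
$ export BB_LOGCONFIG="/path/to/hashequiv-logging.json"
$ bitbake mytarget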

.. SPDX-License-Identifier: CC-BY-2.5
=====================
File Download Support
=====================
|
BitBake's fetch module is a standalone piece of library code that deals
with the intricacies of downloading source code and files from remote
systems. Fetching source code is one of the cornerstones of building
software. As such, this module forms an important part of BitBake.
The current fetch module is called "fetch2" and refers to the fact that
it is the second major version of the API. The original version is
obsolete and has been removed from the codebase. Thus, in all cases,
"fetch" refers to "fetch2" in this manual.
The Download (Fetch)
====================
BitBake takes several steps when fetching source code or files. The
fetcher codebase deals with two distinct processes in order: obtaining
the files from somewhere (cached or otherwise) and then unpacking those
files into a specific location and perhaps in a specific way. Getting
and unpacking the files is often optionally followed by patching.
Patching, however, is not covered by this module.
The code to execute the first part of this process, a fetch, looks
something like the following::
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
This code sets up an instance of the fetch class. The instance uses a
space-separated list of URLs from the :term:`SRC_URI`
variable and then calls the ``download`` method to download the files.
The instantiation of the fetch class is usually followed by::
rootdir = d.getVar('WORKDIR')
fetcher.unpack(rootdir)
This code unpacks the downloaded files to the directory specified by ``WORKDIR``.
.. note::
For convenience, the naming in these examples matches the variables
used by OpenEmbedded. If you want to see the above code in action,
examine the OpenEmbedded class file ``base.bbclass``.
The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
fetcher, since those fetcher methods can be (and are) called with
different variable names. In OpenEmbedded for example, the shared state
(sstate) code uses the fetch module to fetch the sstate files.
When the ``download()`` method is called, BitBake tries to resolve the
URLs by looking for source files in a specific search order:
- *Pre-mirror Sites:* BitBake first uses pre-mirrors to try and find
source files. These locations are defined using the
:term:`PREMIRRORS` variable.
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g
from :term:`SRC_URI`).
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
locations as defined by the :term:`MIRRORS` variable.
For each URL passed to the fetcher, the fetcher calls the submodule that
handles that particular URL type. This behavior can be the source of
some confusion when you are providing URLs for the :term:`SRC_URI` variable.
Consider the following two URLs::
https://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
In the former case, the URL is passed to the ``wget`` fetcher, which does not
understand "git". Therefore, the latter case is the correct form since the Git
fetcher does know how to use HTTP as a transport.
Here are some examples that show commonly used mirror definitions::
PREMIRRORS ?= "\
bzr://.*/.\* http://somemirror.org/sources/ \
cvs://.*/.\* http://somemirror.org/sources/ \
git://.*/.\* http://somemirror.org/sources/ \
hg://.*/.\* http://somemirror.org/sources/ \
osc://.*/.\* http://somemirror.org/sources/ \
p4://.*/.\* http://somemirror.org/sources/ \
svn://.*/.\* http://somemirror.org/sources/"
MIRRORS =+ "\
ftp://.*/.\* http://somemirror.org/sources/ \
http://.*/.\* http://somemirror.org/sources/ \
https://.*/.\* http://somemirror.org/sources/"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
HTTP server as a tarball. This is what the ``git://`` mapping in the
previous example does.
Since network accesses are slow, BitBake maintains a cache of files
downloaded from the network. Any source files that are not local (i.e.
downloaded from the Internet) are placed into the download directory,
which is specified by the :term:`DL_DIR` variable.
File integrity is of key importance for reproducing builds. For
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
checksums to ensure the archives have been downloaded correctly. You can
specify these checksums by using the :term:`SRC_URI` variable with the
appropriate varflags as follows::
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
You can also specify the checksums as
parameters on the :term:`SRC_URI` as shown below::
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
If multiple URIs exist, you can specify the checksums either directly as
in the previous example, or you can name the URLs. The following syntax
shows how you name the URIs::
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
After a file has been downloaded and
has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
BitBake uses this stamp during subsequent builds to avoid downloading or
comparing a checksum for the file again.
.. note::
It is assumed that local storage is safe from data corruption. If
this were not the case, there would be bigger issues to worry about.
If :term:`BB_STRICT_CHECKSUM` is set, any
download without a checksum triggers an error message. The
:term:`BB_NO_NETWORK` variable can be used to
make any attempted network access a fatal error, which is useful for
checking that mirrors are complete as well as other things.
If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
be disabled. This variable defaults to ``1`` so SSL certificates are normally
checked.
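Here is a sketch of how these variables might appear in a configuration
file; each line is independent and simply illustrates the values discussed
above::
# Error out on any non-local download that has no checksum specified:
BB_STRICT_CHECKSUM = "1"
# Turn any attempted network access into a fatal error:
BB_NO_NETWORK = "1"
# Disable SSL certificate checking (the default is "1", i.e. enabled):
BB_CHECK_SSL_CERTS = "0"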
.. _bb-the-unpack:
The Unpack
==========
The unpack process usually immediately follows the download. For all
URLs except Git URLs, BitBake uses the common ``unpack`` method.
A number of parameters exist that you can specify within the URL to
govern the behavior of the unpack stage:
- *unpack:* Controls whether the URL components are unpacked. If set to
"1", which is the default, the components are unpacked. If set to
"0", the unpack stage leaves the file alone. This parameter is useful
when you want an archive to be copied in and not be unpacked.
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
- *striplevel:* Strip specified number of leading components (levels)
from file names on extraction
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
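Here are two hedged examples of these parameters in use; the URLs and
directory names are placeholders::
SRC_URI = "http://example.com/download/archive.tar.gz;unpack=0"
SRC_URI = "http://example.com/download/archive.tar.gz;subdir=external;striplevel=1"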
The unpack call automatically decompresses and extracts files with ".Z",
".z", ".gz", ".xz", ".zip", ".jar", ".ipk", ".rpm". ".srpm", ".deb" and
".bz2" extensions as well as various combinations of tarball extensions.
As mentioned, the Git fetcher has its own unpack method that is
optimized to work with Git trees. Basically, this method works by
cloning the tree into the final directory. The process is completed
using references so that there is only one central copy of the Git
metadata needed.
.. _bb-fetchers:
Fetchers
========
As mentioned earlier, the URL prefix determines which fetcher submodule
BitBake uses. Each submodule can support different URL parameters, which
are described in the following sections.
.. _local-file-fetcher:
Local file fetcher (``file://``)
--------------------------------
This submodule handles URLs that begin with ``file://``. The filename
you specify within the URL can be either an absolute or relative path to
a file. If the filename is relative, the contents of the
:term:`FILESPATH` variable is used in the same way
``PATH`` is used to find executables. If the file cannot be found, it is
assumed that it is available in :term:`DL_DIR` by the
time the ``download()`` method is called.
If you specify a directory, the entire directory is unpacked.
Here are a couple of example URLs, the first relative and the second
absolute::
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
.. _http-ftp-fetcher:
HTTP/FTP wget fetcher (``http://``, ``ftp://``, ``https://``)
-------------------------------------------------------------
This fetcher obtains files from web and FTP servers. Internally, the
fetcher uses the wget utility.
The executable and parameters used are specified by the
``FETCHCMD_wget`` variable, which defaults to sensible values. The
fetcher supports a parameter "downloadfilename" that allows the name of
the downloaded file to be specified. Specifying the name of the
downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
If a username and password are specified in the ``SRC_URI``, a Basic
Authorization header will be added to each request, including across redirects.
To instead limit the Authorization header to the first request, add
"redirectauth=0" to the list of parameters.
Some example URLs are as follows::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
.. note::
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
Such URLs should be modified by replacing semi-colons with '&'
characters::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
In most cases this should work. Treating semi-colons and '&' in
queries identically is recommended by the World Wide Web Consortium
(W3C). Note that due to the nature of the URL, you may have to
specify the name of the downloaded file as well::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
.. _cvs-fetcher:
CVS fetcher (``cvs://``)
-------------------------
This submodule handles checking out files from the CVS version control
system. You can configure it using a number of different variables:
- :term:`FETCHCMD_cvs <FETCHCMD>`: The name of the executable to use when running
the ``cvs`` command. This name is usually "cvs".
- :term:`SRCDATE`: The date to use when fetching the CVS source code. A
special value of "now" causes the checkout to be updated on every
build.
- :term:`CVSDIR`: Specifies where a temporary
checkout is saved. The location is often ``DL_DIR/cvs``.
- CVS_PROXY_HOST: The name to use as a "proxy=" parameter to the
``cvs`` command.
- CVS_PROXY_PORT: The port number to use as a "proxyport="
parameter to the ``cvs`` command.
As well as the standard username and password URL syntax, you can also
configure the fetcher with various URL parameters.
The supported parameters are as follows:
- *"method":* The protocol over which to communicate with the CVS
server. By default, this protocol is "pserver". If "method" is set to
"ext", BitBake examines the "rsh" parameter and sets ``CVS_RSH``. You
can use "dir" for local directories.
- *"module":* Specifies the module to check out. You must supply this
parameter.
- *"tag":* Describes which CVS TAG should be used for the checkout. By
default, the TAG is empty.
- *"date":* Specifies a date. If no "date" is specified, the
:term:`SRCDATE` of the configuration is used to
checkout a specific date. The special value of "now" causes the
checkout to be updated on every build.
- *"localdir":* Used to rename the module. Effectively, you are
renaming the output directory to which the module is unpacked. You
are forcing the module into a special directory relative to
:term:`CVSDIR`.
- *"rsh":* Used in conjunction with the "method" parameter.
- *"scmdata":* Causes the CVS metadata to be maintained in the tarball
the fetcher creates when set to "keep". The tarball is expanded into
the work directory. By default, the CVS metadata is removed.
- *"fullpath":* Controls whether the resulting checkout is at the
module level, which is the default, or is at deeper paths.
- *"norecurse":* Causes the fetcher to only checkout the specified
directory with no recurse into any subdirectories.
- *"port":* The port to which the CVS server connects.
Some example URLs are as follows::
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
.. _svn-fetcher:
Subversion (SVN) Fetcher (``svn://``)
-------------------------------------
This fetcher submodule fetches code from the Subversion source control
system. The executable used is specified by ``FETCHCMD_svn``, which
defaults to "svn". The fetcher's temporary working directory is set by
:term:`SVNDIR`, which is usually ``DL_DIR/svn``.
The supported parameters are as follows:
- *"module":* The name of the svn module to checkout. You must provide
this parameter. You can think of this parameter as the top-level
directory of the repository data you want.
- *"path_spec":* A specific directory in which to checkout the
specified svn module.
- *"protocol":* The protocol to use, which defaults to "svn". If
"protocol" is set to "svn+ssh", the "ssh" parameter is also used.
- *"rev":* The revision of the source code to checkout.
- *"scmdata":* Causes the ".svn" directories to be available during
compile-time when set to "keep". By default, these directories are
removed.
- *"ssh":* An optional parameter used when "protocol" is set to
"svn+ssh". You can use this parameter to specify the ssh program used
by svn.
- *"transportuser":* When required, sets the username for the
transport. By default, this parameter is empty. The transport
username is different than the username used in the main URL, which
is passed to the subversion command.
Following are three examples using svn::
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
.. _git-fetcher:
Git Fetcher (``git://``)
------------------------
This fetcher submodule fetches code from the Git source control system.
The fetcher works by creating a bare clone of the remote into
:term:`GITDIR`, which is usually ``DL_DIR/git2``. This
bare clone is then cloned into the work directory during the unpack
stage when a specific tree is checked out. This is done using alternates
and by reference to minimize the amount of duplicate data on the disk
and make the unpack process fast. The executable used can be set with
``FETCHCMD_git``.
This fetcher supports the following parameters:
- *"protocol":* The protocol used to fetch the files. The default is
"git" when a hostname is set. If a hostname is not set, the Git
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
.. note::
When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
from the one that is typically passed to ``git clone`` command and provided
by the Git server to fetch from. For example, the URL returned by GitLab
server for ``mesa`` when cloning over SSH is
``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
:term:`SRC_URI` is the following::
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
Note that the ``:`` character is changed to a ``/`` before the path to the project.
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
- *"rebaseable":* Indicates that the upstream Git repository can be
rebased. You should set this parameter to "1" if revisions can become
detached from branches. In this case, the source mirror tarball is
created per revision, which is less efficient. Rebasing the
upstream Git repository could cause the current revision to disappear
from the upstream repository. This option reminds the fetcher to
preserve the local cache carefully for future use. The default value
for this parameter is "0".
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
the recipe that refers to the commit that is valid for a tag instead
of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
- *"branch":* The branch(es) of the Git tree to clone. Unless
"nobranch" is set to "1", this is a mandatory parameter. The number of
branch parameters must match the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
- *"tag":* Specifies a tag to use for the checkout. To correctly
resolve tags, BitBake must access the network. For that reason, tags
are often not used. As far as Git is concerned, the "tag" parameter
behaves effectively the same as the "rev" parameter.
- *"subpath":* Limits the checkout to a specific subpath of the tree.
By default, the whole tree is checked out.
- *"destsuffix":* The name of the path in which to place the checkout.
By default, the path is ``git/``.
- *"usehead":* Enables local ``git://`` URLs to use the current branch
HEAD as the revision for use with ``AUTOREV``. The "usehead"
parameter implies no branch and only works when the transfer protocol
is ``file://``.
Here are some example URLs::
SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
.. note::
When using ``git`` as the fetcher of the main source code of your software,
``S`` should be set accordingly::
S = "${WORKDIR}/git"
.. note::
Specifying passwords directly in ``git://`` URLs is not supported.
There are several reasons: :term:`SRC_URI` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
.. _gitsm-fetcher:
Git Submodule Fetcher (``gitsm://``)
------------------------------------
This fetcher submodule inherits from the :ref:`Git
fetcher<bitbake-user-manual/bitbake-user-manual-fetching:git fetcher
(\`\`git://\`\`)>` and extends that fetcher's behavior by fetching a
repository's submodules. :term:`SRC_URI` is passed to the Git fetcher as
described in the :ref:`bitbake-user-manual/bitbake-user-manual-fetching:git
fetcher (\`\`git://\`\`)` section.
.. note::
You must clean a recipe when switching between '``git://``' and
'``gitsm://``' URLs.
The Git Submodules fetcher is not a complete fetcher implementation.
The fetcher has known issues where it does not use the normal source
mirroring infrastructure properly. Further, the submodule sources it
fetches are not visible to the licensing and source archiving
infrastructures.
.. _clearcase-fetcher:
ClearCase Fetcher (``ccrc://``)
-------------------------------
This fetcher submodule fetches code from a
`ClearCase <http://en.wikipedia.org/wiki/Rational_ClearCase>`__
repository.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` settings. Here is an example::
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
The fetcher uses the ``rcleartool`` or
``cleartool`` remote client, depending on which one is available.
Following are options for the :term:`SRC_URI` statement:
- *vob*: The name of the ClearCase VOB, which must include the leading
"/" character. This option is required.
- *module*: The module in the selected VOB, which must include the
leading "/" character.
.. note::
The module and vob options are combined to create the load rule in the
view config spec. As an example, consider the vob and module values from
the SRC_URI statement at the start of this section. Combining those values
results in the following::
load /example_vob/example_module
- *proto*: The protocol, which can be either ``http`` or ``https``.
By default, the fetcher creates a configuration specification. If you
want this specification written to an area other than the default, use
the ``CCASE_CUSTOM_CONFIG_SPEC`` variable in your recipe to define where
the specification is written.
.. note::
SRCREV loses its functionality if you specify this variable. However,
SRCREV is still used to label the archive after a fetch even though it
does not define what is fetched.
Here are a couple of other behaviors worth mentioning:
- When using ``cleartool``, the ``cleartool`` login is handled by
the system. The login requires no special steps.
- In order to use ``rcleartool`` with authenticated users, an
"rcleartool login" is necessary before using the fetcher.
.. _perforce-fetcher:
Perforce Fetcher (``p4://``)
----------------------------
This fetcher submodule fetches code from the
`Perforce <https://www.perforce.com/>`__ source control system. The
executable used is specified by ``FETCHCMD_p4``, which defaults to "p4".
The fetcher's temporary working directory is set by
:term:`P4DIR`, which defaults to "DL_DIR/p4".
The fetcher does not make use of a Perforce client; instead, it
relies on ``p4 files`` to retrieve a list of
files and ``p4 print`` to transfer the content
of those files locally.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
:term:`PV` values. The p4 executable is able to use the
config file defined by your system's ``P4CONFIG`` environment variable
in order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe itself. If
you choose not to use ``P4CONFIG``, or to explicitly set variables that
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
the server's URL and port number, and you can specify a username and
password directly in your recipe within :term:`SRC_URI`.
Here is an example that relies on ``P4CONFIG`` to specify the server URL
and port, username, and password, and fetches the Head Revision::
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
Here is an example that specifies the server URL and port, username, and
password, and fetches a Revision based on a Label::
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
.. note::
You should always set S to "${WORKDIR}/p4" in your recipe.
By default, the fetcher strips the depot location from the local file paths. In
the above example, the content of ``example-depot/main/source/`` will be placed
in ``${WORKDIR}/p4``. For situations where preserving parts of the remote depot
paths locally is desirable, the fetcher supports two parameters:
- *"module":*
The top-level depot location or directory to fetch. The value of this
parameter can also point to a single file within the depot, in which case
the local file path will include the module path.
- *"remotepath":*
When used with the value "``keep``", the fetcher will mirror the full depot
paths locally for the specified location, even in combination with the
``module`` parameter.
Here is an example use of the ``module`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, including the directory itself. The top-level directory will
be accessible at ``${P4DIR}/source/``.
Here is an example use of the ``remotepath`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, but the complete depot paths will be mirrored locally. The
top-level directory will be accessible at
``${P4DIR}/example-depot/main/source/``.
.. _repo-fetcher:
Repo Fetcher (``repo://``)
--------------------------
This fetcher submodule fetches code from the ``google-repo`` source control
system. The fetcher works by initiating and syncing sources of the
repository into :term:`REPODIR`, which is usually
``${DL_DIR}/repo``.
This fetcher supports the following parameters:
- *"protocol":* Protocol to fetch the repository manifest (default:
git).
- *"branch":* Branch or tag of repository to get (default: master).
- *"manifest":* Name of the manifest file (default: ``default.xml``).
Here are some example URLs::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
.. _az-fetcher:
Az Fetcher (``az://``)
--------------------------
This submodule fetches data from an
`Azure Storage account <https://docs.microsoft.com/en-us/azure/storage/>`__.
It inherits its functionality from the HTTP wget fetcher, but modifies its
behavior to accommodate the use of a
`Shared Access Signature (SAS) <https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview>`__
for non-public data.
Such functionality is set by the variable:
- :term:`AZ_SAS`: The Azure Storage Shared Access Signature provides secure
delegated access to resources. If this variable is set, the Az Fetcher will
use it when fetching artifacts from the cloud.
You can specify the AZ_SAS variable as shown below::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
Here is an example URL::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
.. _crate-fetcher:
Crate Fetcher (``crate://``)
----------------------------
This submodule fetches code for
`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
corresponding to Rust libraries and programs to compile. Such crates are typically shared
on https://crates.io/ but this fetcher supports other crate registries too.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "crate://REGISTRY/NAME/VERSION"
Here is an example URL::
SRC_URI = "crate://crates.io/glob/0.2.11"
.. _npm-fetcher:
NPM Fetcher (``npm://``)
------------------------
This submodule fetches source code from an
`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
Javascript package registry.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"package":* The NPM package name. This is a mandatory parameter.
- *"version":* The NPM package version. This is a mandatory parameter.
- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
Note that the NPM fetcher only fetches the package source itself. The dependencies
can be fetched through the `npmsw-fetcher`_.
Here is an example URL with both fetchers::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
in the Yocto Project manual for details about using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
to automatically create a recipe from an NPM URL.
.. _npmsw-fetcher:
NPM shrinkwrap Fetcher (``npmsw://``)
-------------------------------------
This submodule fetches source code from an
`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
description file, which lists the dependencies
of an NPM package while locking their versions.
The format for the :term:`SRC_URI` setting must be::
SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
This fetcher supports the following parameters:
- *"dev":* Set this parameter to ``1`` to install "devDependencies".
- *"destsuffix":* Specifies the directory to use to unpack the dependencies
(``${S}`` by default).
Note that the shrinkwrap file can also be provided by the recipe for
the package which has such dependencies, for example::
SRC_URI = " \
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
"
Such a file can automatically be generated using
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
section of the Yocto Project.
Other Fetchers
--------------
Fetch submodules also exist for the following:
- Bazaar (``bzr://``)
- Mercurial (``hg://``)
- OSC (``osc://``)
- Secure FTP (``sftp://``)
- Secure Shell (``ssh://``)
- Trees using Git Annex (``gitannex://``)
No documentation currently exists for these lesser used fetcher
submodules. However, you might find the code helpful and readable.
Auto Revisions
==============
We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.

.. SPDX-License-Identifier: CC-BY-2.5
===================
Hello World Example
===================
BitBake Hello World
===================
The simplest example commonly used to demonstrate any new programming
language or tool is the "`Hello
World <http://en.wikipedia.org/wiki/Hello_world_program>`__" example.
This appendix demonstrates, in tutorial form, Hello World within the
context of BitBake. The tutorial describes how to create a new project
and the applicable metadata files necessary to allow BitBake to build
it.
Obtaining BitBake
=================
See the :ref:`bitbake-user-manual/bitbake-user-manual-intro:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
your machine, the BitBake directory appears as follows::
$ ls -al
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
At this point, you should have BitBake cloned to a directory that
matches the previous listing except for dates and user names.
Setting Up the BitBake Environment
==================================
First, you need to be sure that you can run BitBake. Set your working
directory to where your local BitBake files are and run the following
command::
$ ./bin/bitbake --version
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
The console output tells you what version
you are running.
The recommended method to run BitBake is from a directory of your
choice. To be able to run BitBake from any directory, you need to add
the executable binary to your shell's environment
``PATH`` variable. First, look at your current ``PATH`` variable by
entering the following::
$ echo $PATH
Next, add the directory location
for the BitBake binary to the ``PATH``. Here is an example that adds the
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
``PATH`` variable::
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
You should now be able to enter the ``bitbake`` command from the command
line while working from any directory.
The Hello World Example
=======================
The overall goal of this exercise is to build a complete "Hello World"
example utilizing task and layer concepts. Because this is how modern
projects such as OpenEmbedded and the Yocto Project utilize BitBake, the
example provides an excellent starting point for understanding BitBake.
To help you understand how to use BitBake to build targets, the example
starts with nothing but the ``bitbake`` command, which causes BitBake to
fail and report problems. The example progresses by adding pieces to the
build to eventually conclude with a working, minimal "Hello World"
example.
While every attempt is made to explain what is happening during the
example, the descriptions cannot cover everything. You can find further
information throughout this manual. Also, you can actively participate
in the :oe_lists:`/g/bitbake-devel`
discussion mailing list about the BitBake build tool.
.. note::
This example was inspired by and drew heavily from
`Mailing List post - The BitBake equivalent of "Hello, World!"
<https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
As stated earlier, the goal of this example is to eventually compile
"Hello World". However, it is unknown what BitBake needs and what you
have to provide in order to achieve that goal. Recall that BitBake
utilizes three types of metadata files:
:ref:`bitbake-user-manual/bitbake-user-manual-intro:configuration files`,
:ref:`bitbake-user-manual/bitbake-user-manual-intro:classes`, and
:ref:`bitbake-user-manual/bitbake-user-manual-intro:recipes`.
But where do they go? How does BitBake find
them? BitBake's error messaging helps you answer these types of
questions and helps you better understand exactly what is going on.
Following is the complete "Hello World" example.
#. **Create a Project Directory:** First, set up a directory for the
"Hello World" project. Here is how you can do so in your home
directory::
$ mkdir ~/hello
$ cd ~/hello
This is the directory that
BitBake will use to do all of its work. You can use this directory
to keep all the metadata files needed by BitBake. Having a project
directory is a good way to isolate your project.
#. **Run BitBake:** At this point, you have nothing but a project
directory. Run the ``bitbake`` command and see what it does::
$ bitbake
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
message regarding the :term:`BBPATH` variable and the
``conf/bblayers.conf`` file is relevant.
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
to look for those files. :term:`BBPATH` is not set and you need to set
it. Without :term:`BBPATH`, BitBake cannot find any configuration files
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
find the ``bitbake.conf`` file.
#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
the same manner that you set ``PATH`` earlier in the appendix. You
should realize, though, that it is much more flexible to set the
:term:`BBPATH` variable up in a configuration file for each project.
From your shell, enter the following commands to set and export the
:term:`BBPATH` variable::
$ BBPATH="projectdirectory"
$ export BBPATH
Use your actual project directory in the command. BitBake uses that
directory to find the metadata it needs for your project.
.. note::
When specifying your project directory, do not use the tilde
("~") character as BitBake does not expand that character as the
shell would.
#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
``bitbake`` command again::
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
This sample output shows that BitBake could not find the
``conf/bitbake.conf`` file in the project directory. This file is
the first thing BitBake must find in order to build a target. And,
since the project directory for this example is empty, you need to
provide a ``conf/bitbake.conf`` file.
#. **Creating conf/bitbake.conf:** The ``conf/bitbake.conf`` file includes
a number of configuration variables BitBake uses for metadata and
recipe files. For this example, you need to create the file in your
project directory and define some key BitBake variables. For more
information on the ``bitbake.conf`` file, see
https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
Use the following command to create the ``conf`` directory in the
project directory::
$ mkdir conf
From within the ``conf`` directory,
use some editor to create the ``bitbake.conf`` so that it contains
the following::
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
TMPDIR = "${TOPDIR}/tmp"
CACHE = "${TMPDIR}/cache"
STAMP = "${TMPDIR}/${PN}/stamps"
T = "${TMPDIR}/${PN}/work"
B = "${TMPDIR}/${PN}"
.. note::
Without a value for ``PN``, the variables ``STAMP``, ``T``, and ``B`` prevent
more than one recipe from working. You can fix this either by setting ``PN``
to a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see the previous example), or by manually updating
each recipe to set ``PN``. You will also need to include ``PN`` as part of
the ``STAMP``, ``T``, and ``B`` variable definitions in the ``local.conf`` file.
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
information used by the
:ref:`bitbake-user-manual/bitbake-user-manual-execution:setscene`
process. Here, the ``TMPDIR`` directory is set to ``hello/tmp``.
.. tip::
You can always safely delete the tmp directory in order to rebuild a
BitBake target. The build process creates the directory for you when you
run BitBake.
For information about each of the other variables defined in this
example, follow the links for :term:`PN`, :term:`TOPDIR`, :term:`CACHE`,
:term:`STAMP`, :term:`T` and :term:`B` to their definitions in the
glossary.
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
exists, you can run the ``bitbake`` command again::
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
In the sample output,
BitBake could not find the ``classes/base.bbclass`` file. You need
to create that file next.
#. **Creating classes/base.bbclass:** BitBake uses class files to
provide common code and functionality. The minimally required class
for BitBake is the ``classes/base.bbclass`` file. The ``base`` class
is implicitly inherited by every recipe. BitBake looks for the class
in the ``classes`` directory of the project (i.e. ``hello/classes``
in this example).
Create the ``classes`` directory as follows::
$ cd $HOME/hello
$ mkdir classes
Move to the ``classes`` directory and then create the
``base.bbclass`` file by inserting this single line::

   addtask build
The minimal task that BitBake runs is the ``do_build`` task. This is
all the example needs in order to build the project. Of course, the
``base.bbclass`` can have much more depending on which build
environments BitBake is supporting.
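For illustration only, a hypothetical, slightly larger ``base.bbclass`` might
declare an additional empty task and order the build after it; the single
``addtask build`` line above is all this example requires::

   # Hypothetical sketch only -- not needed for this tutorial
   python do_fetch() {
       bb.note("nothing to fetch in this example")
   }
   addtask fetch
   addtask build after do_fetch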
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
file exists, you can run the ``bitbake`` command again::
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
BitBake is finally reporting
no errors. However, you can see that it really does not have
anything to do. You need to create a recipe that gives BitBake
something to do.
#. **Creating a Layer:** While it is not really necessary for such a
small example, it is good practice to create a layer in which to
keep your code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
.. note::
You can find additional information on layers in the
":ref:`bitbake-user-manual/bitbake-user-manual-intro:Layers`" section.
Minimally, you need a recipe file and a layer configuration file in
your layer. The configuration file needs to be in the ``conf``
directory inside the layer. Use these commands to set up the layer
and the ``conf`` directory::
$ cd $HOME
$ mkdir mylayer
$ cd mylayer
$ mkdir conf
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
following::
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/*.bb"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
to go to the definitions in the glossary.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
``printhello.bb`` that has the following::
DESCRIPTION = "Prints Hello World"
PN = 'printhello'
PV = '1'
python do_build() {
bb.plain("********************");
bb.plain("* *");
bb.plain("* Hello, World! *");
bb.plain("* *");
bb.plain("********************");
}
The recipe file simply provides
a description of the recipe, the name, version, and the ``do_build``
task, which prints out "Hello World" to the console. For more
information on :term:`DESCRIPTION`, :term:`PN` or :term:`PV`
follow the links to the glossary.
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
the command and provide that target::
$ cd $HOME/hello
$ bitbake printhello
ERROR: no recipe files to build, check your BBPATH and BBFILES?
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
We have created the layer with the recipe and
the layer configuration file, but it still seems that BitBake cannot
find the recipe. BitBake needs a ``conf/bblayers.conf`` that lists
the layers for the project. Without this file, BitBake cannot find
the recipe.
#. **Creating conf/bblayers.conf:** BitBake uses the
``conf/bblayers.conf`` file to locate layers needed for the project.
This file must reside in the ``conf`` directory of the project (i.e.
``hello/conf`` for this example).
Set your working directory to the ``hello/conf`` directory and then
create the ``bblayers.conf`` file so that it contains the following::
BBLAYERS ?= " \
/home/<you>/mylayer \
"
You need to substitute your own user name for ``<you>`` in the file.
#. **Run BitBake With a Target:** Now that you have supplied the
``bblayers.conf`` file, run the ``bitbake`` command and provide the
target::
$ bitbake printhello
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *
* *
********************
NOTE: Tasks Summary: Attempted 1 tasks of which 0 didn't need to be rerun and all succeeded.
.. note::
After the first execution, re-running bitbake printhello will not
print the same console output. The reason
for this is that the first time the printhello.bb recipe's do_build task
executes successfully, BitBake writes a stamp file for the task. Thus,
the next time you attempt to run the task using that same bitbake
command, BitBake notices the stamp and therefore determines that the task
does not need to be re-run. If you delete the tmp directory or run
bitbake -c clean printhello and then re-run the build, the "Hello,
World!" message will be printed again.

@@ -1,653 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
========
Overview
========
|
Welcome to the BitBake User Manual. This manual provides information on
the BitBake tool. The information attempts to be as independent as
possible regarding systems that use BitBake, such as OpenEmbedded and
the Yocto Project. In some cases, scenarios or examples within the
context of a build system are used in the manual to help with
understanding. For these cases, the manual clearly states the context.
.. _intro:
Introduction
============
Fundamentally, BitBake is a generic task execution engine that allows
shell and Python tasks to be run efficiently and in parallel while
working within complex inter-task dependency constraints. One of
BitBake's main users, OpenEmbedded, takes this core and builds embedded
Linux software stacks using a task-oriented approach.
Conceptually, BitBake is similar to GNU Make in some regards but has
significant differences:
- BitBake executes tasks according to the provided metadata that builds up
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
"append" (``.bbappend``) files, configuration (``.conf``) and
underlying include (``.inc``) files, and in class (``.bbclass``)
files. The metadata provides BitBake with instructions on what tasks
to run and the dependencies between those tasks.
- BitBake includes a fetcher library for obtaining source code from
various places such as local files, source control systems, or
websites.
- The instructions for each unit to be built (e.g. a piece of software)
are known as "recipe" files and contain all the information about the
unit (dependencies, source file locations, checksums, description and
so on).
- BitBake includes a client/server abstraction and can be used from a
command line or used as a service over XML-RPC and has several
different user interfaces.
History and Goals
=================
BitBake was originally a part of the OpenEmbedded project. It was
inspired by the Portage package management system used by the Gentoo
Linux distribution. On December 7, 2004, OpenEmbedded project team
member Chris Larson split the project into two distinct pieces:
- BitBake, a generic task executor
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary basis of the
`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
used to build and maintain Linux distributions such as the `Poky
Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
traditional desktop Linux distributions lacked important functionality,
and none of the ad hoc Buildroot-based systems, prevalent in the
embedded space, were scalable or maintainable.
Some important original goals for BitBake were:
- Handle cross-compilation.
- Handle inter-package dependencies (build time on target architecture,
build time on native architecture, and runtime).
- Support running any number of tasks within a given package,
including, but not limited to, fetching upstream sources, unpacking
them, patching them, configuring them, and so forth.
- Be Linux distribution agnostic for both build and target systems.
- Be architecture agnostic.
- Support multiple build and target operating systems (e.g. Cygwin, the
BSDs, and so forth).
- Be self-contained, rather than tightly integrated into the build
machine's root filesystem.
- Handle conditional metadata on the target architecture, operating
system, distribution, and machine.
- Make it easy for the person using the tools to supply their own local
metadata and packages against which to operate.
- Make it easy for multiple projects to collaborate on their builds using
BitBake.
- Provide an inheritance mechanism to share common metadata between
many packages.
Over time it became apparent that some further requirements were
necessary:
- Handle variants of a base recipe (e.g. native, sdk, and multilib).
- Split metadata into layers and allow layers to enhance or override
other layers.
- Allow representation of a given set of input variables to a task as a
checksum. Based on that checksum, allow acceleration of builds with
prebuilt components.
BitBake satisfies all the original requirements and many more with
extensions being made to the basic functionality to reflect the
additional requirements. Flexibility and power have always been the
priorities. BitBake is highly extensible and supports embedded Python
code and execution of any arbitrary tasks.
.. _Concepts:
Concepts
========
BitBake is a program written in the Python language. At the highest
level, BitBake interprets metadata, decides what tasks are required to
run, and executes those tasks. Similar to GNU Make, BitBake controls how
software is built. GNU Make achieves its control through "makefiles",
while BitBake uses "recipes".
BitBake extends the capabilities of a simple tool like GNU Make by
allowing for the definition of much more complex tasks, such as
assembling entire embedded Linux distributions.
The remainder of this section introduces several concepts that should be
understood in order to better leverage the power of BitBake.
Recipes
-------
BitBake Recipes, which are denoted by the file extension ``.bb``, are
the most basic metadata files. These recipe files provide BitBake with
the following:
- Descriptive information about the package (author, homepage, license,
and so on)
- The version of the recipe
- Existing dependencies (both build and runtime dependencies)
- Where the source code resides and how to fetch it
- Whether the source code requires any patches, where to find them, and
how to apply them
- How to configure and compile the source code
- How to assemble the generated artifacts into one or more installable
packages
- Where on the target machine to install the package or packages
created
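As a purely illustrative sketch (the names, version, and URL are hypothetical
and not taken from any real project), a recipe carrying this kind of
information might look like::

   SUMMARY = "Example application"
   HOMEPAGE = "http://example.com/exampleapp"
   LICENSE = "MIT"
   PV = "1.0"
   DEPENDS = "zlib"
   SRC_URI = "http://example.com/exampleapp-${PV}.tar.gz"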
Within the context of BitBake, or any project utilizing BitBake as its
build system, files with the ``.bb`` extension are referred to as
recipes.
.. note::
The term "package" is also commonly used to describe recipes.
However, since the same word is used to describe packaged output from
a project, it is best to maintain a single descriptive term -
"recipes". Put another way, a single "recipe" file is quite capable
of generating a number of related but separately installable
"packages". In fact, that ability is fairly common.
Configuration Files
-------------------
Configuration files, which are denoted by the ``.conf`` extension,
define various configuration variables that govern the project's build
process. These files fall into several areas that define machine
configuration, distribution configuration, possible compiler tuning,
general common configuration, and user configuration. The main
configuration file is the sample ``bitbake.conf`` file, which is located
within the BitBake source tree ``conf`` directory.
Classes
-------
Class files, which are denoted by the ``.bbclass`` extension, contain
information that is useful to share between metadata files. The BitBake
source tree currently comes with one class metadata file called
``base.bbclass``. You can find this file in the ``classes`` directory.
The ``base.bbclass`` class file is special since it is always included
automatically for all recipes and classes. This class contains
definitions for standard basic tasks such as fetching, unpacking,
configuring (empty by default), compiling (runs any Makefile present),
installing (empty by default) and packaging (empty by default). These
tasks are often overridden or extended by other classes added during the
project development process.
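As a small illustration (the class name is hypothetical), a recipe or another
class pulls in such shared code with BitBake's ``inherit`` directive, which
looks for the corresponding ``.bbclass`` file in a ``classes`` directory::

   # looks for classes/mycustom.bbclass along BBPATH (hypothetical class name)
   inherit mycustom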
Layers
------
Layers allow you to isolate different types of customizations from each
other. While you might find it tempting to keep everything in one layer
when working on a single project, the more modular your metadata, the
easier it is to cope with future changes.
To illustrate how you can use layers to keep things modular, consider
customizations you might make to support a specific target machine.
These types of customizations typically reside in a special layer,
rather than a general layer, called a Board Support Package (BSP) layer.
Furthermore, the machine customizations should be isolated from recipes
and metadata that support a new GUI environment, for example. This
situation gives you a couple of layers: one for the machine
configurations and one for the GUI environment. It is important to
understand, however, that the BSP layer can still make machine-specific
additions to recipes within the GUI environment layer without polluting
the GUI layer itself with those machine-specific changes. You can
accomplish this through a recipe that is a BitBake append
(``.bbappend``) file.
.. _append-bbappend-files:
Append Files
------------
Append files, which are files that have the ``.bbappend`` file
extension, extend or override information in an existing recipe file.
BitBake expects every append file to have a corresponding recipe file.
Furthermore, the append file and corresponding recipe file must use the
same root filename. The filenames can differ only in the file type
suffix used (e.g. ``formfactor_0.0.bb`` and
``formfactor_0.0.bbappend``).
Information in append files extends or overrides the information in the
underlying, similarly-named recipe files.
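As a purely hypothetical sketch, a ``formfactor_0.0.bbappend`` could extend the
``formfactor_0.0.bb`` recipe by appending to one of its variables, for example::

   # formfactor_0.0.bbappend (hypothetical content)
   DESCRIPTION .= " (with local customizations)"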
When you name an append file, you can use the "``%``" wildcard character
to allow for matching recipe names. For example, suppose you have an
append file named as follows::
busybox_1.21.%.bbappend
That append file
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
the append file would match the following recipe names::
busybox_1.21.1.bb
busybox_1.21.2.bb
busybox_1.21.3.bb
.. note::
The use of the "%" character is limited in that it only works directly in
front of the ``.bbappend`` portion of the append file's name. You cannot use the
wildcard character in any other location of the name.
If the ``busybox`` recipe was updated to ``busybox_1.3.0.bb``, the
append name would not match. However, if you named the append file
``busybox_1.%.bbappend``, then you would have a match.
In the most general case, you could name the append file something as
simple as ``busybox_%.bbappend`` to be entirely version independent.
Obtaining BitBake
=================
You can obtain BitBake several different ways:
- **Cloning BitBake:** Using Git to clone the BitBake source code
repository is the recommended method for obtaining BitBake. Cloning
the repository makes it easy to get bug fixes and have access to
stable branches and the master branch. Once you have cloned BitBake,
you should use the latest stable branch for development since the
master branch is for BitBake development and might contain less
stable changes.
You usually need a version of BitBake that matches the metadata you
are using. The metadata is generally backwards compatible but not
forward compatible.
Here is an example that clones the BitBake repository::
$ git clone git://git.openembedded.org/bitbake
This command clones the BitBake
Git repository into a directory called ``bitbake``. Alternatively,
you can designate a directory after the ``git clone`` command if you
want to call the new directory something other than ``bitbake``. Here
is an example that names the directory ``bbdev``::
$ git clone git://git.openembedded.org/bitbake bbdev
- **Installation using your Distribution Package Management System:**
This method is not recommended because the BitBake version that is
provided by your distribution, in most cases, is several releases
behind a snapshot of the BitBake repository.
- **Taking a snapshot of BitBake:** Downloading a snapshot of BitBake
from the source code repository gives you access to a known branch or
release of BitBake.
.. note::
Cloning the Git repository, as described earlier, is the preferred
method for getting BitBake. Cloning the repository makes it easier
to update as patches are added to the stable branches.
The following example downloads a snapshot of BitBake version 1.17.0::
$ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
After extraction of the tarball using
the tar utility, you have a directory entitled ``bitbake-1.17.0``.
- **Using the BitBake that Comes With Your Build Checkout:** A final
possibility for getting a copy of BitBake is that it already comes
with your checkout of a larger BitBake-based build system, such as
Poky. Rather than manually checking out individual layers and gluing
them together yourself, you can check out an entire build system. The
checkout will already include a version of BitBake that has been
thoroughly tested for compatibility with the other components. For
information on how to check out a particular BitBake-based build
system, consult that build system's supporting documentation.
.. _bitbake-user-manual-command:
The BitBake Command
===================
The ``bitbake`` command is the primary interface to the BitBake tool.
This section presents the BitBake command syntax and provides several
execution examples.
Usage and syntax
----------------
Following is the usage and syntax for BitBake::
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
Execute tasks from a specific .bb recipe directly.
WARNING: Does not handle any dependencies from other
recipes.
-k, --continue Continue as much as possible after an error. While the
target that failed and anything depending on it cannot
be built, as much as possible will be built before
stopping.
-f, --force Force the specified targets/task to run (invalidating
any existing stamp file).
-c CMD, --cmd=CMD Specify the task to execute. The exact options
available depend on the metadata. Some examples might
be 'compile' or 'populate_sysroot' or 'listtasks' may
give a list of the tasks available.
-C INVALIDATE_STAMP, --clear-stamp=INVALIDATE_STAMP
Invalidate the stamp for the specified task such as
'compile' and then run the default task for the
specified target(s).
-r PREFILE, --read=PREFILE
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x'). Also
print bb.note(...) messages to stdout (in addition to
writing them to ${T}/log.do_<task>).
-D, --debug Increase the debug level. You can specify this more
than once. -D sets the debug level to 1, where only
bb.debug(1, ...) messages are printed to stdout; -DD
sets the debug level to 2, where both bb.debug(1, ...)
and bb.debug(2, ...) messages are printed; etc.
Without -D, no debug messages are printed. Note that
-D only affects output to stdout. All debug messages
are written to ${T}/log.do_taskname, regardless of the
debug level.
-q, --quiet Output less log message data to the terminal. You can
specify this more than once.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
no task execution. The SIGNATURE_HANDLER parameter is
passed to the handler. Two common values are none and
printdiff but the handler may define more/less. none
means only dump the signature, printdiff means compare
the dumped signature with the cached one.
-p, --parse-only Quit after parsing the BB recipes.
-s, --show-versions Show current and preferred versions of all recipes.
-e, --environment Show the global or per-recipe environment complete
with information about where variables were
set/changed.
-g, --graphviz Save dependency tree information for the specified
targets in the dot syntax.
-I EXTRA_ASSUME_PROVIDED, --ignore-deps=EXTRA_ASSUME_PROVIDED
Assume these dependencies don't exist and are already
provided (equivalent to ASSUME_PROVIDED). Useful to
make dependency graphs more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (knotty, ncurses, taskexp or
teamcity - default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
floating revisions have changed or not.
--server-only Run bitbake without a UI, only starting a server
(cooker) process.
-B BIND, --bind=BIND The name/address for the bitbake xmlrpc server to bind
to.
-T SERVER_TIMEOUT, --idle-timeout=SERVER_TIMEOUT
Set timeout to unload bitbake server due to
inactivity, set to -1 means no unload, default:
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--skip-setscene Skip setscene tasks if they would be executed. Tasks
previously restored from sstate will be kept, unlike
--no-setscene
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate any running bitbake server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
--runall=RUNALL Run the specified task for any recipe in the taskgraph
of the specified target (even if it wouldn't otherwise
have run).
--runonly=RUNONLY Run only the specified task within the taskgraph of
the specified targets (and any task dependencies those
tasks may have).
.. _bitbake-examples:
Examples
--------
This section presents some examples showing how to use BitBake.
.. _example-executing-a-task-against-a-single-recipe:
Executing a Task Against a Single Recipe
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executing tasks for a single recipe file is relatively simple. You
specify the file in question, and BitBake parses it and executes the
specified task. If you do not specify a task, BitBake executes the
default task, which is "build". BitBake obeys inter-task dependencies
when doing so.
The following command runs the build task, which is the default task, on
the ``foo_1.0.bb`` recipe file::
$ bitbake -b foo_1.0.bb
The following command runs the clean task on the ``foo.bb`` recipe file::
$ bitbake -b foo.bb -c clean
.. note::
The "-b" option explicitly does not handle recipe dependencies. Other
than for debugging purposes, it is instead recommended that you use
the syntax presented in the next section.
Executing Tasks Against a Set of Recipe Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a number of additional complexities introduced when one wants
to manage multiple ``.bb`` files. Clearly there needs to be a way to
tell BitBake what files are available and, of those, which you want to
execute. There also needs to be a way for each recipe to express its
dependencies, both for build-time and runtime. There must be a way for
you to express recipe preferences when multiple recipes provide the same
functionality, or when there are multiple versions of a recipe.
The ``bitbake`` command, when not using "--buildfile" or "-b", only
accepts a "PROVIDES". You cannot provide anything else. By default, a
recipe file generally "PROVIDES" its "packagename" as shown in the
following example::
$ bitbake foo
This next example "PROVIDES" the
package name and also uses the "-c" option to tell BitBake to just
execute the ``do_clean`` task::
$ bitbake -c clean foo
Executing a List of Task and Recipe Combinations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BitBake command line supports specifying different tasks for
individual targets when you specify multiple targets. For example,
suppose you had two targets (or recipes) ``myfirstrecipe`` and
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
recipe and ``taskB`` for the second recipe::
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
Generating Dependency Graphs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BitBake is able to generate dependency graphs using the ``dot`` syntax.
You can convert these graphs into images using the ``dot`` tool from
`Graphviz <http://www.graphviz.org>`__.
When you generate a dependency graph, BitBake writes two files to the
current working directory:
- ``task-depends.dot``: Shows dependencies between tasks. These
dependencies match BitBake's internal task execution list.
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
To omit common dependencies from the graph, use the ``-I`` option and
BitBake leaves them out. Leaving this information out can
produce more readable graphs. This way, you can remove from the graph the
:term:`DEPENDS` entries that come from inherited classes such as ``base.bbclass``.
Here are two examples that create dependency graphs. The second example
omits depends common in OpenEmbedded from the graph::
$ bitbake -g foo
$ bitbake -g -I virtual/kernel -I eglibc foo
Executing a Multiple Configuration Build
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BitBake is able to build multiple images or packages using a single
command where the different targets require different configurations
(multiple configuration builds). Each target, in this scenario, is
referred to as a "multiconfig".
To accomplish a multiple configuration build, you must define each
target's configuration separately using a parallel configuration file in
the build directory. The location for these multiconfig configuration
files is specific. They must reside in the current build directory in a
sub-directory of ``conf`` named ``multiconfig``. Following is an example
for two separate targets:
.. image:: figures/bb_multiconfig_files.png
:align: center
The reason for this required file hierarchy is that the :term:`BBPATH`
variable is not constructed until the layers are parsed. Consequently,
using the configuration file as a pre-configuration file is not possible
unless it is located in the current working directory.
Minimally, each configuration file must define the machine and the
temporary directory BitBake uses for the build. Suggested practice
dictates that you do not overlap the temporary directories used during
the builds.
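A minimal sketch of one such configuration file, using hypothetical values
(the exact variables you need depend on your metadata; the common pattern is a
machine setting plus a per-target temporary directory)::

   # conf/multiconfig/target1.conf (hypothetical)
   MACHINE = "machineA"
   TMPDIR = "${TOPDIR}/tmp-target1"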
Aside from separate configuration files for each target, you must also
enable BitBake to perform multiple configuration builds. Enabling is
accomplished by setting the
:term:`BBMULTICONFIG` variable in the
``local.conf`` configuration file. As an example, suppose you had
configuration files for ``target1`` and ``target2`` defined in the build
directory. The following statement in the ``local.conf`` file both
enables BitBake to perform multiple configuration builds and specifies
the two extra multiconfigs::
BBMULTICONFIG = "target1 target2"
Once the target configuration files are in place and BitBake has been
enabled to perform multiple configuration builds, use the following
command form to start the builds::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
$ bitbake mc::target mc:target1:target mc:target2:target
.. _bb-enabling-multiple-configuration-build-dependencies:
Enabling Multiple Configuration Build Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes dependencies can exist between targets (multiconfigs) in a
multiple configuration build. For example, suppose that in order to
build an image for a particular architecture, the root filesystem of
another build for a different architecture needs to exist. In other
words, the image for the first multiconfig depends on the root
filesystem of the second multiconfig. This dependency is essentially
that the task in the recipe that builds one multiconfig is dependent on
the completion of the task in the recipe that builds another
multiconfig.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
form::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider an example with two
multiconfigs: ``target1`` and ``target2``::
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
In this example, the
``from_multiconfig`` is "target1" and the ``to_multiconfig`` is "target2". The
image whose recipe contains ``image_task`` depends on the
completion of the ``rootfs_task`` used to build out ``image2``, which is
associated with the "target2" multiconfig.
Once you set up this dependency, you can build the "target1" multiconfig
using a BitBake command as follows::
$ bitbake mc:target1:image1
This command executes all the tasks needed to create ``image1`` for the "target1"
multiconfig. Because of the dependency, BitBake also executes through
the ``rootfs_task`` for the "target2" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the image1
recipe::
image_task[mcdepends] = "mc:target1:target2:image2:image_task"
In this case, BitBake must create ``image2`` for the "target2" build since
the "target1" build depends on it.
Because "target1" and "target2" are enabled for multiple configuration
builds and have separate configuration files, BitBake places the
artifacts for each build in the respective temporary build directories.

@@ -89,7 +89,7 @@ quit after parsing the BB files (developers only)
show current and preferred versions of all packages
.TP
.B \-e, \-\-environment
show the global or per-recipe environment (this is what used to be bbread)
show the global or per-package environment (this is what used to be bbread)
.TP
.B \-g, \-\-graphviz
emit the dependency trees of the specified packages in the dot syntax
@@ -104,30 +104,6 @@ Show debug logging for the specified logging domains
.B \-P, \-\-profile
profile the command and print a report
.TP
.B \-uUI, \-\-ui=UI
User interface to use. Currently, knotty, taskexp or ncurses can be specified as UI.
.TP
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
Choose which server to use, none, process or xmlrpc.
.TP
.B \-\-revisions-changed
Set the exit code depending on whether upstream floating revisions have changed or not.
.TP
.B \-\-server-only
Run bitbake without UI, the frontend can connect with bitbake server itself.
.TP
.B \-BBIND, \-\-bind=BIND
The name/address for the bitbake server to bind to.
.TP
.B \-\-no\-setscene
Do not run any setscene tasks, forces builds.
.SH ENVIRONMENT VARIABLES
bitbake uses the following environment variables to control its
operation:
.TP
.B BITBAKE_UI
The bitbake user interface; overridden by the \fB-u\fP commandline option.
.SH AUTHORS
BitBake was written by

@@ -1,101 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sys
import datetime
current_version = "dev"
# String used in sidebar
version = 'Version: ' + current_version
if current_version == 'dev':
version = 'Version: Current Development'
# Version seen in documentation_options.js and hence in js switchers code
release = current_version
# -- Project information -----------------------------------------------------
project = 'Bitbake'
copyright = '2004-%s, Richard Purdie, Chris Larson, and Phil Blundell' \
% datetime.datetime.now().year
author = 'Richard Purdie, Chris Larson, and Phil Blundell'
# external links and substitutions
extlinks = {
'yocto_docs': ('https://docs.yoctoproject.org%s', None),
'oe_lists': ('https://lists.openembedded.org%s', None),
}
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autosectionlabel',
'sphinx.ext.extlinks',
]
autosectionlabel_prefix_document = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# master document name. The default changed from contents to index. so better
# set it ourselves.
master_doc = 'index'
# create substitution for project configuration variables
rst_prolog = """
.. |project_name| replace:: %s
.. |copyright| replace:: %s
.. |author| replace:: %s
""" % (project, copyright, author)
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
try:
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
except ImportError:
sys.stderr.write("The Sphinx sphinx_rtd_theme HTML theme was not found.\
\nPlease make sure to install the sphinx_rtd_theme python package.\n")
sys.exit(1)
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['sphinx-static']
# Add custom CSS and JS files
html_css_files = ['theme_overrides.css']
html_js_files = ['switchers.js']
# Hide 'Created using Sphinx' text
html_show_sphinx = False
# Add 'Last updated' on each page
html_last_updated_fmt = '%b %d, %Y'
# Remove the trailing 'dot' in section numbers
html_secnumber_suffix = " "

@@ -1,3 +0,0 @@
=====
Index
=====

@@ -1,38 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
===================
BitBake User Manual
===================
|
.. toctree::
:caption: Table of Contents
:numbered:
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello
.. toctree::
:maxdepth: 1
:hidden:
genindex
releases
----
.. include:: <xhtml1-lat1.txt>
| BitBake Community
| Copyright |copy| |copyright|
| <bitbake-devel@lists.openembedded.org>
This work is licensed under the Creative Commons Attribution License. To view a
copy of this license, visit http://creativecommons.org/licenses/by/2.5/ or send
a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View,
California 94041, USA.

@@ -0,0 +1,56 @@
topdir = .
manual = $(topdir)/usermanual.xml
# types = pdf txt rtf ps xhtml html man tex texi dvi
# types = pdf txt
types = $(xmltotypes) $(htmltypes)
xmltotypes = pdf txt
htmltypes = html xhtml
htmlxsl = $(if $(filter $@,$(foreach type,$(htmltypes),$(type)-nochunks)),http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl,http://docbook.sourceforge.net/release/xsl/current/$@/chunk.xsl)
htmlcssfile = docbook.css
htmlcss = $(topdir)/html.css
# htmlcssfile =
# htmlcss =
cleanfiles = $(foreach i,$(types),$(topdir)/$(i))
ifdef DEBUG
define command
$(1)
endef
else
define command
@echo $(2) $(3) $(4)
@$(1) >/dev/null
endef
endif
all: $(types)
lint: $(manual) FORCE
$(call command,xmllint --xinclude --postvalid --noout $(manual),XMLLINT $(manual))
$(types) $(foreach type,$(htmltypes),$(type)-nochunks): lint FORCE
$(foreach type,$(htmltypes),$(type)-nochunks): $(if $(htmlcss),$(htmlcss)) $(manual)
@mkdir -p $@
ifdef htmlcss
$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual) > $@/index.$(patsubst %-nochunks,%,$@),XSLTPROC $@ $(manual))
$(htmltypes): $(if $(htmlcss),$(htmlcss)) $(manual)
@mkdir -p $@
ifdef htmlcss
$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))
$(xmltotypes): $(manual)
$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
clean:
rm -rf $(cleanfiles)
$(foreach i,$(types) $(foreach type,$(htmltypes),$(type)-nochunks),clean-$(i)):
rm -rf $(patsubst clean-%,%,$@)
FORCE:

bitbake/doc/manual/html.css Normal file
@@ -0,0 +1,281 @@
/* Feuille de style DocBook du projet Traduc.org */
/* DocBook CSS stylesheet of the Traduc.org project */
/* (c) Jean-Philippe Guérard - 14 août 2004 */
/* (c) Jean-Philippe Gu<47>rard - 14 August 2004 */
/* Cette feuille de style est libre, vous pouvez la */
/* redistribuer et la modifier selon les termes de la Licence */
/* Art Libre. Vous trouverez un exemplaire de cette Licence sur */
/* http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */
/* This work of art is free, you can redistribute it and/or */
/* modify it according to terms of the Free Art license. You */
/* will find a specimen of this license on the Copyleft */
/* Attitude web site: http://artlibre.org as well as on other */
/* sites. */
/* Please note that the French version of this licence as shown */
/* on http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */
/* is only official licence of this document. The English */
/* is only provided to help you understand this licence. */
/* La dernière version de cette feuille de style est toujours */
/* disponible sur : http://tigreraye.org/style.css */
/* Elle est également disponible sur : */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */
/* The latest version of this stylesheet is available from: */
/* http://tigreraye.org/style.css */
/* It is also available on: */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */
/* N'hésitez pas à envoyer vos commentaires et corrections à */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */
/* Please send feedback and bug reports to */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */
/* $Id: style.css,v 1.14 2004/09/10 20:12:09 fevrier Exp fevrier $ */
/* Présentation générale du document */
/* Overall document presentation */
body {
/*
font-family: Apolline, "URW Palladio L", Garamond, jGaramond,
"Bitstream Cyberbit", "Palatino Linotype", serif;
*/
margin: 7%;
background-color: white;
}
/* Taille du texte */
/* Text size */
* { font-size: 100%; }
/* Gestion des textes mis en relief imbriqués */
/* Embedded emphasis */
em { font-style: italic; }
em em { font-style: normal; }
em em em { font-style: italic; }
/* Titres */
/* Titles */
h1 { font-size: 200%; font-weight: 900; }
h2 { font-size: 160%; font-weight: 900; }
h3 { font-size: 130%; font-weight: bold; }
h4 { font-size: 115%; font-weight: bold; }
h5 { font-size: 108%; font-weight: bold; }
h6 { font-weight: bold; }
/* Nom de famille en petites majuscules (uniquement en français) */
/* Last names in small caps (for French only) */
*[class~="surname"]:lang(fr) { font-variant: small-caps; }
/* Blocs de citation */
/* Quotation blocs */
div[class~="blockquote"] {
border: solid 2px #AAA;
padding: 5px;
margin: 5px;
}
div[class~="blockquote"] > table {
border: none;
}
/* Blocs littéraux : fond gris clair */
/* Literal blocs: light gray background */
*[class~="literallayout"] {
background: #f0f0f0;
padding: 5px;
margin: 5px;
}
/* Programmes et captures texte : fond bleu clair */
/* Listing and text screen snapshots: light blue background */
*[class~="programlisting"], *[class~="screen"] {
background: #f0f0ff;
padding: 5px;
margin: 5px;
}
/* Les textes à remplacer sont surlignés en vert pâle */
/* Replaceable text in highlighted in pale green */
*[class~="replaceable"] {
background-color: #98fb98;
font-style: normal; }
/* Tables : fonds gris clair & bords simples */
/* Tables: light gray background and solid borders */
*[class~="table"] *[class~="title"] { width:100%; border: 0px; }
table {
border: 1px solid #aaa;
border-collapse: collapse;
padding: 2px;
margin: 5px;
}
/* Listes simples en style table */
/* Simples lists in table presentation */
table[class~="simplelist"] {
background-color: #F0F0F0;
margin: 5px;
border: solid 1px #AAA;
}
table[class~="simplelist"] td {
border: solid 1px #AAA;
}
/* Les tables */
/* Tables */
*[class~="table"] table {
background-color: #F0F0F0;
border: solid 1px #AAA;
}
*[class~="informaltable"] table { background-color: #F0F0F0; }
th,td {
vertical-align: baseline;
text-align: left;
padding: 0.1em 0.3em;
empty-cells: show;
}
/* Alignement des colonnes */
/* Columns alignment */
td[align=center] , th[align=center] { text-align: center; }
td[align=right] , th[align=right] { text-align: right; }
td[align=left] , th[align=left] { text-align: left; }
td[align=justify] , th[align=justify] { text-align: justify; }
/* Pas de marge autour des images */
/* No inside margins for images */
img { border: 0; }
/* Les liens ne sont pas soulignés */
/* No underlines for links */
:link , :visited , :active { text-decoration: none; }
/* Prudence : cadre jaune et fond jaune clair */
/* Caution: yellow border and light yellow background */
*[class~="caution"] {
border: solid 2px yellow;
background-color: #ffffe0;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="caution"] th {
vertical-align: middle
}
*[class~="caution"] table {
background-color: #ffffe0;
border: none;
}
/* Note importante : cadre jaune et fond jaune clair */
/* Important: yellow border and light yellow background */
*[class~="important"] {
border: solid 2px yellow;
background-color: #ffffe0;
padding: 1em 6px 1em;
margin: 5px;
}
*[class~="important"] th {
vertical-align: middle
}
*[class~="important"] table {
background-color: #ffffe0;
border: none;
}
/* Mise en évidence : texte légèrement plus grand */
/* Highlights: slightly larger texts */
*[class~="highlights"] {
font-size: 110%;
}
/* Note : cadre bleu et fond bleu clair */
/* Notes: blue border and light blue background */
*[class~="note"] {
border: solid 2px #7099C5;
background-color: #f0f0ff;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="note"] th {
vertical-align: middle
}
*[class~="note"] table {
background-color: #f0f0ff;
border: none;
}
/* Astuce : cadre vert et fond vert clair */
/* Tip: green border and light green background */
*[class~="tip"] {
border: solid 2px #00ff00;
background-color: #f0ffff;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="tip"] th {
vertical-align: middle;
}
*[class~="tip"] table {
background-color: #f0ffff;
border: none;
}
/* Avertissement : cadre rouge et fond rouge clair */
/* Warning: red border and light red background */
*[class~="warning"] {
border: solid 2px #ff0000;
background-color: #fff0f0;
padding: 1em 6px 1em ;
margin: 5px;
}
*[class~="warning"] th {
vertical-align: middle;
}
*[class~="warning"] table {
background-color: #fff0f0;
border: none;
}
/* Fin */
/* The End */

@@ -0,0 +1,611 @@
<?xml version="1.0"?>
<!--
ex:ts=4:sw=4:sts=4:et
-*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book>
<bookinfo>
<title>BitBake User Manual</title>
<authorgroup>
<corpauthor>BitBake Team</corpauthor>
</authorgroup>
<copyright>
<year>2004, 2005, 2006, 2011</year>
<holder>Chris Larson</holder>
<holder>Phil Blundell</holder>
<holder>Richard Purdie</holder>
</copyright>
<legalnotice>
<para>This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit <ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink> or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.</para>
</legalnotice>
</bookinfo>
<chapter>
<title>Introduction</title>
<section>
<title>Overview</title>
<para>BitBake is, at its simplest, a tool for executing
tasks and managing metadata. As such, its similarities to GNU make and other
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions/projects such as Angstrom and the Yocto project.</para>
</section>
<section>
<title>Background and goals</title>
<para>Prior to BitBake, no other build tool adequately met
the needs of an aspiring embedded Linux distribution. All of the
buildsystems used by traditional desktop Linux distributions lacked
important functionality, and none of the ad-hoc
<emphasis>buildroot</emphasis> systems, prevalent in the
embedded space, were scalable or maintainable.</para>
<para>Some important original goals for BitBake were:
<itemizedlist>
<listitem><para>Handle crosscompilation.</para></listitem>
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be architecture agnostic</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
<listitem><para>Must make it easy to collaborate
between multiple projects using BitBake for their
builds.</para></listitem>
<listitem><para>Should provide an inheritance mechanism to
share common metadata between many packages.</para></listitem>
</itemizedlist>
</para>
<para>Over time it has become apparent that some further requirements were necessary:
<itemizedlist>
<listitem><para>Handle variants of a base recipe (native, sdk, multilib).</para></listitem>
<listitem><para>Able to split metadata into layers and allow layers to override each other.</para></listitem>
<listitem><para>Allow representation of a given set of input variables to a task as a checksum.</para></listitem>
<listitem><para>based on that checksum, allow acceleration of builds with prebuilt components.</para></listitem>
</itemizedlist>
</para>
<para>BitBake satisfies all the original requirements and many more, with extensions having been made to the basic functionality to reflect the additional requirements. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and the execution of arbitrary tasks.</para>
</section>
</chapter>
<chapter>
<title>Metadata</title>
<section>
<title>Description</title>
<itemizedlist>
<para>BitBake metadata can be classified into 3 major areas:</para>
<listitem>
<para>Configuration Files</para>
</listitem>
<listitem>
<para>.bb Files</para>
</listitem>
<listitem>
<para>Classes</para>
</listitem>
</itemizedlist>
<para>What follows is a large number of examples of BitBake metadata. Any syntax that is not supported in all of the aforementioned areas is documented as such.</para>
<section>
<title>Basic variable setting</title>
<para><screen><varname>VARIABLE</varname> = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> is <literal>value</literal>.</para>
</section>
<section>
<title>Variable expansion</title>
<para>BitBake supports variables referencing one another's contents using a syntax which is similar to shell scripting.</para>
<para><screen><varname>A</varname> = "aval"
<varname>B</varname> = "pre${A}post"</screen></para>
<para>This results in <varname>A</varname> containing <literal>aval</literal> and <varname>B</varname> containing <literal>preavalpost</literal>.</para>
</section>
<section>
<title>Setting a default value (?=)</title>
<para><screen><varname>A</varname> ?= "aval"</screen></para>
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
</section>
<section>
<title>Setting a weak default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"
<varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy/weak assignment in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used. Any other setting of A using = or ?= will, however, override the value set with ??=.</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
<para>:= results in a variable's contents being expanded immediately, rather than when the variable is actually used.</para>
<para><screen><varname>T</varname> = "123"
<varname>A</varname> := "${B} ${A} test ${T}"
<varname>T</varname> = "456"
<varname>B</varname> = "${T} bval"
<varname>C</varname> = "cval"
<varname>C</varname> := "${C}append"</screen></para>
<para>In that example, <varname>A</varname> would contain <literal> test 123</literal>, <varname>B</varname> would contain <literal>456 bval</literal>, and <varname>C</varname> would be <literal>cvalappend</literal>.</para>
</section>
<section>
<title>Appending (+=) and prepending (=+)</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> += "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =+ "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bval additionaldata</literal> and <varname>C</varname> is <literal>test cval</literal>.</para>
</section>
<section>
<title>Appending (.=) and prepending (=.) without spaces</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
will be introduced.</para>
</section>
<section>
<title>Conditional metadata set</title>
<para>OVERRIDES is a <quote>:</quote> separated variable containing each item for which you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote>-specific version of the variable is used rather than the non-conditional version. Example:</para>
<para><screen><varname>OVERRIDES</varname> = "architecture:os:machine"
<varname>TEST</varname> = "defaultvalue"
<varname>TEST_os</varname> = "osspecificvalue"
<varname>TEST_condnotinoverrides</varname> = "othercondvalue"</screen></para>
<para>In this example, <varname>TEST</varname> would be <literal>osspecificvalue</literal>, due to the condition <quote>os</quote> being in <varname>OVERRIDES</varname>.</para>
</section>
<section>
<title>Conditional appending</title>
<para>BitBake also supports appending and prepending to variables based on whether something is in OVERRIDES. Example:</para>
<para><screen><varname>DEPENDS</varname> = "glibc ncurses"
<varname>OVERRIDES</varname> = "machine:local"
<varname>DEPENDS_append_machine</varname> = " libmad"</screen></para>
<para>In this example, <varname>DEPENDS</varname> is set to <literal>glibc ncurses libmad</literal>.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
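<para>As a brief sketch (the filename here is hypothetical), a recipe might pull in a shared fragment from anywhere in <envar>BBPATH</envar> like this:</para>
<para><screen>include common-settings.inc</screen></para>
<para>If the file cannot be found, parsing simply continues; contrast this with <literal>require</literal> below.</para>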
</section>
<section>
<title>Requiring inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
raise a ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
include</literal> directive.</para>
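<para>For example (the filename below is again hypothetical), a recipe that cannot build without a shared fragment would use:</para>
<para><screen>require common-settings.inc</screen></para>
<para>If the file cannot be found anywhere in <envar>BBPATH</envar>, parsing stops with an error rather than continuing silently.</para>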
</section>
<section>
<title>Python variable expansion</title>
<para><screen><varname>DATE</varname> = "${@time.strftime('%Y%m%d',time.gmtime())}"</screen></para>
<para>This would result in the <varname>DATE</varname> variable containing today's date.</para>
</section>
<section>
<title>Defining executable metadata</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>do_mytask () {
echo "Hello, world!"
}</screen></para>
<para>This is essentially identical to setting a variable, except that this variable happens to be executable shell code.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is similar to the previous example, except that the function is flagged as Python so that BitBake knows it is Python code.</para>
</section>
<section>
<title>Defining Python functions into the global Python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
if d.getVar('SOMECONDITION', True):
return "dependencywithcond"
else:
return "dependency"
<varname>SOMECONDITION</varname> = "1"
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname>, which is set to <literal>value</literal>.</para>
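<para>As a short sketch (the flag name is only illustrative), the standard operators described above apply to flags in the same way they apply to variables:</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"
<varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] += "morevalue"</screen></para>
<para>Here the flag <varname>SOMEFLAG</varname> ends up containing <literal>value morevalue</literal>.</para>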
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.bbclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
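<para>For example, assuming a classes/autotools.bbclass exists somewhere in <envar>BBPATH</envar>, a recipe for an autoconf/automake based package can pick up the shared configure, compile and install tasks with a single line:</para>
<para><screen>inherit autotools</screen></para>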
</section>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. The <literal>addtask</literal> command is used to add new tasks (the task must be defined as executable metadata and its function name must start with <quote>do_</quote>) and to describe intertask dependencies.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}
addtask printdate before do_build</screen></para>
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
</section>
<section>
<title>Task Flags</title>
<para>Tasks support a number of flags which control various aspects of how the task is run. These are as follows:</para>
<para>'dirs' - directories which should be created before the task runs.</para>
<para>'cleandirs' - directories which should be created before the task runs but which should be empty.</para>
<para>'noexec' - marks the task as being empty, with no execution required. Such tasks are used as dependency placeholders, or when added tasks need to be subsequently disabled.</para>
<para>'nostamp' - don't generate a stamp file for the task, which means the task is always re-executed.</para>
<para>'fakeroot' - the task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
<para>'umask' - the umask to run the task under.</para>
<para>For the 'deptask', 'rdeptask', 'recdeptask' and 'recrdeptask' flags, please see the dependencies section.</para>
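<para>As a sketch of how these flags are set in practice (the task names and directory variables below are only illustrative, not part of BitBake itself):</para>
<para><screen>do_compile[dirs] = "${B}"
do_install[cleandirs] = "${D}"
do_install[fakeroot] = "1"
do_listtasks[nostamp] = "1"</screen></para>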
</section>
<section>
<title>Events</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
from bb.event import getName
from bb import data
print("The name of the Event is %s" % getName(e))
print("The file we run for is %s" % data.getVar('FILE', e.data, True))
}
</screen></para><para>
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
method one can get the name of the triggered event.</para><para>The above event handler prints the name
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
<section>
<title>Variants</title>
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
</section>
<section>
<title>Variable interaction: Worked Examples</title>
<para>Despite the documentation of the different forms of variable definition above, it can be hard to work out what happens when variable operators are combined. This section documents some common questions people have regarding the way variables interact.</para>
<section>
<title>Override and append ordering</title>
<para>There is often confusion about which order overrides and the various append operators take effect.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"</screen></para>
<para>In this case, X is unconditionally appended to the variable <varname>A_foo</varname>. Since foo is an override, A_foo would then replace <varname>A</varname>.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A</varname> = "X"
<varname>A_append_foo</varname> = "Y"</screen></para>
<para>In this case, Y is appended to the variable <varname>A</varname> only when foo is in OVERRIDES, so the value of <varname>A</varname> would become XY (NB: no space is appended).</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"
<varname>A_foo_append</varname> += "Y"</screen></para>
<para>This behaves as per the first case above, but the value of <varname>A</varname> would be "X Y" instead of just "X".</para>
<para><screen><varname>A</varname> = "1"
<varname>A_append</varname> = "2"
<varname>A_append</varname> = "3"
<varname>A</varname> += "4"
<varname>A</varname> .= "5"</screen></para>
<para>Would ultimately result in <varname>A</varname> taking the value "1 4523" since the _append operator executes at the same time as the expansion of other overrides.</para>
</section>
<section>
<title>Key Expansion</title>
<para>Key expansion happens at the data store finalisation time just before overrides are expanded.</para>
<para><screen><varname>A${B}</varname> = "X"
<varname>B</varname> = "2"
<varname>A2</varname> = "Y"</screen></para>
<para>So in this case <varname>A2</varname> would take the value of "X".</para>
</section>
</section>
<section>
<title>Dependency handling</title>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag, which is used to signify the task(s) of each item in DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must be met, and so on.</para>
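<para>A sketch of the syntax (the task names are only illustrative):</para>
<para><screen>do_rootfs[recdeptask] = "do_populate_staging"</screen></para>
<para>would require the do_populate_staging task of everything in DEPENDS, and of their DEPENDS in turn, to have completed before do_rootfs can run.</para>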
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag, which is used to signify the task(s) of each item in RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must be met, and so on. It also runs all DEPENDS first.</para>
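<para>The syntax mirrors that of 'recdeptask' (again, the task names are only illustrative):</para>
<para><screen>do_rootfs[recrdeptask] = "do_package_write"</screen></para>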
</section>
<section>
<title>Inter task</title>
<para>The 'depends' flag for tasks is a more generic form of dependency which allows an interdependency on specific tasks, rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch task can execute.</para>
</section>
</section>
<section>
<title>Parsing</title>
<section>
<title>Configuration files</title>
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed, with the LAYERDIR variable being set to the directory where the layer was found. The idea is that these files will automatically set up BBPATH and other variables correctly for a given build directory on the user's behalf.</para>
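<para>A minimal sketch of what these two files might contain (the paths and layer name are hypothetical, and real layers typically set additional variables):</para>
<para><screen># conf/bblayers.conf in the build directory
BBLAYERS = "/path/to/meta-mylayer"

# conf/layer.conf inside the layer
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes/*/*.bb"</screen></para>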
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
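<para>As a minimal illustration (the class and task names are hypothetical), a file <filename>classes/mystrip.bbclass</filename> placed under any directory in <envar>BBPATH</envar> could define a task that every inheriting recipe picks up:</para>
<para><screen>do_mystrip () {
	echo "stripping ${PN}"
}
addtask mystrip before do_build</screen></para>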
</section>
<section>
<title>.bb files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and supports wildcards.</para>
</section>
</section>
</chapter>
<chapter>
<title>File download support</title>
<section>
<title>Overview</title>
<para>BitBake provides support for downloading files. This procedure is called fetching and it is handled by the fetch and fetch2 modules. At this point the original fetch code has been superseded by fetch2, and this manual relates only to the fetch2 codebase.</para>
<para>The SRC_URI is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
</para>
<para>The overall fetch process is that fetches are first attempted from PREMIRRORS. If those don't work, the original SRC_URI is attempted, and if that fails, BitBake will fall back to MIRRORS. Cross URLs are supported, so it is possible to mirror a git repository on an HTTP server as a tarball, for example. Some commonly used example mirror definitions are:</para>
<para><screen><varname>PREMIRRORS</varname> ?= "\
bzr://.*/.* http://somemirror.org/sources/ \n \
cvs://.*/.* http://somemirror.org/sources/ \n \
git://.*/.* http://somemirror.org/sources/ \n \
hg://.*/.* http://somemirror.org/sources/ \n \
osc://.*/.* http://somemirror.org/sources/ \n \
p4://.*/.* http://somemirror.org/sources/ \n \
svk://.*/.* http://somemirror.org/sources/ \n \
svn://.*/.* http://somemirror.org/sources/ \n"
<varname>MIRRORS</varname> =+ "\
ftp://.*/.* http://somemirror.org/sources/ \n \
http://.*/.* http://somemirror.org/sources/ \n \
https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
<para>Non-local downloaded output is placed into the directory specified by <varname>DL_DIR</varname>. For non-local archive downloads, the code can verify sha256 and md5 checksums for the download to ensure the file has been downloaded correctly. These may be specified either in the form <varname>SRC_URI[md5sum]</varname> for the md5 checksum and <varname>SRC_URI[sha256sum]</varname> for the sha256 checksum, or as parameters on the SRC_URI such as SRC_URI="http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d". If <varname>BB_STRICT_CHECKSUM</varname> is set, any download without a checksum will trigger an error message. In cases where multiple files are listed in SRC_URI, the name parameter is used to assign names to the URLs, and these are then used in the checksums in the form SRC_URI[name.sha256sum].</para>
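<para>Putting the name parameter and checksums together might look like the following sketch (the URLs are placeholders and the checksum values are deliberately left as descriptions rather than real data):</para>
<para><screen>SRC_URI = "http://example.com/foo-1.0.tar.gz;name=foo \
           http://example.com/bar-2.0.tar.gz;name=bar"
SRC_URI[foo.md5sum] = "...md5 of foo-1.0.tar.gz..."
SRC_URI[foo.sha256sum] = "...sha256 of foo-1.0.tar.gz..."
SRC_URI[bar.sha256sum] = "...sha256 of bar-2.0.tar.gz..."</screen></para>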
</section>
<section>
<title>Local file fetcher</title>
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname> and, failing that, <varname>FILESDIR</varname> will be used to find the appropriate relative file. The metadata usually extends these variables to include variations of the values in <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
</screen>
</para>
</section>
<section>
<title>CVS fetcher</title>
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>CVSDIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
</para>
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> parameter specifies which module to check out, and the <varname>tag</varname> parameter describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration and check out a specific date. The special value of "now" will cause the checkout to be updated on every build. The <varname>method</varname> parameter is by default <emphasis>pserver</emphasis>. If <emphasis>ext</emphasis> is used, the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to check out into a special directory relative to <varname>CVSDIR</varname>.
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
</para>
</section>
<section>
<title>HTTP/FTP fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variable <varname>FETCHCOMMAND_wget</varname>. <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac"
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan"
</screen></para>
</section>
<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>SVNDIR</varname>, <varname>SRCREV</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>SRCREV</varname> specifies which revision to use when doing the fetching.
</para>
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
</screen></para>
</section>
<section>
<title>GIT fetcher</title>
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
</para>
<para>The variable <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
</para>
<para>The parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default being <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use; it defaults to <quote>git</quote> if a hostname is set, otherwise it is <quote>file</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
</screen></para>
</section>
</chapter>
<chapter>
<title>The BitBake command</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.
options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
execute the task against this .bb file, rather than a
package from BBFILES.
-k, --continue continue as much as possible after an error. While the
target that failed, and those that depend on it,
cannot be remade, the other dependencies of these
targets can be processed all the same.
-f, --force force run of specified cmd, regardless of stamp status
-i, --interactive drop into the interactive mode also called the BitBake
shell.
-c CMD, --cmd=CMD Specify task to execute. Note that this only executes
the specified task for the providee and the packages
it depends on, i.e. 'compile' does not implicitly call
stage for the dependencies (IOW: use only if you know
what you are doing). Depending on the base.bbclass a
listtasks task is defined and will show available
tasks
-r FILE, --read=FILE read the specified file before bitbake.conf
-v, --verbose output more chit-chat to the terminal
-D, --debug Increase the debug level. You can specify this more
than once.
-n, --dry-run don't execute, just go through the motions
-p, --parse-only quit after parsing the BB files (developers only)
-s, --show-versions show current and preferred versions of all packages
-e, --environment show the global or per-package environment (this is
what used to be bbread)
-g, --graphviz emit the dependency trees of the specified packages in
the dot syntax
-I IGNORED_DOT_DEPS, --ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when
generating dependency graphs. This can help to make
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile profile the command and print a report
</screen>
</para>
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb</screen></para>
</example>
</para>
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how to specify such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0-r0</screen>
<screen><prompt>$ </prompt>bitbake -c clean blah</screen>
<screen><prompt>$ </prompt>bitbake virtual/whatever</screen>
<screen><prompt>$ </prompt>bitbake -c clean virtual/whatever</screen>
</example>
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>.
Two files will be written into the current working directory: <emphasis>depends.dot</emphasis>, containing dependency information at the package level, and <emphasis>task-depends.dot</emphasis>, containing a breakdown of the dependencies at the task level. To stop the graph from including common dependencies, one can use the <prompt>-I depend</prompt> option to omit these from the graph. This can lead to more readable graphs; this way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect BitBake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads BitBake should run at once (default: 1).</para>
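<para>This would typically be set in a configuration file; for example, to allow up to four tasks to run in parallel:</para>
<para><screen><varname>BB_NUMBER_THREADS</varname> = "4"</screen></para>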
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, each of which corresponds to the <varname>PN</varname> variable of another .bb. The <varname>PN</varname> variable is, in general, set to a component of the .bb filename by default.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
<screen>PN = "package-a"
DEPENDS += "package-b"</screen>
</para>
<para>b.bb:
<screen>PN = "package-b"</screen>
</para>
</example>
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>package2.bb:
<screen>DEPENDS += "virtual/package"</screen>
</para>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
<example>
<title>Using <quote>bbfile collections</quote></title>
<para>bbfile collections exist to allow the user to have multiple repositories of bbfiles that contain the same exact package. For example, one could easily use them to make one's own local copy of an upstream repository, but with custom modifications that one does not want upstream. Usage:</para>
<screen>BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"</screen>
</example>
</section>
</chapter>
</book>


@@ -1,172 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
===========================
Supported Release Manuals
===========================
******************************
Release Series 3.4 (honister)
******************************
- :yocto_docs:`3.4 BitBake User Manual </3.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.4.1 BitBake User Manual </3.4.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.4.2 BitBake User Manual </3.4.2/bitbake-user-manual/bitbake-user-manual.html>`
******************************
Release Series 3.3 (hardknott)
******************************
- :yocto_docs:`3.3 BitBake User Manual </3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.1 BitBake User Manual </3.3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.2 BitBake User Manual </3.3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.3 BitBake User Manual </3.3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.4 BitBake User Manual </3.3.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.3.5 BitBake User Manual </3.3.5/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 3.1 (dunfell)
****************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.4 BitBake User Manual </3.1.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.5 BitBake User Manual </3.1.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.6 BitBake User Manual </3.1.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.7 BitBake User Manual </3.1.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.8 BitBake User Manual </3.1.8/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.9 BitBake User Manual </3.1.9/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.10 BitBake User Manual </3.1.10/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.11 BitBake User Manual </3.1.11/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.12 BitBake User Manual </3.1.12/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.13 BitBake User Manual </3.1.13/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.14 BitBake User Manual </3.1.14/bitbake-user-manual/bitbake-user-manual.html>`
==========================
Outdated Release Manuals
==========================
*******************************
Release Series 3.2 (gatesgarth)
*******************************
- :yocto_docs:`3.2 BitBake User Manual </3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.1 BitBake User Manual </3.2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.2 BitBake User Manual </3.2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.3 BitBake User Manual </3.2.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.2.4 BitBake User Manual </3.2.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 3.0 (zeus)
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.7 (warrior)
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.1 BitBake User Manual </2.7.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.2 BitBake User Manual </2.7.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.3 BitBake User Manual </2.7.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.6 (thud)
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.1 BitBake User Manual </2.6.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.2 BitBake User Manual </2.6.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.3 BitBake User Manual </2.6.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.5 (sumo)
*************************
- :yocto_docs:`2.5 Documentation </2.5>`
- :yocto_docs:`2.5.1 Documentation </2.5.1>`
- :yocto_docs:`2.5.2 Documentation </2.5.2>`
- :yocto_docs:`2.5.3 Documentation </2.5.3>`
**************************
Release Series 2.4 (rocko)
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.1 BitBake User Manual </2.4.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.2 BitBake User Manual </2.4.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.3 BitBake User Manual </2.4.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 2.3 (pyro)
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.1 BitBake User Manual </2.3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.2 BitBake User Manual </2.3.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.3 BitBake User Manual </2.3.3/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 2.2 (morty)
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.1 BitBake User Manual </2.2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.2 BitBake User Manual </2.2.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
Release Series 2.1 (krogoth)
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.1 BitBake User Manual </2.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.2 BitBake User Manual </2.1.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
Release Series 2.0 (jethro)
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0 BitBake User Manual </2.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.1 BitBake User Manual </2.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.2 BitBake User Manual </2.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
Release Series 1.8 (fido)
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.8.1 BitBake User Manual </1.8.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.7 (dizzy)
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.1 BitBake User Manual </1.7.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.2 BitBake User Manual </1.7.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
Release Series 1.6 (daisy)
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.1 BitBake User Manual </1.6.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.2 BitBake User Manual </1.6.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`1.6.3 BitBake User Manual </1.6.3/bitbake-user-manual/bitbake-user-manual.html>`


@@ -1,233 +0,0 @@
(function() {
'use strict';
var all_versions = {
'dev': 'dev (3.2)',
'3.1.2': '3.1.2',
'3.0.3': '3.0.3',
'2.7.4': '2.7.4',
};
var all_doctypes = {
'single': 'Individual Webpages',
'mega': "All-in-one 'Mega' Manual",
};
// Simple version comparision
// Return 1 if a > b
// Return -1 if a < b
// Return 0 if a == b
function ver_compare(a, b) {
if (a == "dev") {
return 1;
}
if (a === b) {
return 0;
}
var a_components = a.split(".");
var b_components = b.split(".");
var len = Math.min(a_components.length, b_components.length);
// loop while the components are equal
for (var i = 0; i < len; i++) {
// A bigger than B
if (parseInt(a_components[i]) > parseInt(b_components[i])) {
return 1;
}
// B bigger than A
if (parseInt(a_components[i]) < parseInt(b_components[i])) {
return -1;
}
}
// If one's a prefix of the other, the longer one is greater.
if (a_components.length > b_components.length) {
return 1;
}
if (a_components.length < b_components.length) {
return -1;
}
// Otherwise they are the same.
return 0;
}
function build_version_select(current_series, current_version) {
var buf = ['<select>'];
$.each(all_versions, function(version, title) {
var series = version.substr(0, 3);
if (series == current_series) {
if (version == current_version)
buf.push('<option value="' + version + '" selected="selected">' + title + '</option>');
else
buf.push('<option value="' + version + '">' + title + '</option>');
if (version != current_version)
buf.push('<option value="' + current_version + '" selected="selected">' + current_version + '</option>');
} else {
buf.push('<option value="' + version + '">' + title + '</option>');
}
});
buf.push('</select>');
return buf.join('');
}
function build_doctype_select(current_doctype) {
var buf = ['<select>'];
$.each(all_doctypes, function(doctype, title) {
if (doctype == current_doctype)
buf.push('<option value="' + doctype + '" selected="selected">' +
all_doctypes[current_doctype] + '</option>');
else
buf.push('<option value="' + doctype + '">' + title + '</option>');
});
if (!(current_doctype in all_doctypes)) {
// In case we're browsing a doctype that is not yet in all_doctypes.
buf.push('<option value="' + current_doctype + '" selected="selected">' +
current_doctype + '</option>');
all_doctypes[current_doctype] = current_doctype;
}
buf.push('</select>');
return buf.join('');
}
function navigate_to_first_existing(urls) {
// Navigate to the first existing URL in urls.
var url = urls.shift();
// Web browsers won't redirect file:// urls to file urls using ajax but
// its useful for local testing
if (url.startsWith("file://")) {
window.location.href = url;
return;
}
if (urls.length == 0) {
window.location.href = url;
return;
}
$.ajax({
url: url,
success: function() {
window.location.href = url;
},
error: function() {
navigate_to_first_existing(urls);
}
});
}
function get_docroot_url() {
var url = window.location.href;
var root = DOCUMENTATION_OPTIONS.URL_ROOT;
var urlarray = url.split('/');
// Trim off anything after '/'
urlarray.pop();
var depth = (root.match(/\.\.\//g) || []).length;
for (var i = 0; i < depth; i++) {
urlarray.pop();
}
return urlarray.join('/') + '/';
}
function on_version_switch() {
var selected_version = $(this).children('option:selected').attr('value');
var url = window.location.href;
var current_version = DOCUMENTATION_OPTIONS.VERSION;
var docroot = get_docroot_url()
var new_versionpath = selected_version + '/';
if (selected_version == "dev")
new_versionpath = '';
// dev versions have no version prefix
if (current_version == "dev") {
var new_url = docroot + new_versionpath + url.replace(docroot, "");
var fallback_url = docroot + new_versionpath;
} else {
var new_url = url.replace('/' + current_version + '/', '/' + new_versionpath);
var fallback_url = new_url.replace(url.replace(docroot, ""), "");
}
console.log(get_docroot_url())
console.log(url + " to url " + new_url);
console.log(url + " to fallback " + fallback_url);
if (new_url != url) {
navigate_to_first_existing([
new_url,
fallback_url,
'https://www.yoctoproject.org/docs/',
]);
}
}
function on_doctype_switch() {
var selected_doctype = $(this).children('option:selected').attr('value');
var url = window.location.href;
if (selected_doctype == 'mega') {
var docroot = get_docroot_url()
var current_version = DOCUMENTATION_OPTIONS.VERSION;
// Assume manuals before 3.2 are using old docbook mega-manual
if (ver_compare(current_version, "3.2") < 0) {
var new_url = docroot + "mega-manual/mega-manual.html";
} else {
var new_url = docroot + "singleindex.html";
}
} else {
var new_url = url.replace("singleindex.html", "index.html")
}
if (new_url != url) {
navigate_to_first_existing([
new_url,
'https://www.yoctoproject.org/docs/',
]);
}
}
// Returns the current doctype based upon the url
function doctype_segment_from_url(url) {
if (url.includes("singleindex") || url.includes("mega-manual"))
return "mega";
return "single";
}
$(document).ready(function() {
var release = DOCUMENTATION_OPTIONS.VERSION;
var current_doctype = doctype_segment_from_url(window.location.href);
var current_series = release.substr(0, 3);
var version_select = build_version_select(current_series, release);
$('.version_switcher_placeholder').html(version_select);
$('.version_switcher_placeholder select').bind('change', on_version_switch);
var doctype_select = build_doctype_select(current_doctype);
$('.doctype_switcher_placeholder').html(doctype_select);
$('.doctype_switcher_placeholder select').bind('change', on_doctype_switch);
if (ver_compare(release, "3.1") < 0) {
$('#outdated-warning').html('Version ' + release + ' of the project is now considered obsolete, please select and use a more recent version');
$('#outdated-warning').css('padding', '.5em');
} else if (release != "dev") {
$.each(all_versions, function(version, title) {
var series = version.substr(0, 3);
if (series == current_series && version != release) {
$('#outdated-warning').html('This document is for outdated version ' + release + ', you should select the latest release version in this series, ' + version + '.');
$('#outdated-warning').css('padding', '.5em');
}
});
}
});
})();


@@ -1,162 +0,0 @@
/*
SPDX-License-Identifier: CC-BY-2.0-UK
*/
body {
font-family: Verdana, Sans, sans-serif;
margin: 0em auto;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
em {
font-weight: bold;
}
.pre {
font-size: medium;
font-family: Courier, monospace;
}
.wy-nav-content a {
text-decoration: underline;
color: #444;
background: transparent;
}
.wy-nav-content a:hover {
text-decoration: underline;
background-color: #dedede;
}
.wy-nav-content a:visited {
color: #444;
}
[alt='Permalink'] { color: #eee; }
[alt='Permalink']:hover { color: black; }
@media screen {
/* content column
*
* RTD theme's default is 800px as max width for the content, but we have
* tables with tons of columns, which need the full width of the view-port.
*/
.wy-nav-content{max-width: none; }
/* inline literal: drop the borderbox, padding and red color */
code, .rst-content tt, .rst-content code {
color: inherit;
border: none;
padding: unset;
background: inherit;
font-size: 85%;
}
.rst-content tt.literal,.rst-content tt.literal,.rst-content code.literal {
color: inherit;
}
/* Admonition should be gray, not blue or green */
.rst-content .note .admonition-title,
.rst-content .tip .admonition-title,
.rst-content .warning .admonition-title,
.rst-content .caution .admonition-title,
.rst-content .important .admonition-title {
background: #f0f0f2;
color: #00557D;
}
.rst-content .note,
.rst-content .tip,
.rst-content .important,
.rst-content .warning,
.rst-content .caution {
background: #f0f0f2;
}
/* Remove the icon in front of note/tip element, and before the logo */
.icon-home:before, .rst-content .admonition-title:before {
display: none
}
/* a custom informalexample container is used in some doc */
.informalexample {
border: 1px solid;
border-color: #aaa;
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
/* Remove the blue background in the top left corner, around the logo */
.wy-side-nav-search {
background: inherit;
}
}


@@ -1,35 +1,48 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Ansell
# Copyright (C) 2006 Tim Amsell
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# Please Note:
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
str
basestring
)
MUTABLE = "__mutable__"
class COWMeta(type):
pass
class COWDictMeta(COWMeta):
__warn__ = False
__hasmutable__ = False
@@ -38,20 +51,17 @@ class COWDictMeta(COWMeta):
def __str__(cls):
# FIXME: I have magic numbers!
return "<COWDict Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
__repr__ = __str__
def cow(cls):
class C(cls):
__count__ = cls.__count__ + 1
return C
copy = cow
__call__ = cow
def __setitem__(cls, key, value):
if value is not None and not isinstance(value, ImmutableTypes):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
@@ -78,9 +88,8 @@ class COWDictMeta(COWMeta):
return value
__getmarker__ = []
def __getreadonly__(cls, key, default=__getmarker__):
"""
"""\
Get a value (even if mutable) which you promise not to change.
"""
return cls.__getitem__(key, default, True)
@@ -107,7 +116,7 @@ class COWDictMeta(COWMeta):
cls.__setitem__(key, cls.__marker__)
def __revertitem__(cls, key):
if key not in cls.__dict__:
if not cls.__dict__.has_key(key):
key += MUTABLE
delattr(cls, key)
@@ -143,33 +152,28 @@ class COWDictMeta(COWMeta):
yield value
if type == "items":
yield (key, value)
return
raise StopIteration()
def iterkeys(cls):
return cls.iter("keys")
def itervalues(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
return cls.iter("values", readonly)
def iteritems(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
return cls.iter("items", readonly)
class COWSetMeta(COWDictMeta):
def __str__(cls):
# FIXME: I have magic numbers!
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) -3)
__repr__ = __str__
def cow(cls):
class C(cls):
__count__ = cls.__count__ + 1
return C
def add(cls, value):
@@ -179,7 +183,7 @@ class COWSetMeta(COWDictMeta):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return repr(hash(value)) in COWDictMeta
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
@@ -187,11 +191,133 @@ class COWSetMeta(COWDictMeta):
def iteritems(cls):
raise TypeError("sets don't have 'items'")
# These are the actual classes you use!
class COWDictBase(metaclass=COWDictMeta):
class COWDictBase(object):
__metaclass__ = COWDictMeta
__count__ = 0
class COWSetBase(metaclass=COWSetMeta):
class COWSetBase(object):
__metaclass__ = COWSetMeta
__count__ = 0
if __name__ == "__main__":
import sys
COWDictBase.__warn__ = sys.stderr
a = COWDictBase()
print("a", a)
a['a'] = 'a'
a['b'] = 'b'
a['dict'] = {}
b = a.copy()
print("b", b)
b['c'] = 'b'
print()
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems():
print(x)
print()
b['dict']['a'] = 'b'
b['a'] = 'c'
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems():
print(x)
print()
try:
b['dict2']
except KeyError as e:
print("Okay!")
a['set'] = COWSetBase()
a['set'].add("o1")
a['set'].add("o1")
a['set'].add("o2")
print("a", a)
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
for x in b['set'].itervalues():
print(x)
print()
b['set'].add('o3')
print("a", a)
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
for x in b['set'].itervalues():
print(x)
print()
a['set2'] = set()
a['set2'].add("o1")
a['set2'].add("o1")
a['set2'].add("o2")
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
del b['b']
try:
print(b['b'])
except KeyError:
print("Yay! deleted key raises error")
if b.has_key('b'):
print("Boo!")
else:
print("Yay - has_key with delete works!")
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
b.__revertitem__('b')
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
b.__revertitem__('dict')
print("a", a)
for x in a.iteritems():
print(x)
print("--")
print("b", b)
for x in b.iteritems(readonly=True):
print(x)
print()
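
As a quick illustration of the copy-on-write behaviour described in the file header above, the sketch below assumes the lib/bb/COW.py module shown in this diff is importable as bb.COW and only uses names that appear in the file (the built-in __main__ demo exercises the same API at more length).

    from bb.COW import COWDictBase

    parent = COWDictBase()         # calling the base yields a new COW level
    parent['greeting'] = 'hello'

    child = parent.copy()          # copy-on-write: nothing is duplicated yet
    child['greeting'] = 'goodbye'  # the write lands only in the child level

    print(parent['greeting'])      # -> hello   (parent is untouched)
    print(child['greeting'])       # -> goodbye (child shadows the parent value)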

View File

@@ -1,3 +1,5 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Build System Python Library
#
@@ -6,14 +8,24 @@
#
# Based on Gentoo's portage.py.
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "2.0.1"
__version__ = "1.15.2"
import sys
if sys.version_info < (3, 6, 0):
raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
if sys.version_info < (2, 6, 0):
raise RuntimeError("Sorry, python 2.6.0 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -21,8 +33,8 @@ class BBHandledException(Exception):
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user, its assumed the earlier event provided the relevant information.
"""
pass
@@ -35,32 +47,15 @@ class NullHandler(logging.Handler):
def emit(self, record):
pass
class BBLoggerMixin(object):
def __init__(self, *args, **kwargs):
# Does nothing to allow calling super() from derived classes
pass
def setup_bblogger(self, name):
Logger = logging.getLoggerClass()
class BBLogger(Logger):
def __init__(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self._debug_helper
def _debug_helper(self, *args, **kwargs):
return self.bbdebug(1, *args, **kwargs)
def debug2(self, *args, **kwargs):
return self.bbdebug(2, *args, **kwargs)
def debug3(self, *args, **kwargs):
return self.bbdebug(3, *args, **kwargs)
self.debug = self.bbdebug
Logger.__init__(self, name)
def bbdebug(self, level, msg, *args, **kwargs):
loglevel = logging.DEBUG - level + 1
if not bb.event.worker_pid:
if self.name in bb.msg.loggerDefaultDomains and loglevel > (bb.msg.loggerDefaultDomains[self.name]):
return
if loglevel < bb.msg.loggerDefaultLogLevel:
return
return self.log(loglevel, msg, *args, **kwargs)
return self.log(logging.DEBUG - level + 1, msg, *args, **kwargs)
def plain(self, msg, *args, **kwargs):
return self.log(logging.INFO + 1, msg, *args, **kwargs)
@@ -68,118 +63,53 @@ class BBLoggerMixin(object):
def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
def warnonce(self, msg, *args, **kwargs):
return self.log(logging.WARNING - 1, msg, *args, **kwargs)
def erroronce(self, msg, *args, **kwargs):
return self.log(logging.ERROR - 1, msg, *args, **kwargs)
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
self.setup_bblogger(name)
super().__init__(name, *args, **kwargs)
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
class BBLoggerAdapter(logging.LoggerAdapter, BBLoggerMixin):
def __init__(self, logger, *args, **kwargs):
self.setup_bblogger(logger.name)
super().__init__(logger, *args, **kwargs)
if sys.version_info < (3, 6):
# These properties were added in Python 3.6. Add them in older versions
# for compatibility
@property
def manager(self):
return self.logger.manager
@manager.setter
def manager(self, value):
self.logger.manager = value
@property
def name(self):
return self.logger.name
def __repr__(self):
logger = self.logger
level = logger.getLevelName(logger.getEffectiveLevel())
return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
logging.LoggerAdapter = BBLoggerAdapter
logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)
mainlogger = logging.getLogger("BitBake.Main")
class PrefixLoggerAdapter(logging.LoggerAdapter):
def __init__(self, prefix, logger):
super().__init__(logger, {})
self.__msg_prefix = prefix
def process(self, msg, kwargs):
return "%s%s" %(self.__msg_prefix, msg), kwargs
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level:
bb.msg.set_debug_level(level)
from bb import fetch2 as fetch
sys.modules['bb.fetch'] = sys.modules['bb.fetch2']
# Messaging convenience functions
def plain(*args):
mainlogger.plain(''.join(args))
logger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, str):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
if isinstance(lvl, basestring):
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.bbdebug(lvl, ''.join(args))
logger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))
logger.info(''.join(args))
#
# A higher prioity note which will show on the console but isn't a warning
#
# Something is happening the user should be aware of but they probably did
# something to make it happen
#
def verbnote(*args):
mainlogger.verbnote(''.join(args))
#
# Warnings - things the user likely needs to pay attention to and fix
#
def warn(*args):
mainlogger.warning(''.join(args))
logger.warn(''.join(args))
def warnonce(*args):
mainlogger.warnonce(''.join(args))
def error(*args):
logger.error(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def fatal(*args):
logger.critical(''.join(args))
sys.exit(1)
def erroronce(*args):
mainlogger.erroronce(''.join(args))
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
def deprecated(func, name=None, advice=""):
"""This is a decorator which can be used to mark functions
as deprecated. It will result in a warning being emitted
as deprecated. It will result in a warning being emmitted
when the function is used."""
import warnings
@@ -216,3 +146,6 @@ def deprecate_import(current, modulename, fromlist, renames = None):
setattr(sys.modules[current], newname, newobj)
deprecate_import(__name__, "bb.fetch", ("MalformedUrl", "encodeurl", "decodeurl"))
deprecate_import(__name__, "bb.utils", ("mkdirhier", "movefile", "copyfile", "which"))
deprecate_import(__name__, "bb.utils", ["vercmp_string"], ["vercmp"])

View File

@@ -1,33 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import itertools
import json
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield ''.join((msg, "\n"))
else:
yield ''.join((json.dumps({
'chunk-stream': None
}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
yield ''.join(itertools.chain(m, "\n"))
yield "\n"
from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
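
The comment above describes the wire format: a message that fits in one chunk goes out as a single JSON line, anything larger is announced with a {'chunk-stream': None} line, streamed as (max_chunk - 1)-sized lines and terminated by an empty line. A small sketch of that behaviour, assuming the pre-removal bb.asyncrpc module shown here is importable:

    import json
    from bb.asyncrpc import chunkify, DEFAULT_MAX_CHUNK

    small = json.dumps({'ping': {}})
    print(list(chunkify(small, DEFAULT_MAX_CHUNK)))
    # -> ['{"ping": {}}\n']  (fits into a single line)

    big = json.dumps({'data': 'x' * (2 * DEFAULT_MAX_CHUNK)})
    chunks = list(chunkify(big, DEFAULT_MAX_CHUNK))
    assert json.loads(chunks[0]) == {'chunk-stream': None}  # stream announcement
    assert all(c.endswith("\n") for c in chunks)            # every piece is a line
    assert chunks[-1] == "\n"                                # empty line ends the stream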

View File

@@ -1,174 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import socket
import sys
from . import chunkify, DEFAULT_MAX_CHUNK
class AsyncClient(object):
def __init__(self, proto_name, proto_version, logger, timeout=30):
self.reader = None
self.writer = None
self.max_chunk = DEFAULT_MAX_CHUNK
self.proto_name = proto_name
self.proto_version = proto_version
self.logger = logger
self.timeout = timeout
async def connect_tcp(self, address, port):
async def connect_sock():
return await asyncio.open_connection(address, port)
self._connect_sock = connect_sock
async def connect_unix(self, path):
async def connect_sock():
return await asyncio.open_unix_connection(path)
self._connect_sock = connect_sock
async def setup_connection(self):
s = '%s %s\n\n' % (self.proto_name, self.proto_version)
self.writer.write(s.encode("utf-8"))
await self.writer.drain()
async def connect(self):
if self.reader is None or self.writer is None:
(self.reader, self.writer) = await self._connect_sock()
await self.setup_connection()
async def close(self):
self.reader = None
if self.writer is not None:
self.writer.close()
self.writer = None
async def _send_wrapper(self, proc):
count = 0
while True:
try:
await self.connect()
return await proc()
except (
OSError,
ConnectionError,
json.JSONDecodeError,
UnicodeDecodeError,
) as e:
self.logger.warning("Error talking to server: %s" % e)
if count >= 3:
if not isinstance(e, ConnectionError):
raise ConnectionError(str(e))
raise e
await self.close()
count += 1
async def send_message(self, msg):
async def get_line():
try:
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for server")
if not line:
raise ConnectionError("Connection closed")
line = line.decode("utf-8")
if not line.endswith("\n"):
raise ConnectionError("Bad message %r" % (line))
return line
async def proc():
for c in chunkify(json.dumps(msg), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
l = await get_line()
m = json.loads(l)
if m and "chunk-stream" in m:
lines = []
while True:
l = (await get_line()).rstrip("\n")
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
return m
return await self._send_wrapper(proc)
async def ping(self):
return await self.send_message(
{'ping': {}}
)
class Client(object):
def __init__(self):
self.client = self._get_async_client()
self.loop = asyncio.new_event_loop()
# Override any pre-existing loop.
# Without this, the PR server export selftest triggers a hang
# when running with Python 3.7. The drawback is that there is
# potential for issues if the PR and hash equiv (or some new)
# clients need to both be instantiated in the same process.
# This should be revisited if/when Python 3.9 becomes the
# minimum required version for BitBake, as it seems not
# required (but harmless) with it.
asyncio.set_event_loop(self.loop)
self._add_methods('connect_tcp', 'ping')
@abc.abstractmethod
def _get_async_client(self):
pass
def _get_downcall_wrapper(self, downcall):
def wrapper(*args, **kwargs):
return self.loop.run_until_complete(downcall(*args, **kwargs))
return wrapper
def _add_methods(self, *methods):
for m in methods:
downcall = getattr(self.client, m)
setattr(self, m, self._get_downcall_wrapper(downcall))
def connect_unix(self, path):
# AF_UNIX has path length issues so chdir here to workaround
cwd = os.getcwd()
try:
os.chdir(os.path.dirname(path))
self.loop.run_until_complete(self.client.connect_unix(os.path.basename(path)))
self.loop.run_until_complete(self.client.connect())
finally:
os.chdir(cwd)
@property
def max_chunk(self):
return self.client.max_chunk
@max_chunk.setter
def max_chunk(self, value):
self.client.max_chunk = value
def close(self):
self.loop.run_until_complete(self.client.close())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
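
Client is a thin synchronous wrapper that drives an AsyncClient on its own event loop; concrete users (the PR and hash equivalence clients) subclass it and supply _get_async_client(). A hypothetical sketch, with the protocol name, port and logger chosen purely for illustration and a matching server assumed to be listening:

    import logging
    from bb.asyncrpc import AsyncClient, Client

    class ExampleClient(Client):                  # illustrative subclass only
        def _get_async_client(self):
            return AsyncClient('EXAMPLE-PROTOCOL', '1.0',
                               logging.getLogger('BitBake.Example'))

    client = ExampleClient()
    client.connect_tcp('localhost', 8686)         # arbitrary address for the sketch
    print(client.ping())                          # answered by the server's handle_ping
    client.close()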

View File

@@ -1,295 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import abc
import asyncio
import json
import os
import signal
import socket
import sys
import multiprocessing
from . import chunkify, DEFAULT_MAX_CHUNK
class ClientError(Exception):
pass
class ServerError(Exception):
pass
class AsyncServerConnection(object):
def __init__(self, reader, writer, proto_name, logger):
self.reader = reader
self.writer = writer
self.proto_name = proto_name
self.max_chunk = DEFAULT_MAX_CHUNK
self.handlers = {
'chunk-stream': self.handle_chunk,
'ping': self.handle_ping,
}
self.logger = logger
async def process_requests(self):
try:
self.addr = self.writer.get_extra_info('peername')
self.logger.debug('Client %r connected' % (self.addr,))
# Read protocol and version
client_protocol = await self.reader.readline()
if client_protocol is None:
return
(client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
if client_proto_name != self.proto_name:
self.logger.debug('Rejecting invalid protocol %s' % (self.proto_name))
return
self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
if not self.validate_proto_version():
self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
return
# Read headers. Currently, no headers are implemented, so look for
# an empty line to signal the end of the headers
while True:
line = await self.reader.readline()
if line is None:
return
line = line.decode('utf-8').rstrip()
if not line:
break
# Handle messages
while True:
d = await self.read_message()
if d is None:
break
await self.dispatch_message(d)
await self.writer.drain()
except ClientError as e:
self.logger.error(str(e))
finally:
self.writer.close()
async def dispatch_message(self, msg):
for k in self.handlers.keys():
if k in msg:
self.logger.debug('Handling %s' % k)
await self.handlers[k](msg[k])
return
raise ClientError("Unrecognized command %r" % msg)
def write_message(self, msg):
for c in chunkify(json.dumps(msg), self.max_chunk):
self.writer.write(c.encode('utf-8'))
async def read_message(self):
l = await self.reader.readline()
if not l:
return None
try:
message = l.decode('utf-8')
if not message.endswith('\n'):
return None
return json.loads(message)
except (json.JSONDecodeError, UnicodeDecodeError) as e:
self.logger.error('Bad message from client: %r' % message)
raise e
async def handle_chunk(self, request):
lines = []
try:
while True:
l = await self.reader.readline()
l = l.rstrip(b"\n").decode("utf-8")
if not l:
break
lines.append(l)
msg = json.loads(''.join(lines))
except (json.JSONDecodeError, UnicodeDecodeError) as e:
self.logger.error('Bad message from client: %r' % lines)
raise e
if 'chunk-stream' in msg:
raise ClientError("Nested chunks are not allowed")
await self.dispatch_message(msg)
async def handle_ping(self, request):
response = {'alive': True}
self.write_message(response)
class AsyncServer(object):
def __init__(self, logger):
self._cleanup_socket = None
self.logger = logger
self.start = None
self.address = None
self.loop = None
def start_tcp_server(self, host, port):
def start_tcp():
self.server = self.loop.run_until_complete(
asyncio.start_server(self.handle_client, host, port)
)
for s in self.server.sockets:
self.logger.debug('Listening on %r' % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])
self.start = start_tcp
def start_unix_server(self, path):
def cleanup():
os.unlink(path)
def start_unix():
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(path))
self.server = self.loop.run_until_complete(
asyncio.start_unix_server(self.handle_client, os.path.basename(path))
)
finally:
os.chdir(cwd)
self.logger.debug('Listening on %r' % path)
self._cleanup_socket = cleanup
self.address = "unix://%s" % os.path.abspath(path)
self.start = start_unix
@abc.abstractmethod
def accept_client(self, reader, writer):
pass
async def handle_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
try:
client = self.accept_client(reader, writer)
await client.process_requests()
except Exception as e:
import traceback
self.logger.error('Error from client: %s' % str(e), exc_info=True)
traceback.print_exc()
writer.close()
self.logger.debug('Client disconnected')
def run_loop_forever(self):
try:
self.loop.run_forever()
except KeyboardInterrupt:
pass
def signal_handler(self):
self.logger.debug("Got exit signal")
self.loop.stop()
def _serve_forever(self):
try:
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
self.run_loop_forever()
self.server.close()
self.loop.run_until_complete(self.server.wait_closed())
self.logger.debug('Server shutting down')
finally:
if self._cleanup_socket is not None:
self._cleanup_socket()
def serve_forever(self):
"""
Serve requests in the current process
"""
# Create loop and override any loop that may have existed in
# a parent process. It is possible that the usecases of
# serve_forever might be constrained enough to allow using
# get_event_loop here, but better safe than sorry for now.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.start()
self._serve_forever()
def serve_as_process(self, *, prefunc=None, args=()):
"""
Serve requests in a child process
"""
def run(queue):
# Create loop and override any loop that may have existed
# in a parent process. Without doing this and instead
# using get_event_loop, at the very minimum the hashserv
# unit tests will hang when running the second test.
# This happens since get_event_loop in the spawned server
# process for the second testcase ends up with the loop
# from the hashserv client created in the unit test process
# when running the first testcase. The problem is somewhat
# more general, though, as any potential use of asyncio in
# Cooker could create a loop that needs to replaced in this
# new process.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
try:
self.start()
finally:
queue.put(self.address)
queue.close()
if prefunc is not None:
prefunc(self, *args)
self._serve_forever()
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
queue = multiprocessing.Queue()
# Temporarily block SIGTERM. The server process will inherit this
# block which will ensure it doesn't receive the SIGTERM until the
# handler is ready for it
mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
try:
self.process = multiprocessing.Process(target=run, args=(queue,))
self.process.start()
self.address = queue.get()
queue.close()
queue.join_thread()
return self.process
finally:
signal.pthread_sigmask(signal.SIG_SETMASK, mask)
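
AsyncServer is likewise an abstract base: a concrete service provides accept_client() returning an AsyncServerConnection subclass, which must also supply the validate_proto_version() used during the handshake above. A hypothetical sketch of the smallest ping-only server; the class names and port are illustrative:

    import logging
    from bb.asyncrpc import AsyncServer, AsyncServerConnection

    logger = logging.getLogger('BitBake.Example')

    class ExampleConnection(AsyncServerConnection):   # illustrative subclass only
        def validate_proto_version(self):
            return self.proto_version >= (1, 0)       # accept clients >= 1.0

    class ExampleServer(AsyncServer):
        def accept_client(self, reader, writer):
            return ExampleConnection(reader, writer, 'EXAMPLE-PROTOCOL', logger)

    server = ExampleServer(logger)
    server.start_tcp_server('localhost', 8686)        # arbitrary port
    process = server.serve_as_process()               # asyncio loop runs in a child process
    print('listening on', server.address)
    process.terminate()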

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -1,3 +1,5 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Extra RecipeInfo will be all defined in this file. Currently,
# Only Hob (Image Creator) Requests some extra fields. So
@@ -10,8 +12,18 @@
# Copyright (C) 2011, Intel Corporation. All rights reserved.
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb.cache import RecipeInfoCommon
@@ -23,22 +35,12 @@ class HobRecipeInfo(RecipeInfoCommon):
# such as (bb_cache.dat, bb_extracache_hob.dat)
cachefile = "bb_extracache_" + classname +".dat"
# override this member with the list of extra cache fields
# that this class will provide
cachefields = ['summary', 'license', 'section',
'description', 'homepage', 'bugtracker',
'prevision', 'files_info']
def __init__(self, filename, metadata):
self.summary = self.getvar('SUMMARY', metadata)
self.license = self.getvar('LICENSE', metadata)
self.section = self.getvar('SECTION', metadata)
self.description = self.getvar('DESCRIPTION', metadata)
self.homepage = self.getvar('HOMEPAGE', metadata)
self.bugtracker = self.getvar('BUGTRACKER', metadata)
self.prevision = self.getvar('PR', metadata)
self.files_info = self.getvar('FILES_INFO', metadata)
@classmethod
def init_cacheData(cls, cachedata):
@@ -47,17 +49,9 @@ class HobRecipeInfo(RecipeInfoCommon):
cachedata.license = {}
cachedata.section = {}
cachedata.description = {}
cachedata.homepage = {}
cachedata.bugtracker = {}
cachedata.prevision = {}
cachedata.files_info = {}
def add_cacheData(self, cachedata, fn):
cachedata.summary[fn] = self.summary
cachedata.license[fn] = self.license
cachedata.section[fn] = self.section
cachedata.description[fn] = self.description
cachedata.homepage[fn] = self.homepage
cachedata.bugtracker[fn] = self.bugtracker
cachedata.prevision[fn] = self.prevision
cachedata.files_info[fn] = self.files_info
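
HobRecipeInfo above is the template for any "extra" recipe cache: subclass RecipeInfoCommon, list the fields in cachefields, pull them from the metadata in __init__, and mirror them into the cooker's cache data via init_cacheData()/add_cacheData(). A hypothetical class following that pattern (how the cooker is told to load such a class is outside this snippet):

    from bb.cache import RecipeInfoCommon

    class LicenseRecipeInfo(RecipeInfoCommon):    # illustrative, not part of bitbake
        classname = "LicenseRecipeInfo"
        cachefile = "bb_extracache_" + classname + ".dat"

        # extra cache fields this class provides
        cachefields = ['license', 'license_flags']

        def __init__(self, filename, metadata):
            self.license = self.getvar('LICENSE', metadata)
            self.license_flags = self.getvar('LICENSE_FLAGS', metadata)

        @classmethod
        def init_cacheData(cls, cachedata):
            cachedata.license = {}
            cachedata.license_flags = {}

        def add_cacheData(self, cachedata, fn):
            cachedata.license[fn] = self.license
            cachedata.license_flags[fn] = self.license_flags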

View File

@@ -1,144 +0,0 @@
# Local file checksum cache implementation
#
# Copyright (C) 2012 Intel Corporation
#
# SPDX-License-Identifier: GPL-2.0-only
#
import glob
import operator
import os
import stat
import bb.utils
import logging
import re
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
cache = {}
def cached_mtime(self, f):
if f not in self.cache:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def cached_mtime_noerror(self, f):
if f not in self.cache:
try:
self.cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return self.cache[f]
def update_mtime(self, f):
self.cache[f] = os.stat(f)[stat.ST_MTIME]
return self.cache[f]
def clear(self):
self.cache.clear()
# Checksum + mtime cache (persistent)
class FileChecksumCache(MultiProcessCache):
cache_file_name = "local_file_checksum_cache.dat"
CACHE_VERSION = 1
def __init__(self):
self.mtime_cache = FileMtimeCache()
MultiProcessCache.__init__(self)
def get_checksum(self, f):
f = os.path.normpath(f)
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
(mtime, hashval) = entry
if cmtime == mtime:
return hashval
else:
bb.debug(2, "file %s changed mtime, recompute checksum" % f)
hashval = bb.utils.md5_file(f)
self.cachedata_extras[0][f] = (cmtime, hashval)
return hashval
def merge_data(self, source, dest):
for h in source[0]:
if h in dest:
(smtime, _) = source[0][h]
(dmtime, _) = dest[0][h]
if smtime > dmtime:
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]
def get_checksums(self, filelist, pn, localdirsexclude):
"""Get checksums for a list of files"""
def checksum_file(f):
try:
checksum = self.get_checksum(f)
except OSError as e:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
#
# Changing the format of file-checksums is problematic as both OE and Bitbake have
# knowledge of them. We need to encode a new piece of data, the portion of the path
# we care about from a checksum perspective. This means that files that change subdirectory
# are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
# the path. The filesystem handles it but it gives us a marker to know which subsection
# of the path to cache.
#
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
pth = pth.rstrip("/")
dirchecksums = []
for root, dirs, files in os.walk(pth, topdown=True):
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
for name in files:
fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
for pth in filelist_regex.split(filelist):
if not pth:
continue
pth = pth.strip()
if not pth:
continue
exist = pth.split(":")[1]
if exist == "False":
continue
pth = pth.split(":")[0]
if '*' in pth:
# Handle globs
for f in glob.glob(pth):
if os.path.isdir(f):
if not os.path.islink(f):
checksums.extend(checksum_dir(f))
else:
checksum = checksum_file(f)
if checksum:
checksums.append((f, checksum))
elif os.path.isdir(pth):
if not os.path.islink(pth):
checksums.extend(checksum_dir(pth))
else:
checksum = checksum_file(pth)
if checksum:
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums
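
The comment inside get_checksums() above explains the "/./" marker convention: only the part of the path after the marker feeds the task hash, so relocating the parent directory does not invalidate checksums. A small worked example of the same replace() call, with paths invented for illustration:

    import os

    pth = "/srv/layers/meta-example/recipe/files"           # directory being walked
    root = "/srv/layers/meta-example/recipe/files/patches"  # current os.walk() root
    name = "fix-build.patch"

    fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
    print(fullpth)
    # -> /srv/layers/meta-example/recipe/files/./patches/fix-build.patch
    # Everything after "/./" is the portion that matters for the checksum, so
    # moving the files/ directory somewhere else leaves the hash unchanged.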

View File

@@ -1,43 +1,21 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
BitBake code parser
Parses actual code (i.e. python and shell) for functions and in-line
expressions. Used mainly to determine dependencies on other functions
and variables within the BitBake metadata. Also provides a cache for
this information in order to speed up processing.
(Not to be confused with the code that parses the metadata itself,
see lib/bb/parse/ for that).
NOTE: if you change how the parsers gather information you will almost
certainly need to increment CodeParserCache.CACHE_VERSION below so that
any existing codeparser cache gets invalidated. Additionally you'll need
to increment __cache_version__ in cache.py in order to ensure that old
recipe caches don't trigger "Taskhash mismatch" errors.
"""
import ast
import sys
import codegen
import logging
import bb.pysh as pysh
import os.path
import bb.utils, bb.data
import hashlib
from itertools import chain
from bb.pysh import pyshyacc, pyshlex
from bb.cache import MultiProcessCache
from pysh import pyshyacc, pyshlex, sherrors
logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def bbhash(s):
return hashlib.sha256(s.encode("utf-8")).hexdigest()
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
@@ -50,135 +28,137 @@ def check_indent(codestr):
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
if codestr[0] == "\n":
# Since we're adding a line, we need to remove one line of any empty padding
# to ensure line numbers are correct
codestr = codestr[1:]
return "if 1:\n" + codestr
return codestr
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
pythonparsecache = {}
shellparsecache = {}
pythonparsecacheextras = {}
shellparsecacheextras = {}
class SetCache(object):
def __init__(self):
self.setcache = {}
def internSet(self, items):
new = []
for i in items:
new.append(sys.intern(i))
s = frozenset(new)
h = hash(s)
if h in self.setcache:
return self.setcache[h]
self.setcache[h] = s
return s
codecache = SetCache()
class pythonCacheLine(object):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
def __getstate__(self):
return (self.refs, self.execs, self.contains)
def __setstate__(self, state):
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
def __repr__(self):
return " ".join([str(self.refs), str(self.execs), str(self.contains)])
class shellCacheLine(object):
def __init__(self, execs):
self.execs = codecache.internSet(execs)
def __getstate__(self):
return (self.execs)
def __setstate__(self, state):
(execs) = state
self.__init__(execs)
def __hash__(self):
return hash(self.execs)
def __repr__(self):
return str(self.execs)
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
# NOTE: you must increment this if you change how the parsers gather information,
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 11
def __init__(self):
MultiProcessCache.__init__(self)
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
self.pythoncacheextras = self.cachedata_extras[0]
self.shellcacheextras = self.cachedata_extras[1]
# To avoid duplication in the codeparser cache, keep
# a lookup of hashes of objects we already have
self.pythoncachelines = {}
self.shellcachelines = {}
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
self.pythoncachelines[h] = cacheline
return cacheline
def newShellCacheLine(self, execs):
cacheline = shellCacheLine(execs)
h = hash(cacheline)
if h in self.shellcachelines:
return self.shellcachelines[h]
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
def create_cachedata(self):
data = [{}, {}]
return data
codeparsercache = CodeParserCache()
def parser_cachefile(d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
return cachefile
def parser_cache_init(d):
codeparsercache.init_cache(d)
global pythonparsecache
global shellparsecache
def parser_cache_save():
codeparsercache.save_extras()
cachefile = parser_cachefile(d)
if not cachefile:
return
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except:
return
if version != PARSERCACHE_VERSION:
return
pythonparsecache = data[0]
shellparsecache = data[1]
def parser_cache_save(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
shellcache = {}
pythoncache = {}
lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
shellcache = shellparsecacheextras
pythoncache = pythonparsecacheextras
p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def internSet(items):
new = set()
for i in items:
new.add(intern(i))
return new
def parser_cache_savemerge(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock")
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != PARSERCACHE_VERSION:
data = [{}, {}]
for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
f = os.path.join(os.path.dirname(cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = [{}, {}], None
if version != PARSERCACHE_VERSION:
continue
for h in extradata[0]:
if h not in data[0]:
data[0][h] = extradata[0][h]
for h in extradata[1]:
if h not in data[1]:
data[1][h] = extradata[1][h]
os.unlink(f)
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
# memory. This is worth any performance hit from this loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = internSet(data[0][h]["refs"])
data[0][h]["execs"] = internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = internSet(data[1][h]["execs"])
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([data, PARSERCACHE_VERSION])
bb.utils.unlockfile(glf)
def parser_cache_savemerge():
codeparsercache.save_merge()
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -193,19 +173,11 @@ class BufferedLogger(Logger):
def flush(self):
for record in self.buffer:
if self.target.isEnabledFor(record.levelno):
self.target.handle(record)
self.target.handle(record)
self.buffer = []
class DummyLogger():
def flush(self):
return
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
containsfuncs = ("bb.utils.contains", "base_contains")
containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
@@ -218,43 +190,17 @@ class PythonParser():
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
self.log.debug2('Failed to convert function and argument to source form')
self.log.debug(2, 'Failed to convert function and argument to source form')
else:
self.log.debug(self.unhandled_message % (funcstr, argstr))
self.log.debug(1, self.unhandled_message % (funcstr, argstr))
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if name in self.getvars:
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
self.references.add(varname)
self.var_references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
self.execs |= parser.execs
for varname in parser.contains:
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
@@ -277,66 +223,49 @@ class PythonParser():
break
def __init__(self, name, log):
self.name = name
self.var_references = set()
self.var_execs = set()
self.contains = {}
self.execs = set()
self.references = set()
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
def parse_python(self, node):
h = hash(str(node))
if h in pythonparsecache:
self.references = pythonparsecache[h]["refs"]
self.execs = pythonparsecache[h]["execs"]
return
h = bbhash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
self.execs = set(codeparsercache.pythoncache[h].execs)
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
if h in pythonparsecacheextras:
self.references = pythonparsecacheextras[h]["refs"]
self.execs = pythonparsecacheextras[h]["execs"]
return
if h in codeparsercache.pythoncacheextras:
self.references = set(codeparsercache.pythoncacheextras[h].refs)
self.execs = set(codeparsercache.pythoncacheextras[h].execs)
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
for n in ast.walk(code):
if n.__class__.__name__ == "Call":
self.visit_Call(n)
self.execs.update(self.var_execs)
self.references.update(self.var_references)
self.references.update(self.var_execs)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
pythonparsecacheextras[h] = {}
pythonparsecacheextras[h]["refs"] = self.references
pythonparsecacheextras[h]["execs"] = self.execs
class ShellParser():
def __init__(self, name, log):
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
self._name = name
self._log = log
# Defer init as expensive
self.log = DummyLogger()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
@@ -345,34 +274,29 @@ class ShellParser():
commands it executes.
"""
h = bbhash(str(value))
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = set(codeparsercache.shellcache[h].execs)
if h in shellparsecache:
self.execs = shellparsecache[h]["execs"]
return self.execs
if h in codeparsercache.shellcacheextras:
self.execs = set(codeparsercache.shellcacheextras[h].execs)
if h in shellparsecacheextras:
self.execs = shellparsecacheextras[h]["execs"]
return self.execs
# Need to parse so take the hit on the real log buffer
self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
codeparsercache.shellcacheextras[h] = codeparsercache.newShellCacheLine(self.execs)
return self.execs
def _parse_shell(self, value):
try:
tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
except Exception:
bb.error('Error during parse shell code, the last 5 lines are:\n%s' % '\n'.join(value.split('\n')[-5:]))
raise
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
self.process_tokens(tokens)
for token in tokens:
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
shellparsecacheextras[h] = {}
shellparsecacheextras[h]["execs"] = self.execs
return self.execs
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
@@ -418,24 +342,18 @@ class ShellParser():
"case_clause": case_clause,
}
def process_token_list(tokens):
for token in tokens:
if isinstance(token, list):
process_token_list(token)
continue
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
process_token_list(tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes
@@ -452,7 +370,7 @@ class ShellParser():
if part[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(part[1:-1])
self._parse_shell(command)
self.parse_shell(command)
if word[0] in ("cmd_name", "cmd_word"):
if word in words:
@@ -468,10 +386,10 @@ class ShellParser():
cmd = word[1]
if cmd.startswith("$"):
self.log.debug(self.unhandled_template % cmd)
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self._parse_shell(command)
self.parse_shell(command)
else:
self.allexecs.add(cmd)
break
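
In practice the two parsers above are fed python and shell task bodies and report which variables and functions they touch. A rough sketch against the newer side of this diff, run inside a bitbake environment; the printed result sets are indicative only:

    import logging
    from bb.codeparser import PythonParser, ShellParser

    log = logging.getLogger('BitBake.CodeParser')

    py = PythonParser('do_example', log)
    py.parse_python('d.getVar("DEPENDS")\nbb.build.exec_func("do_fetch", d)')
    print(py.references)   # variable names the fragment reads, e.g. {'DEPENDS'}
    print(py.execs)        # functions it executes, e.g. {'do_fetch'}

    sh = ShellParser('do_example', log)
    print(sh.parse_shell('mkdir -p build\nbbnote "configuring"'))
    # commands the shell fragment runs, e.g. {'mkdir', 'bbnote'}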

View File

@@ -6,8 +6,18 @@ Provide an interface to interact with the bitbake server through 'commands'
# Copyright (C) 2006-2007 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
The bitbake server takes 'commands' from its UI/commandline.
@@ -18,16 +28,8 @@ and must not trigger events, directly or indirectly.
Commands are queued in a CommandQueue
"""
from collections import OrderedDict, defaultdict
import io
import bb.event
import bb.cooker
import bb.remotedata
class DataStoreConnectionHandle(object):
def __init__(self, dsindex=0):
self.dsindex = dsindex
class CommandCompleted(bb.event.Event):
pass
@@ -41,8 +43,6 @@ class CommandFailed(CommandExit):
def __init__(self, message):
self.error = message
CommandExit.__init__(self, 1)
def __str__(self):
return "Command execution failed: %s" % self.error
class CommandError(Exception):
pass
@@ -55,47 +55,21 @@ class Command:
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = None
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline, ro_only = False):
def runCommand(self, commandline):
command = commandline.pop(0)
# Ensure cooker is ready for commands
if command != "updateConfig" and command != "setFeatures":
try:
self.cooker.init_configdata()
if not self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
except (Exception, SystemExit) as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
# We need to start returning real exceptions here. Until we do, we can't
# tell if an exception is an instance of bb.BBHandledException
return None, "bb.BBHandledException()\n" + traceback.format_exc()
return None, traceback.format_exc()
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
command_method = getattr(self.cmds_sync, command)
if ro_only:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except (Exception, SystemExit) as exc:
except Exception:
import traceback
if isinstance(exc, bb.BBHandledException):
# We need to start returning real exceptions here. Until we do, we can't
# tell if an exception is an instance of bb.BBHandledException
return None, "bb.BBHandledException()\n" + traceback.format_exc()
return None, traceback.format_exc()
else:
return result, None
@@ -104,22 +78,17 @@ class Command:
if command not in CommandsAsync.__dict__:
return None, "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.idleCallBackRegister(self.cooker.runCommands, self.cooker)
self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
return True, None
def runAsyncCommand(self):
try:
self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if needcache and self.cooker.state != bb.cooker.state.running:
if (needcache and self.cooker.state in
(bb.cooker.state.initial, bb.cooker.state.parsing)):
self.cooker.updateCache()
return True
else:
@@ -132,7 +101,7 @@ class Command:
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
@@ -146,18 +115,14 @@ class Command:
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
bb.event.fire(CommandFailed(msg), self.cooker.data)
if msg:
bb.event.fire(CommandFailed(msg), self.cooker.configuration.event_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.data)
bb.event.fire(CommandExit(code), self.cooker.configuration.event_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
bb.event.fire(CommandCompleted(), self.cooker.configuration.event_data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
def reset(self):
if self.remotedatastores:
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
class CommandsSync:
"""
@@ -170,408 +135,70 @@ class CommandsSync:
"""
Trigger cooker 'shutdown' mode
"""
command.cooker.shutdown(False)
command.cooker.shutdown()
def stateForceShutdown(self, command, params):
def stateStop(self, command, params):
"""
Stop the cooker
"""
command.cooker.shutdown(True)
command.cooker.stop()
def getAllKeysWithFlags(self, command, params):
def getCmdLineAction(self, command, params):
"""
Returns a dump of the global state. Call with
variable flags to be retrieved as params.
Get any command parsed from the commandline
"""
flaglist = params[0]
return command.cooker.getAllKeysWithFlags(flaglist)
getAllKeysWithFlags.readonly = True
cmd_action = command.cooker.commandlineAction
if cmd_action is None:
return None
elif 'msg' in cmd_action and cmd_action['msg']:
raise CommandError(cmd_action['msg'])
else:
return cmd_action['action']
def getVariable(self, command, params):
"""
Read the value of a variable from data
Read the value of a variable from configuration.data
"""
varname = params[0]
expand = True
if len(params) > 1:
expand = (params[1] == "True")
expand = params[1]
return command.cooker.data.getVar(varname, expand)
getVariable.readonly = True
return command.cooker.configuration.data.getVar(varname, expand)
def setVariable(self, command, params):
"""
Set the value of variable in data
Set the value of variable in configuration.data
"""
varname = params[0]
value = str(params[1])
command.cooker.extraconfigdata[varname] = value
command.cooker.data.setVar(varname, value)
value = params[1]
command.cooker.configuration.data.setVar(varname, value)
def getSetVariable(self, command, params):
def initCooker(self, command, params):
"""
Read the value of a variable from data and set it into the datastore
which effectively expands and locks the value.
Init the cooker to initial state with nothing parsed
"""
varname = params[0]
result = self.getVariable(command, params)
command.cooker.data.setVar(varname, result)
return result
command.cooker.initialize()
def setConfig(self, command, params):
def resetCooker(self, command, params):
"""
Set the value of variable in configuration
Reset the cooker to its initial state, thus forcing a reparse for
any async command that has the needcache property set to True
"""
varname = params[0]
value = str(params[1])
setattr(command.cooker.configuration, varname, value)
command.cooker.reset()
def enableDataTracking(self, command, params):
def getCpuCount(self, command, params):
"""
Enable history tracking for variables
Get the CPU count on the bitbake server
"""
command.cooker.enableDataTracking()
return bb.utils.cpu_count()
def disableDataTracking(self, command, params):
def setConfFilter(self, command, params):
"""
Disable history tracking for variables
Set the configuration file parsing filter
"""
command.cooker.disableDataTracking()
def setPrePostConfFiles(self, command, params):
prefiles = params[0].split()
postfiles = params[1].split()
command.cooker.configuration.prefile = prefiles
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False
def matchFile(self, command, params):
fMatch = params[0]
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.matchFile(fMatch, mc)
matchFile.needconfig = False
def getUIHandlerNum(self, command, params):
return bb.event.get_uihandler()
getUIHandlerNum.needconfig = False
getUIHandlerNum.readonly = True
def setEventMask(self, command, params):
handlerNum = params[0]
llevel = params[1]
debug_domains = params[2]
mask = params[3]
return bb.event.set_UIHmask(handlerNum, llevel, debug_domains, mask)
setEventMask.needconfig = False
setEventMask.readonly = True
def setFeatures(self, command, params):
"""
Set the cooker features to include the passed list of features
"""
features = params[0]
command.cooker.setFeatures(features)
setFeatures.needconfig = False
# although we change the internal state of the cooker, this is transparent since
# we always take and leave the cooker in state.initial
setFeatures.readonly = True
def updateConfig(self, command, params):
options = params[0]
environment = params[1]
cmdline = params[2]
command.cooker.updateConfigOpts(options, environment, cmdline)
updateConfig.needconfig = False
def parseConfiguration(self, command, params):
"""Instruct bitbake to parse its configuration
NOTE: it is only necessary to call this if you aren't calling any normal action
(otherwise parsing is taken care of automatically)
"""
command.cooker.parseConfiguration()
parseConfiguration.needconfig = False
def getLayerPriorities(self, command, params):
command.cooker.parseConfiguration()
ret = []
# regex objects cannot be marshalled by xmlrpc
for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
ret.append((collection, pattern, regex.pattern, pri))
return ret
getLayerPriorities.readonly = True
def getRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].pkg_pn.items())
getRecipes.readonly = True
def getRecipeDepends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].deps.items())
getRecipeDepends.readonly = True
def getRecipeVersions(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_pepvpr
getRecipeVersions.readonly = True
def getRecipeProvides(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].fn_provides
getRecipeProvides.readonly = True
def getRecipePackages(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].packages
getRecipePackages.readonly = True
def getRecipePackagesDynamic(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].packages_dynamic
getRecipePackagesDynamic.readonly = True
def getRProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].rproviders
getRProviders.readonly = True
def getRuntimeDepends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
rundeps = command.cooker.recipecaches[mc].rundeps
for key, value in rundeps.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeDepends.readonly = True
def getRuntimeRecommends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
runrecs = command.cooker.recipecaches[mc].runrecs
for key, value in runrecs.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeRecommends.readonly = True
def getRecipeInherits(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].inherits
getRecipeInherits.readonly = True
def getBbFilePriority(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].bbfile_priority
getBbFilePriority.readonly = True
def getDefaultPreference(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True
def getSkippedRecipes(self, command, params):
# Return list sorted by reverse priority order
import bb.cache
def sortkey(x):
vfn, _ = x
realfn, _, mc = bb.cache.virtualfn2realfn(vfn)
return (-command.cooker.collections[mc].calc_bbfile_priority(realfn)[0], vfn)
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(), key=sortkey))
return list(skipdict.items())
getSkippedRecipes.readonly = True
def getOverlayedRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.collections[mc].overlayed.items())
getOverlayedRecipes.readonly = True
def getFileAppends(self, command, params):
fn = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
return command.cooker.collections[mc].get_file_appends(fn)
getFileAppends.readonly = True
def getAllAppends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.collections[mc].bbappends
getAllAppends.readonly = True
def findProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.findProviders(mc)
findProviders.readonly = True
def findBestProvider(self, command, params):
(mc, pn) = bb.runqueue.split_mc(params[0])
return command.cooker.findBestProvider(pn, mc)
findBestProvider.readonly = True
def allProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(bb.providers.allProviders(command.cooker.recipecaches[mc]).items())
allProviders.readonly = True
def getRuntimeProviders(self, command, params):
rprovide = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
all_p = bb.providers.getRuntimeProviders(command.cooker.recipecaches[mc], rprovide)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rprovide,
command.cooker.data,
command.cooker.recipecaches[mc])[0][0]
else:
best = None
return all_p, best
getRuntimeProviders.readonly = True
def dataStoreConnectorCmd(self, command, params):
dsindex = params[0]
method = params[1]
args = params[2]
kwargs = params[3]
d = command.remotedatastores[dsindex]
ret = getattr(d, method)(*args, **kwargs)
if isinstance(ret, bb.data_smart.DataSmart):
idx = command.remotedatastores.store(ret)
return DataStoreConnectionHandle(idx)
return ret
def dataStoreConnectorVarHistCmd(self, command, params):
dsindex = params[0]
method = params[1]
args = params[2]
kwargs = params[3]
d = command.remotedatastores[dsindex].varhistory
return getattr(d, method)(*args, **kwargs)
def dataStoreConnectorVarHistCmdEmit(self, command, params):
dsindex = params[0]
var = params[1]
oval = params[2]
val = params[3]
d = command.remotedatastores[params[4]]
o = io.StringIO()
command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
return o.getvalue()
def dataStoreConnectorIncHistCmd(self, command, params):
dsindex = params[0]
method = params[1]
args = params[2]
kwargs = params[3]
d = command.remotedatastores[dsindex].inchistory
return getattr(d, method)(*args, **kwargs)
def dataStoreConnectorRelease(self, command, params):
dsindex = params[0]
if dsindex <= 0:
raise CommandError('dataStoreConnectorRelease: invalid index %d' % dsindex)
command.remotedatastores.release(dsindex)
def parseRecipeFile(self, command, params):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
"""
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
config_data = command.remotedatastores[params[3]]
else:
config_data = None
if appends:
if appendlist is not None:
appendfiles = appendlist
else:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
else:
appendfiles = []
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
# version of)
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
filterfunc = params[0]
bb.parse.parse_py.ConfHandler.confFilters.append(filterfunc)
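The synchronous commands above are normally reached from a UI process through the command server's runCommand() interface rather than by using CommandsSync directly. A minimal sketch of that flow for parseRecipeFile follows; the server proxy, recipe path and printed output are illustrative assumptions, not part of this diff.

# Hedged sketch: ask the server to parse one recipe, with bbappends applied.
# "server" is assumed to be an already-connected command server proxy; the
# recipe path is made up. runCommand() returns a (result, error) pair.
result, error = server.runCommand(["parseRecipeFile", "meta-example/recipes/example_1.0.bb", True, None])
if error:
    print("parseRecipeFile failed: %s" % error)
else:
    # On success the result wraps the index of a remote datastore holding the
    # parsed recipe environment; the dataStoreConnector* commands above
    # operate on such datastores by index.
    print("parsed recipe, remote datastore handle: %r" % (result,))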
class CommandsAsync:
"""
@@ -586,15 +213,8 @@ class CommandsAsync:
"""
bfile = params[0]
task = params[1]
if len(params) > 2:
internal = params[2]
else:
internal = False
if internal:
command.cooker.buildFileInternal(bfile, task, fireevents=False, quietlog=True)
else:
command.cooker.buildFile(bfile, task)
command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
@@ -644,6 +264,17 @@ class CommandsAsync:
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findCoreBaseFiles(self, command, params):
"""
Find certain files in COREBASE directory. i.e. Layers
"""
subdir = params[0]
filename = params[1]
command.cooker.findCoreBaseFiles(subdir, filename)
command.finishAsyncCommand()
findCoreBaseFiles.needcache = False
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
@@ -667,16 +298,6 @@ class CommandsAsync:
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
def testCookerCommandEvent(self, command, params):
"""
Dummy command used by OEQA selftest to test tinfoil without IO
"""
pattern = params[0]
command.cooker.testCookerCommandEvent(pattern)
command.finishAsyncCommand()
testCookerCommandEvent.needcache = False
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
@@ -725,50 +346,40 @@ class CommandsAsync:
command.finishAsyncCommand()
parseFiles.needcache = True
def reparseFiles(self, command, params):
"""
Reparse .bb files
"""
command.cooker.reparseFiles()
command.finishAsyncCommand()
reparseFiles.needcache = True
def compareRevisions(self, command, params):
"""
Parse the .bb files
"""
if bb.fetch.fetcher_compare_revisions(command.cooker.data):
if bb.fetch.fetcher_compare_revisions(command.cooker.configuration.data):
command.finishAsyncCommand(code=1)
else:
command.finishAsyncCommand()
compareRevisions.needcache = True
def parseConfigurationFiles(self, command, params):
"""
Parse the configuration files
"""
prefiles = params[0]
postfiles = params[1]
command.cooker.parseConfigurationFiles(prefiles, postfiles)
command.finishAsyncCommand()
parseConfigurationFiles.needcache = False
def triggerEvent(self, command, params):
"""
Trigger a certain event
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.data)
bb.event.fire(eval(event), command.cooker.configuration.data)
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):
"""
Reset the cooker to its initial state, thus forcing a reparse for
any async command that has the needcache property set to True
"""
command.cooker.reset()
command.finishAsyncCommand()
resetCooker.needcache = False
def clientComplete(self, command, params):
"""
Do the right thing when the controlling client exits
"""
command.cooker.clientComplete()
command.finishAsyncCommand()
clientComplete.needcache = False
def findSigInfo(self, command, params):
"""
Find signature info files via the signature generator
"""
(mc, pn) = bb.runqueue.split_mc(params[0])
taskname = params[1]
sigs = params[2]
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
command.finishAsyncCommand()
findSigInfo.needcache = False

bitbake/lib/bb/compat.py

@@ -0,0 +1,28 @@
"""Code pulled from future python versions, here for compatibility"""
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: other < self),
('__le__', lambda self, other: not other < self),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: other <= self),
('__lt__', lambda self, other: not other <= self),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: other > self),
('__ge__', lambda self, other: not other > self),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: other >= self),
('__gt__', lambda self, other: not other >= self),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=')
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
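A brief usage sketch (not part of the diff): with __eq__ and a single rich comparison defined, the decorator above supplies the remaining ordering methods.

# Illustrative only: __gt__, __le__ and __ge__ are derived from __lt__.
@total_ordering
class Version(object):
    def __init__(self, num):
        self.num = num
    def __eq__(self, other):
        return self.num == other.num
    def __lt__(self, other):
        return self.num < other.num

assert Version(1) < Version(2)
assert Version(2) >= Version(1)   # synthesized by total_ordering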


@@ -1,196 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Helper library to implement streaming compression and decompression using an
# external process
#
# This library should be used directly by end users; a wrapper library for the
# specific compression tool should be created
import builtins
import io
import os
import subprocess
def open_wrap(
cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
):
"""
Open a compressed file in binary or text mode.
Users should not call this directly. A specific compression library can use
this helper to provide it's own "open" command
The filename argument can be an actual filename (a str or bytes object), or
an existing file object to read from or write to.
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
"rb".
For binary mode, this function is equivalent to the cls constructor:
cls(filename, mode). In this case, the encoding, errors and newline
arguments must not be provided.
For text mode, a cls object is created, and wrapped in an
io.TextIOWrapper instance with the specified encoding, error handling
behavior, and line ending(s).
"""
if "t" in mode:
if "b" in mode:
raise ValueError("Invalid mode: %r" % (mode,))
else:
if encoding is not None:
raise ValueError("Argument 'encoding' not supported in binary mode")
if errors is not None:
raise ValueError("Argument 'errors' not supported in binary mode")
if newline is not None:
raise ValueError("Argument 'newline' not supported in binary mode")
file_mode = mode.replace("t", "")
if isinstance(filename, (str, bytes, os.PathLike, int)):
binary_file = cls(filename, file_mode, **kwargs)
elif hasattr(filename, "read") or hasattr(filename, "write"):
binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
else:
raise TypeError("filename must be a str or bytes object, or a file")
if "t" in mode:
return io.TextIOWrapper(
binary_file, encoding, errors, newline, write_through=True
)
else:
return binary_file
class CompressionError(OSError):
pass
class PipeFile(io.RawIOBase):
"""
Class that implements generically piping to/from a compression program
Derived classes should add the function get_compress() and get_decompress()
that return the required commands. Input will be piped into stdin and the
(de)compressed output should be written to stdout, e.g.:
class FooFile(PipeCompressionFile):
def get_decompress(self):
return ["fooc", "--decompress", "--stdout"]
def get_compress(self):
return ["fooc", "--compress", "--stdout"]
"""
READ = 0
WRITE = 1
def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
if "t" in mode or "U" in mode:
raise ValueError("Invalid mode: {!r}".format(mode))
if not "b" in mode:
mode += "b"
if mode.startswith("r"):
self.mode = self.READ
elif mode.startswith("w"):
self.mode = self.WRITE
else:
raise ValueError("Invalid mode %r" % mode)
if fileobj is not None:
self.fileobj = fileobj
else:
self.fileobj = builtins.open(filename, mode or "rb")
if self.mode == self.READ:
self.p = subprocess.Popen(
self.get_decompress(),
stdin=self.fileobj,
stdout=subprocess.PIPE,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdout
else:
self.p = subprocess.Popen(
self.get_compress(),
stdin=subprocess.PIPE,
stdout=self.fileobj,
stderr=stderr,
close_fds=True,
)
self.pipe = self.p.stdin
self.__closed = False
def _check_process(self):
if self.p is None:
return
returncode = self.p.wait()
if returncode:
raise CompressionError("Process died with %d" % returncode)
self.p = None
def close(self):
if self.closed:
return
self.pipe.close()
if self.p is not None:
self._check_process()
self.fileobj.close()
self.__closed = True
@property
def closed(self):
return self.__closed
def fileno(self):
return self.pipe.fileno()
def flush(self):
self.pipe.flush()
def isatty(self):
return self.pipe.isatty()
def readable(self):
return self.mode == self.READ
def writable(self):
return self.mode == self.WRITE
def readinto(self, b):
if self.mode != self.READ:
import errno
raise OSError(
errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
)
size = self.pipe.readinto(b)
if size == 0:
self._check_process()
return size
def write(self, data):
if self.mode != self.WRITE:
import errno
raise OSError(
errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
)
data = self.pipe.write(data)
if not data:
self._check_process()
return data


@@ -1,19 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)
class LZ4File(bb.compress._pipecompress.PipeFile):
def get_compress(self):
return ["lz4c", "-z", "-c"]
def get_decompress(self):
return ["lz4c", "-d", "-c"]


@@ -1,30 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import bb.compress._pipecompress
import shutil
def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)
class ZstdFile(bb.compress._pipecompress.PipeFile):
def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
self.num_threads = num_threads
self.compresslevel = compresslevel
super().__init__(*args, **kwargs)
def _get_zstd(self):
if self.num_threads == 1 or not shutil.which("pzstd"):
return ["zstd"]
return ["pzstd", "-p", "%d" % self.num_threads]
def get_compress(self):
return self._get_zstd() + ["-c", "-%d" % self.compresslevel]
def get_decompress(self):
return self._get_zstd() + ["-d", "-c"]

File diff suppressed because it is too large


@@ -1,468 +0,0 @@
#
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005 Holger Hans Peter Freyther
# Copyright (C) 2005 ROAD GmbH
# Copyright (C) 2006 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#
import logging
import os
import re
import sys
import hashlib
from functools import wraps
import bb
from bb import data
import bb.parse
logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=None):
self.options, targets = self.parseCommandLine(argv or sys.argv)
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
for key, val in self.options.__dict__.items():
setattr(self, key, val)
def parseCommandLine(self, argv=sys.argv):
raise Exception("Caller must implement commandline option parsing")
def parseEnvironment(self):
return os.environ.copy()
def updateFromServer(self, server):
if not self.options.cmd:
defaulttask, error = server.runCommand(["getVariable", "BB_DEFAULT_TASK"])
if error:
raise Exception("Unable to get the value of BB_DEFAULT_TASK from the server: %s" % error)
self.options.cmd = defaulttask or "build"
_, error = server.runCommand(["setConfig", "cmd", self.options.cmd])
if error:
raise Exception("Unable to set configuration option 'cmd' on the server: %s" % error)
if not self.options.pkgs_to_build:
bbpkgs, error = server.runCommand(["getVariable", "BBTARGETS"])
if error:
raise Exception("Unable to get the value of BBTARGETS from the server: %s" % error)
if bbpkgs:
self.options.pkgs_to_build.extend(bbpkgs.split())
def updateToServer(self, server, environment):
options = {}
for o in ["halt", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
"nosetscene", "setsceneonly", "skipsetscene",
"runall", "runonly", "writeeventlog"]:
options[o] = getattr(self.options, o)
options['build_verbose_shell'] = self.options.verbose
options['build_verbose_stdout'] = self.options.verbose
options['default_loglevel'] = bb.msg.loggerDefaultLogLevel
options['debug_domains'] = bb.msg.loggerDefaultDomains
ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
if error:
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
def parseActions(self):
# Parse any commandline into actions
action = {'action':None, 'msg':None}
if self.options.show_environment:
if 'world' in self.options.pkgs_to_build:
action['msg'] = "'world' is not a valid target for --environment."
elif 'universe' in self.options.pkgs_to_build:
action['msg'] = "'universe' is not a valid target for --environment."
elif len(self.options.pkgs_to_build) > 1:
action['msg'] = "Only one target can be used with the --environment option."
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
action['msg'] = "No target should be used with the --environment and --buildfile options."
elif self.options.pkgs_to_build:
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
else:
action['action'] = ["showEnvironment", self.options.buildfile]
elif self.options.buildfile is not None:
action['action'] = ["buildFile", self.options.buildfile, self.options.cmd]
elif self.options.revisions_changed:
action['action'] = ["compareRevisions"]
elif self.options.show_versions:
action['action'] = ["showVersions"]
elif self.options.parse_only:
action['action'] = ["parseFiles"]
elif self.options.dot_graph:
if self.options.pkgs_to_build:
action['action'] = ["generateDotGraph", self.options.pkgs_to_build, self.options.cmd]
else:
action['msg'] = "Please specify a package name for dependency graph generation."
else:
if self.options.pkgs_to_build:
action['action'] = ["buildTargets", self.options.pkgs_to_build, self.options.cmd]
else:
#action['msg'] = "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information."
action = None
self.options.initialaction = action
return action
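For orientation, the mapping implemented above can be summarised with a few illustrative invocations (assumed command lines, and assuming the default task resolves to "build"); the resulting action list is what a UI later passes to server.runCommand().

# Illustrative mappings derived from the logic above (not exhaustive):
#   bitbake -e busybox          -> {'action': ['showEnvironmentTarget', ['busybox']], 'msg': None}
#   bitbake -b meta/foo_1.0.bb -c compile
#                               -> {'action': ['buildFile', 'meta/foo_1.0.bb', 'compile'], 'msg': None}
#   bitbake core-image-minimal  -> {'action': ['buildTargets', ['core-image-minimal'], 'build'], 'msg': None}
#   bitbake -e world            -> {'action': None, 'msg': "'world' is not a valid target for --environment."}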
class CookerConfiguration(object):
"""
Manages build options and configurations for one run
"""
def __init__(self):
self.debug_domains = bb.msg.loggerDefaultDomains
self.default_loglevel = bb.msg.loggerDefaultLogLevel
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.cmd = None
self.halt = True
self.force = False
self.profile = False
self.nosetscene = False
self.setsceneonly = False
self.skipsetscene = False
self.invalidate_stamp = False
self.dump_signatures = []
self.build_verbose_shell = False
self.build_verbose_stdout = False
self.dry_run = False
self.tracking = False
self.writeeventlog = False
self.limited_deps = False
self.runall = []
self.runonly = []
self.env = {}
def __getstate__(self):
state = {}
for key in self.__dict__.keys():
state[key] = getattr(self, key)
return state
def __setstate__(self,state):
for k in state:
setattr(self, k, state[k])
def catch_parse_error(func):
"""Exception handling bits for our parsing"""
@wraps(func)
def wrapped(fn, *args):
try:
return func(fn, *args)
except IOError as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
exc_class, exc, tb = sys.exc_info()
for tb in iter(lambda: tb.tb_next, None):
# Skip frames in bitbake itself, we only want the metadata
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
raise bb.BBHandledException()
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
raise bb.BBHandledException()
return wrapped
@catch_parse_error
def parse_config_file(fn, data, include=True):
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
return data
def findConfigFile(configfile, data):
search = []
bbpath = data.getVar("BBPATH")
if bbpath:
for i in bbpath.split(":"):
search.append(os.path.join(i, "conf", configfile))
path = os.getcwd()
while path != "/":
search.append(os.path.join(path, "conf", configfile))
path, _ = os.path.split(path)
for i in search:
if os.path.exists(i):
return i
return None
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, bitbake would fall back to cwd.
#
def findTopdir():
d = bb.data.init()
bbpath = None
if 'BBPATH' in os.environ:
bbpath = os.environ['BBPATH']
d.setVar('BBPATH', bbpath)
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
return os.path.abspath(os.getcwd())
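A worked example with assumed paths (not taken from the diff) of the search order these two helpers produce:

# Assume BBPATH="/build:/build/meta" and a current directory of
# "/build/work/tmp". findConfigFile("bblayers.conf", d) then probes, in order:
#   /build/conf/bblayers.conf           (BBPATH entry)
#   /build/meta/conf/bblayers.conf      (BBPATH entry)
#   /build/work/tmp/conf/bblayers.conf  (cwd, then each parent up to "/")
#   /build/work/conf/bblayers.conf
#   /build/conf/bblayers.conf
# findTopdir() returns the directory containing the conf/ dir of the first
# candidate that exists, falling back to the current working directory.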
class CookerDataBuilder(object):
def __init__(self, cookercfg, worker = False):
self.prefiles = cookercfg.prefile
self.postfiles = cookercfg.postfile
self.tracking = cookercfg.tracking
bb.utils.set_context(bb.utils.clean_context())
bb.event.set_class_handlers(bb.event.clean_class_handlers())
self.basedata = bb.data.init()
if self.tracking:
self.basedata.enableTracking()
# Keep a datastore of the initial environment variables and their
# values from when BitBake was launched to enable child processes
# to use environment variables which have been cleaned from the
# BitBake processes env
self.savedenv = bb.data.init()
for k in cookercfg.env:
self.savedenv.setVar(k, cookercfg.env[k])
if k in bb.data_smart.bitbake_renamed_vars:
bb.error('Shell environment variable %s has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
bb.fatal("Exiting to allow enviroment variables to be corrected")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
self.basedata.setVar("__bbclasstype", "global")
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
self.data = self.basedata
self.mcdata = {}
def parseBaseConfiguration(self, worker=False):
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
reparse_cnt = 0
while self.data.getVar("BB_INVALIDCONF", False) is True:
if reparse_cnt > 20:
logger.error("Configuration has been re-parsed over 20 times, "
"breaking out of the loop...")
raise Exception("Too deep config re-parse loop. Check locations where "
"BB_INVALIDCONF is being set (ConfigParsed event handlers)")
self.data.setVar("BB_INVALIDCONF", False)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
reparse_cnt += 1
bb.event.fire(bb.event.ConfigParsed(), self.data)
bb.parse.init_parser(self.data)
data_hash.update(self.data.get_hash().encode('utf-8'))
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
if config[0].isdigit():
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
data_hash.update(mcdata.get_hash().encode('utf-8'))
if multiconfig:
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
self.data_hash = data_hash.hexdigest()
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException()
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException()
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
# Handle obsolete variable names
d = self.data
renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
renamedvars.update(bb.data_smart.bitbake_renamed_vars)
issues = False
for v in renamedvars:
if d.getVar(v) != None or d.hasOverrides(v):
issues = True
loginfo = {}
history = d.varhistory.get_variable_refs(v)
for h in history:
for line in history[h]:
loginfo = {'file' : h, 'line' : line}
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if not history:
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
if issues:
raise bb.BBHandledException()
# Create a copy so we can reset at a later date when UIs disconnect
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def reset(self):
# We may not have run parseBaseConfiguration() yet
if not hasattr(self, 'origdata'):
return
self.data = bb.data.createCopy(self.origdata)
self.mcdata[''] = self.data
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
def parseConfigurationFiles(self, prefiles, postfiles, mc = "default"):
data = bb.data.createCopy(self.basedata)
data.setVar("BB_CURRENT_MC", mc)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
data = parse_config_file(f, data)
layerconf = self._findLayerConf(data)
if layerconf:
parselog.debug(2, "Found bblayers.conf (%s)", layerconf)
# By definition bblayers.conf is in conf/ of TOPDIR.
# We may have been called with cwd somewhere else so reset TOPDIR
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
layers = (data.getVar('BBLAYERS') or "").split()
broken_layers = []
if not layers:
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
# Check whether present layer directories exist
for layer in layers:
if not os.path.isdir(layer):
broken_layers.append(layer)
if broken_layers:
parselog.critical("The following layer directories do not exist:")
for layer in broken_layers:
parselog.critical(" %s", layer)
parselog.critical("Please check BBLAYERS in %s" % (layerconf))
raise bb.BBHandledException()
for layer in layers:
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
layer = layer.rstrip('/')
data.setVar('LAYERDIR', layer)
data.setVar('LAYERDIR_RE', re.escape(layer))
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
invalid = []
for entry in bbfiles_dynamic:
parts = entry.split(":", 1)
if len(parts) != 2:
invalid.append(entry)
continue
l, f = parts
invert = l[0] == "!"
if invert:
l = l[1:]
if (l in collections and not invert) or (l not in collections and invert):
data.appendVar("BBFILES", " " + f)
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
collections_tmp = collections[:]
for c in collections:
collections_tmp.remove(c)
if c in collections_tmp:
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not layerseries:
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
elif not compat and not data.getVar("BB_WORKERCONTEXT"):
bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
" the expected location.\nMaybe you accidentally"
" invoked bitbake from the wrong directory?")
raise SystemExit(msg)
if not data.getVar("TOPDIR"):
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
# Parse files for loading *after* bitbake.conf and any includes
for p in postfiles:
data = parse_config_file(p, data)
# Handle any INHERITs and inherit the base class
bbclasses = ["base"] + (data.getVar('INHERIT') or "").split()
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
raise bb.BBHandledException()
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
return data
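To make the BBFILES_DYNAMIC handling inside parseConfigurationFiles() above concrete, here is a small self-contained sketch (collection names and patterns are made up) of which entries would be appended to BBFILES:

# Mirrors the rule above: "<collection>:<pattern>" appends the pattern when
# the collection is enabled, "!<collection>:<pattern>" when it is not.
collections = ["core", "openembedded-layer"]
entries = [
    "openembedded-layer:dynamic-layers/oe/*.bbappend",  # appended (collection present)
    "!meta-python:dynamic-layers/python/*.bbappend",    # appended (negated, collection absent)
    "meta-python:dynamic-layers/python/*.bbappend",     # skipped  (collection absent)
    "broken-entry-without-separator",                   # invalid  (no ":")
]
appended, invalid = [], []
for entry in entries:
    parts = entry.split(":", 1)
    if len(parts) != 2:
        invalid.append(entry)
        continue
    l, f = parts
    invert = l.startswith("!")
    if invert:
        l = l[1:]
    if (l in collections) != invert:
        appended.append(f)
print(appended)  # ['dynamic-layers/oe/*.bbappend', 'dynamic-layers/python/*.bbappend']
print(invalid)   # ['broken-entry-without-separator']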


@@ -1,22 +1,45 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
Python Daemonizing helper
Python Deamonizing helper
Originally based on code Copyright (C) 2005 Chad J. Schroeder but now heavily modified
to allow a function to be daemonized and return for bitbake use by Richard Purdie
Configurable daemon behaviors:
1.) The current working directory set to the "/" directory.
2.) The current file creation mode mask set to 0.
3.) Close all open files (1024).
4.) Redirect standard I/O streams to "/dev/null".
A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
2) Unix Programming Frequently Asked Questions:
http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""
import os
import sys
import io
import traceback
__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"
import bb
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None
# Default maximum for the number of available file descriptors.
MAXFD = 1024
# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
REDIRECT_TO = os.devnull
else:
REDIRECT_TO = "/dev/null"
def createDaemon(function, logfile):
"""
@@ -24,10 +47,6 @@ def createDaemon(function, logfile):
background as a daemon, returning control to the caller.
"""
# Ensure stdout/stderror are flushed before forking to avoid duplicate output
sys.stdout.flush()
sys.stderr.flush()
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
@@ -43,6 +62,36 @@ def createDaemon(function, logfile):
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
# Is ignoring SIGHUP necessary?
#
# It's often suggested that the SIGHUP signal should be ignored before
# the second fork to avoid premature termination of the process. The
# reason is that when the first child terminates, all processes, e.g.
# the second child, in the orphaned group will be sent a SIGHUP.
#
# "However, as part of the session management system, there are exactly
# two cases where SIGHUP is sent on the death of a process:
#
# 1) When the process that dies is the session leader of a session that
# is attached to a terminal device, SIGHUP is sent to all processes
# in the foreground process group of that terminal device.
# 2) When the death of a process causes a process group to become
# orphaned, and one or more processes in the orphaned group are
# stopped, then SIGHUP and SIGCONT are sent to all members of the
# orphaned group." [2]
#
# The first case can be ignored since the child is guaranteed not to have
# a controlling terminal. The second case isn't so easy to dismiss.
# The process group is orphaned when the first child terminates and
# POSIX.1 requires that every STOPPED process in an orphaned process
# group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
# second child is not STOPPED though, we can safely forego ignoring the
# SIGHUP signal. In any case, there are no ill-effects if it is ignored.
#
# import signal # Set handlers for asynchronous events.
# signal.signal(signal.SIGHUP, signal.SIG_IGN)
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
@@ -56,46 +105,86 @@ def createDaemon(function, logfile):
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid != 0):
if (pid == 0): # The second child.
# We probably don't want the file mode creation mask inherited from
# the parent, so we give the child complete control over permissions.
if UMASK is not None:
os.umask(UMASK)
else:
# Parent (the first child) of the second child.
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors, but doesn't flush any buffered output.
# Using exit() may cause all any temporary files to be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
os._exit(0)
else:
os.waitpid(pid, 0)
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors. Using exit() may cause all stdio
# streams to be flushed twice and any temporary files may be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
return
# The second child.
# Close all open file descriptors. This prevents the child from keeping
# open any file descriptors inherited from the parent. There is a variety
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exists, use
# the default value (configurable).
#
# try:
# maxfd = os.sysconf("SC_OPEN_MAX")
# except (AttributeError, ValueError):
# maxfd = MAXFD
#
# OR
#
# if (os.sysconf_names.has_key("SC_OPEN_MAX")):
# maxfd = os.sysconf("SC_OPEN_MAX")
# else:
# maxfd = MAXFD
#
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is not limit on the
# resource, use the default value.
#
import resource # Resource usage information.
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = MAXFD
# Iterate through and close all file descriptors.
# for fd in range(0, maxfd):
# try:
# os.close(fd)
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
# pass
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
# Redirect the standard I/O file descriptors to the specified file. Since
# the daemon has no controlling terminal, most daemons redirect stdin,
# stdout, and stderr to /dev/null. This is done to prevent side-effects
# from reads and writes to the standard I/O file descriptors.
with open(logfile, 'a+') as so:
try:
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = so
# This call to open is guaranteed to return the lowest file descriptor,
# which will be 0 (stdin), since it was closed above.
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
# Have stdout and stderr be the same so log output matches chronologically
# and there aren't two separate buffers
sys.stderr = sys.stdout
# Duplicate standard input to standard output and standard error.
# os.dup2(0, 1) # standard output (1)
# os.dup2(0, 2) # standard error (2)
try:
function()
except Exception as e:
traceback.print_exc()
finally:
bb.event.print_ui_queue()
# os._exit() doesn't flush open files like os.exit() does. Manually flush
# stdout and stderr so that any logging output will be seen, particularly
# exception tracebacks.
sys.stdout.flush()
sys.stderr.flush()
os._exit(0)
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
# Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
function()
os._exit(0)


@@ -1,10 +1,12 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
The expandKeys and update_data are the most expensive
The expandData and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
@@ -13,19 +15,29 @@ Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
This is a treade-off between speed and memory again but
the speed is more critical here.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2005 Holger Hans Peter Freyther
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import sys, os, re
import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
@@ -47,7 +59,7 @@ def init():
def init_db(parent = None):
"""Return a new object representing the Bitbake data,
optionally based on an existing object"""
if parent is not None:
if parent:
return parent.createCopy()
else:
return _dict_type()
@@ -66,6 +78,55 @@ def initVar(var, d):
"""Non-destructive var init for data structure"""
d.initVar(var)
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
def getVar(var, d, exp = 0):
"""Gets the value of a variable"""
return d.getVar(var, exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
d.delVar(var)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
Note:
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
"""
d.setVarFlags(var, flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
return d.keys()
@@ -79,11 +140,11 @@ def expand(s, d, varname = None):
return d.expand(s, varname)
def expandKeys(alterdata, readdata = None):
if readdata is None:
if readdata == None:
readdata = alterdata
todolist = {}
for key in alterdata:
for key in keys(alterdata):
if not '${' in key:
continue
@@ -94,14 +155,10 @@ def expandKeys(alterdata, readdata = None):
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for key in sorted(todolist):
for key in todolist:
ekey = todolist[key]
newval = alterdata.getVar(ekey, False)
if newval is not None:
val = alterdata.getVar(key, False)
if val is not None:
bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
alterdata.renameVar(key, ekey)
renameVar(key, ekey, alterdata)
def inheritFromOS(d, savedenv, permitted):
"""Inherit variables from the initial environment."""
@@ -109,66 +166,53 @@ def inheritFromOS(d, savedenv, permitted):
for s in savedenv.keys():
if s in permitted:
try:
d.setVar(s, savedenv.getVar(s), op = 'from env')
setVar(s, getVar(s, savedenv, True), d)
if s in exportlist:
d.setVarFlag(s, "export", True, op = 'auto env export')
setVarFlag(s, "export", True, d)
except TypeError:
pass
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
func = d.getVarFlag(var, "func", False)
if d.getVarFlag(var, 'python', False) and func:
return False
if getVarFlag(var, "python", d):
return 0
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
export = getVarFlag(var, "export", d)
unexport = getVarFlag(var, "unexport", d)
func = getVarFlag(var, "func", d)
if not all and not export and not unexport and not func:
return False
return 0
try:
if all:
oval = d.getVar(var, False)
val = d.getVar(var)
except (KeyboardInterrupt):
oval = getVar(var, d, 0)
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
return False
return 0
if all:
d.varhistory.emit(var, oval, val, o, d)
commentVal = re.sub('\n', '\n#', str(oval))
o.write('# %s=%s\n' % (var, commentVal))
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return False
return 0
varExpanded = d.expand(var)
varExpanded = expand(var, d)
if unexport:
o.write('unset %s\n' % varExpanded)
return False
return 0
if val is None:
return False
if not val:
return 0
val = str(val)
if varExpanded.startswith("BASH_FUNC_"):
varExpanded = varExpanded[10:-2]
val = val[3:] # Strip off "() "
o.write("%s() %s\n" % (varExpanded, val))
o.write("export -f %s\n" % (varExpanded))
return True
if func:
# Write a comment indicating where the shell function came from (line number and filename) to make it easier
# for the user to diagnose task failures. This comment is also used by build.py to determine the metadata
# location of shell functions.
o.write("# line: {0}, file: {1}\n".format(
d.getVarFlag(var, "lineno", False),
d.getVarFlag(var, "filename", False)))
# NOTE: should probably check for unbalanced {} within the var
val = val.rstrip('\n')
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
return 1
@@ -177,35 +221,32 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
# if we're going to output this within doublequotes,
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val)
alter = re.sub('"', '\\"', val.strip())
alter = re.sub('\n', ' \\\n', alter)
alter = re.sub('\\$', '\\\\$', alter)
o.write('%s="%s"\n' % (varExpanded, alter))
return False
return 0
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func", False))
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in sorted(keys):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
def exported_vars(d):
k = list(exported_keys(d))
for key in k:
for key in exported_keys(d):
try:
value = d.getVar(key)
except Exception as err:
bb.warn("%s: Unable to export ${%s}: %s" % (d.getVar("FILE"), key, err))
continue
value = d.getVar(key, True)
except Exception:
pass
if value is not None:
yield key, str(value)
@@ -213,236 +254,90 @@ def exported_vars(d):
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func", False))
for key in sorted(keys):
emit_var(key, o, d, False)
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func))
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in sorted(deps):
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
_functionfmt = """
def {function}(d):
{body}"""
def emit_func_python(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
def write_func(func, o, call = False):
body = d.getVar(func, False)
if not body.startswith("def"):
body = _functionfmt.format(function=func, body=body)
o.write(body.strip() + "\n\n")
if call:
o.write(func + "(d)" + "\n\n")
write_func(func, o, True)
pp = bb.codeparser.PythonParser(func, logger)
pp.parse_python(d.getVar(func, False))
newdeps = pp.execs
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and d.getVarFlag(dep, "python", False):
write_func(dep, o)
pp = bb.codeparser.PythonParser(dep, logger)
pp.parse_python(d.getVar(dep, False))
newdeps |= pp.execs
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
if d.getVarFlag(dep, "func"):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps -= seen
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
d.finalize()
def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
def build_dependencies(key, keys, shelldeps, vardepvals, d):
deps = set()
vardeps = d.getVarFlag(key, "vardeps", True)
try:
if key[-1] == ']':
vf = key[:-1].split('[')
if vf[1] == "vardepvalueexclude":
return deps, ""
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
exclusions = varflags.get("vardepsexclude", "").split()
def handle_contains(value, contains, exclusions, d):
newvalue = []
if value:
newvalue.append(str(value))
for k in sorted(contains):
if k in exclusions or k in ignored_vars:
continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
newvalue.append("\n%s{%s} = Set" % (k, item))
return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
r2 = d.expandWithRefs(r, None)
value += "\n_remove of %s" % r
deps |= r2.references
deps = deps | (keys & r2.execs)
return value
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
value = d.getVarFlag(key, "_content", False)
value = d.getVar(key, False)
if key in vardepvals:
value = d.getVarFlag(key, "vardepvalue", True)
elif d.getVarFlag(key, "func"):
if d.getVarFlag(key, "python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
else:
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, exclusions, d)
if hasattr(parsedvar, "removes"):
value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
parser.log.flush()
if "prefuncs" in varflags:
deps = deps | set(varflags["prefuncs"].split())
if "postfuncs" in varflags:
deps = deps | set(varflags["postfuncs"].split())
if "exports" in varflags:
deps = deps | set(varflags["exports"].split())
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
else:
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
parser = d.expandWithRefs(value, key)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, exclusions, d)
if hasattr(parser, "removes"):
value = handle_remove(value, deps, parser.removes, d)
if "vardepvalueexclude" in varflags:
exclude = varflags.get("vardepvalueexclude")
for excl in exclude.split('|'):
if excl:
value = value.replace(excl, '')
# Add varflags, assuming an exclusion list is set
if varflagsexcl:
varfdeps = []
for f in varflags:
if f not in varflagsexcl:
varfdeps.append('%s[%s]' % (key, f))
if varfdeps:
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
deps -= set(exclusions)
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
except:
bb.note("Error expanding variable %s" % key)
raise
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
def generate_dependencies(d, ignored_vars):
def generate_dependencies(d):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
keys = set(key for key in d.keys() if not key.startswith("__"))
shelldeps = set(key for key in keys if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
vardepvals = set(key for key in keys if d.getVarFlag(key, "vardepvalue"))
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
tasklist = d.getVar('__BBTASKS') or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, ignored_vars, d)
deps[task], values[task] = build_dependencies(task, keys, shelldeps, vardepvals, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps - ignored_vars
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, ignored_vars, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
taskdeps = {}
basehash = {}
for task in tasklist:
data = lookupcache[task]
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = []
else:
data = [data]
gendeps[task] -= ignored_vars
newdeps = gendeps[task]
seen = set()
while newdeps:
nextdeps = newdeps
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in ignored_vars:
continue
gendeps[dep] -= ignored_vars
newdeps |= gendeps[dep]
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, vardepvals, d)
newdeps |= deps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
data.append(dep)
var = lookupcache[dep]
if var is not None:
data.append(str(var))
k = fn + ":" + task
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
return taskdeps, basehash
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
needle = '/%s.bbclass' % klass
for v in val:
if v.endswith(needle):
return True
val = getVar('__inherit_cache', d) or []
if os.path.join('classes', '%s.bbclass' % klass) in val:
return True
return False

File diff suppressed because it is too large


@@ -1,3 +1,5 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Event' implementation
@@ -7,25 +9,34 @@ BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# SPDX-License-Identifier: GPL-2.0-only
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import ast
import atexit
import collections
import os, sys
import warnings
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import pickle
import sys
import threading
import atexit
import traceback
import bb.exceptions
import bb.utils
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
worker_pid = 0
worker_fire = None
worker_pipe = None
logger = logging.getLogger('BitBake.Event')
@@ -35,63 +46,26 @@ class Event(object):
def __init__(self):
self.pid = worker_pid
class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
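A hedged sketch of consuming this event from Python code (handler name and registration are illustrative; the mask keyword is the one accepted by the newer register() signature further down in this diff):

import bb
import bb.event

def log_heartbeat(e):
    # e.time is the timestamp carried by HeartbeatEvent above
    bb.note("heartbeat at %s" % e.time)

bb.event.register("log_heartbeat", log_heartbeat, mask=["bb.event.HeartbeatEvent"])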
NotHandled = 0
Handled = 1
Registered = 10
AlreadyRegistered = 14
def get_class_handlers():
return _handlers
def set_class_handlers(h):
global _handlers
_handlers = h
def clean_class_handlers():
return collections.OrderedDict()
# Internal
_handlers = clean_class_handlers()
_handlers = {}
_ui_handlers = {}
_ui_logfilters = {}
_ui_handler_seq = 0
_event_handler_map = {}
_catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = False
# For compatibility
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
except bb.parse.SkipPackage:
raise
except Exception:
etype, value, tb = sys.exc_info()
@@ -104,98 +78,46 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
return
eid = str(event.__class__)[8:-2]
evt_hmap = _event_handler_map.get(eid, {})
for name, handler in list(_handlers.items()):
if name in _catchall_handlers or name in evt_hmap:
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
if d is not None and not name in (d.getVar("__BBHANDLERS_MC") or set()):
continue
for name, handler in _handlers.iteritems():
try:
execute_handler(name, handler, event, d)
except Exception:
continue
ui_queue = []
@atexit.register
def print_ui_queue():
global ui_queue
"""If we're exiting before a UI has been spawned, display any queued
LogRecords to the console."""
logger = logging.getLogger("BitBake")
if not _uiready:
if not _ui_handlers:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
try:
sys.stdout.flush()
except:
pass
try:
sys.stderr.flush()
except:
pass
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
stdout.setFormatter(formatter)
stderr.setFormatter(formatter)
# First check to see if we have any proper messages
msgprint = False
msgerrs = False
# Should we print to stderr?
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord) and event.levelno >= logging.WARNING:
msgerrs = True
break
if msgerrs:
logger.addHandler(stderr)
else:
logger.addHandler(stdout)
for event in ui_queue[:]:
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
for event in ui_queue:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
logger.handle(event)
msgprint = True
# Nope, so just print all of the messages we have (including debug messages)
if not msgprint:
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord):
logger.handle(event)
if msgerrs:
logger.removeHandler(stderr)
else:
logger.removeHandler(stdout)
ui_queue = []
logger.handle(event)
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
if not _ui_handlers:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
if not _ui_logfilters[h].filter(event):
continue
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
@@ -208,9 +130,6 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -220,165 +139,67 @@ def fire(event, d):
# don't have a datastore so the datastore context isn't a problem.
fire_class_handlers(event, d)
if worker_fire:
if worker_pid != 0:
worker_fire(event, d)
else:
# If messages have been queued up, clear the queue
global _uiready, ui_queue
if _uiready and ui_queue:
for queue_event in ui_queue:
fire_ui_handlers(queue_event, d)
ui_queue = []
fire_ui_handlers(event, d)
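A minimal usage sketch of fire(), assuming d is a datastore already in scope:

import bb.event

# Runs the class handlers, then forwards the event either to the worker pipe
# or to the registered UI handlers, as in the function above.
bb.event.fire(bb.event.ConfigParsed(), d)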
def worker_fire(event, d):
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
def fire_from_worker(event, d):
if not event.startswith("<event>") or not event.endswith("</event>"):
print("Error, not an event %s" % event)
return
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)
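A self-contained sketch of the <event>...</event> framing used by the two functions above (bytes here for Python 3; the older code on one side of this diff concatenates str):

import pickle

def frame(event):
    # Wrap the pickled event in the same markers worker_fire() writes.
    return b"<event>" + pickle.dumps(event) + b"</event>"

def unframe(data):
    # Mirror fire_from_worker(): check the markers, strip the 7-byte prefix
    # and 8-byte suffix, then unpickle.
    assert data.startswith(b"<event>") and data.endswith(b"</event>")
    return pickle.loads(data[7:-8])

print(unframe(frame({"msg": "hello"})))   # {'msg': 'hello'}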
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None, data=None):
def register(name, handler):
"""Register an Event handler"""
if data is not None and data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
# already registered
if name in _handlers:
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return AlreadyRegistered
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
_handlers[name] = noop
return
env = {}
bb.utils.better_exec(code, env)
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
else:
_handlers[name] = handler
if not mask or '*' in mask:
_catchall_handlers[name] = True
else:
for m in mask:
if _event_handler_map.get(m, None) is None:
_event_handler_map[m] = {}
_event_handler_map[m][name] = True
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
bbhands_mc.add(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
return Registered
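A hedged example of the string-handler branch above: the string is pasted under a generated "def <name>(e):" line, so its body must already be indented (the handler name and mask here are illustrative):

import bb.event

code = '    bb.note("seen %s" % e.__class__.__name__)\n'
bb.event.register("note_build_started", code, mask=["bb.event.BuildStarted"])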
def remove(name, handler, data=None):
def remove(name, handler):
"""Remove an Event handler"""
if data is not None:
if data.getVar("BB_CURRENT_MC"):
mc = data.getVar("BB_CURRENT_MC")
name = '%s%s' % (mc.replace('-', '_'), name)
_handlers.pop(name)
if name in _catchall_handlers:
_catchall_handlers.pop(name)
for event in _event_handler_map.keys():
if name in _event_handler_map[event]:
_event_handler_map[event].pop(name)
if data is not None:
bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
if name in bbhands_mc:
bbhands_mc.remove(name)
data.setVar("__BBHANDLERS_MC", bbhands_mc)
def get_handlers():
return _handlers
def set_handlers(handlers):
global _handlers
_handlers = handlers
def set_eventfilter(func):
global _eventfilter
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
if mainui:
global _uiready
_uiready = _ui_handler_seq
return _ui_handler_seq
def unregister_UIHhandler(handlerNum, mainui=False):
if mainui:
global _uiready
_uiready = False
def unregister_UIHhandler(handlerNum):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def get_uihandler():
if _uiready is False:
return None
return _uiready
# Class to allow filtering of events and specific filtering of LogRecords *before* we put them over the IPC
class UIEventFilter(object):
def __init__(self, level, debug_domains):
self.update(None, level, debug_domains)
def update(self, eventmask, level, debug_domains):
self.eventmask = eventmask
self.stdlevel = level
self.debug_domains = debug_domains
def filter(self, event):
if isinstance(event, logging.LogRecord):
if event.levelno >= self.stdlevel:
return True
if event.name in self.debug_domains and event.levelno >= self.debug_domains[event.name]:
return True
return False
eid = str(event.__class__)[8:-2]
if self.eventmask and eid not in self.eventmask:
return False
return True
def set_UIHmask(handlerNum, level, debug_domains, mask):
if not handlerNum in _ui_handlers:
return False
if '*' in mask:
_ui_logfilters[handlerNum].update(None, level, debug_domains)
else:
_ui_logfilters[handlerNum].update(mask, level, debug_domains)
return True
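A sketch of the filtering behaviour above, constructing the filter directly for illustration (in practice register_UIHhandler() builds it from bb.msg.constructLogOptions()):

import logging
import bb.event

f = bb.event.UIEventFilter(logging.WARNING, {})
rec = logging.LogRecord("BitBake.Main", logging.ERROR, __file__, 0, "boom", None, None)
print(f.filter(rec))       # True: ERROR is above the WARNING threshold

f.update(["bb.event.BuildCompleted"], logging.WARNING, {})
print(f.filter(bb.event.ConfigParsed()))   # False: not in the event mask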
def getName(e):
"""Returns the name of a class or class instance"""
if getattr(e, "__name__", None) is None:
if getattr(e, "__name__", None) == None:
return e.__class__.__name__
else:
return e.__name__
@@ -407,40 +228,36 @@ class OperationProgress(Event):
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
class MultiConfigParsed(Event):
"""Multi-Config Parsing Complete"""
def __init__(self, mcdata):
self.mcdata = mcdata
Event.__init__(self)
class RecipeEvent(Event):
def __init__(self, fn):
self.fn = fn
Event.__init__(self)
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipePostKeyExpansion(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
Recipe Tasks about to be finalised
The list of tasks should be final at this point and handlers
are only able to change interdependencies
"""
def __init__(self, fn, tasklist):
self.fn = fn
self.tasklist = tasklist
Event.__init__(self)
""" Recipe Parsing Complete but not yet finialised"""
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
class StampUpdate(Event):
"""Trigger for any adjustment of the stamp files to happen"""
def __init__(self, targets, stampfns):
self._targets = targets
self._stampfns = stampfns
Event.__init__(self)
def getStampPrefix(self):
return self._stampfns
def getTargets(self):
return self._targets
stampPrefix = property(getStampPrefix)
targets = property(getTargets)
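A hedged handler sketch using the two properties above, assuming targets and stampPrefix correspond element-wise (the handler name is illustrative):

import bb
import bb.event

def note_stamps(e):
    for target, prefix in zip(e.targets, e.stampPrefix):
        bb.note("stamp prefix for %s: %s" % (target, prefix))

bb.event.register("note_stamps", note_stamps, mask=["bb.event.StampUpdate"])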
class BuildBase(Event):
"""Base class for bitbake build events"""
"""Base class for bbmake run events"""
def __init__(self, n, p, failures = 0):
self._name = n
@@ -460,6 +277,12 @@ class BuildBase(Event):
def setName(self, name):
self._name = name
def getCfg(self):
return self.data
def setCfg(self, cfg):
self.data = cfg
def getFailures(self):
"""
Return the number of failed packages
@@ -468,65 +291,37 @@ class BuildBase(Event):
pkgs = property(getPkgs, setPkgs, None, "pkgs property")
name = property(getName, setName, None, "name property")
cfg = property(getCfg, setCfg, None, "cfg property")
class BuildInit(BuildBase):
"""buildFile or buildTargets was invoked"""
def __init__(self, p=[]):
name = None
BuildBase.__init__(self, name, p)
class BuildStarted(BuildBase, OperationStarted):
"""Event when builds start"""
"""bbmake build run started"""
def __init__(self, n, p, failures = 0):
OperationStarted.__init__(self, "Building Started")
BuildBase.__init__(self, n, p, failures)
class BuildCompleted(BuildBase, OperationCompleted):
"""Event when builds have completed"""
def __init__(self, total, n, p, failures=0, interrupted=0):
"""bbmake build run completed"""
def __init__(self, total, n, p, failures = 0):
if not failures:
OperationCompleted.__init__(self, total, "Building Succeeded")
else:
OperationCompleted.__init__(self, total, "Building Failed")
self._interrupted = interrupted
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
"""Disk full case build halted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
self._type = type
self._free = freespace
self._mountpoint = mountpoint
class DiskUsageSample:
def __init__(self, available_bytes, free_bytes, total_bytes):
# Number of bytes available to non-root processes.
self.available_bytes = available_bytes
# Number of bytes available to root processes.
self.free_bytes = free_bytes
# Total capacity of the volume.
self.total_bytes = total_bytes
class MonitorDiskEvent(Event):
"""If BB_DISKMON_DIRS is set, then this event gets triggered each time disk space is checked.
Provides information about devices that are getting monitored."""
def __init__(self, disk_usage):
Event.__init__(self)
# hash of device root path -> DiskUsageSample
self.disk_usage = disk_usage
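A hedged handler sketch over the fields documented above (the 10% threshold and handler name are illustrative):

import bb
import bb.event

def check_disk(e):
    # disk_usage maps a monitored path to a DiskUsageSample
    for path, usage in e.disk_usage.items():
        if usage.available_bytes < usage.total_bytes * 0.1:
            bb.warn("%s has less than 10%% of its space available" % path)

bb.event.register("check_disk", check_disk, mask=["bb.event.MonitorDiskEvent"])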
class NoProvider(Event):
"""No Provider for an Event"""
def __init__(self, item, runtime=False, dependees=None, reasons=None, close_matches=None):
def __init__(self, item, runtime=False, dependees=None, reasons=[]):
Event.__init__(self)
self._item = item
self._runtime = runtime
self._dependees = dependees
self._reasons = reasons
self._close_matches = close_matches
def getItem(self):
return self._item
@@ -534,28 +329,6 @@ class NoProvider(Event):
def isRuntime(self):
return self._runtime
def __str__(self):
msg = ''
if self._runtime:
r = "R"
else:
r = ""
extra = ''
if not self._reasons:
if self._close_matches:
extra = ". Close matches:\n %s" % '\n '.join(sorted(set(self._close_matches)))
if self._dependees:
msg = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, self._item, ", ".join(self._dependees), r, extra)
else:
msg = "Nothing %sPROVIDES '%s'%s" % (r, self._item, extra)
if self._reasons:
for reason in self._reasons:
msg += '\n' + reason
return msg
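An illustrative construction of the message assembled above, using the newer constructor signature shown on one side of this diff (output spacing approximate):

import bb.event

e = bb.event.NoProvider("libfo", close_matches=["libfoo"])
print(str(e))
# Nothing PROVIDES 'libfo'. Close matches:
#   libfoo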
class MultipleProviders(Event):
"""Multiple Providers"""
@@ -583,16 +356,6 @@ class MultipleProviders(Event):
"""
return self._candidates
def __str__(self):
msg = "Multiple providers are available for %s%s (%s)" % (self._is_runtime and "runtime " or "",
self._item,
", ".join(self._candidates))
rtime = ""
if self._is_runtime:
rtime = "R"
msg += "\nConsider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, self._item)
return msg
class ParseStarted(OperationStarted):
"""Recipe parsing for the runqueue has begun"""
def __init__(self, total):
@@ -666,27 +429,6 @@ class TargetsTreeGenerated(Event):
Event.__init__(self)
self._model = model
class ReachableStamps(Event):
"""
An event listing all stamps reachable after parsing
which the metadata may use to clean up stale data
"""
def __init__(self, stamps):
Event.__init__(self)
self.stamps = stamps
class StaleSetSceneTasks(Event):
"""
An event listing setscene tasks which are 'stale' and will
be rerun. The metadata may use this to clean up stale data.
tasks is a mapping of tasks and matching stale stamps.
"""
def __init__(self, tasks):
Event.__init__(self)
self.tasks = tasks
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
@@ -697,6 +439,14 @@ class FilesMatchingFound(Event):
self._pattern = pattern
self._matches = matches
class CoreBaseFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, paths):
Event.__init__(self)
self._paths = paths
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
@@ -739,15 +489,6 @@ class MsgFatal(MsgBase):
class MsgPlain(MsgBase):
"""General output"""
class LogExecTTY(Event):
"""Send event containing program to spawn on tty of the logger"""
def __init__(self, msg, prog, sleep_delay, retries):
Event.__init__(self)
self.msg = msg
self.prog = prog
self.sleep_delay = sleep_delay
self.retries = retries
class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
@@ -756,10 +497,7 @@ class LogHandler(logging.Handler):
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.exc_info = None
fire(record, None)
@@ -767,87 +505,33 @@ class LogHandler(logging.Handler):
record.taskpid = worker_pid
return True
class MetadataEvent(Event):
class RequestPackageInfo(Event):
"""
Generic event targeted at OE-Core classes
to report information during asynchronous execution
Event to request package information
"""
def __init__(self, eventtype, eventdata):
Event.__init__(self)
self.type = eventtype
self._localdata = eventdata
class ProcessStarted(Event):
class PackageInfo(Event):
"""
Generic process started event (usually part of the initial startup)
where further progress events will be delivered
Package information for GUI
"""
def __init__(self, processname, total):
def __init__(self, pkginfolist):
Event.__init__(self)
self.processname = processname
self.total = total
class ProcessProgress(Event):
"""
Generic process progress event (usually part of the initial startup)
"""
def __init__(self, processname, progress):
Event.__init__(self)
self.processname = processname
self.progress = progress
class ProcessFinished(Event):
"""
Generic process finished event (usually part of the initial startup)
"""
def __init__(self, processname):
Event.__init__(self)
self.processname = processname
self._pkginfolist = pkginfolist
class SanityCheck(Event):
"""
Event to run sanity checks, either raise errors or generate events as return status.
Event to issue sanity check
"""
def __init__(self, generateevents = True):
Event.__init__(self)
self.generateevents = generateevents
class SanityCheckPassed(Event):
"""
Event to indicate sanity check has passed
Event to indicate sanity check is passed
"""
class SanityCheckFailed(Event):
"""
Event to indicate sanity check has failed
"""
def __init__(self, msg, network_error=False):
def __init__(self, msg):
Event.__init__(self)
self._msg = msg
self._network_error = network_error
class NetworkTest(Event):
"""
Event to run network connectivity tests, either raise errors or generate events as return status.
"""
def __init__(self, generateevents = True):
Event.__init__(self)
self.generateevents = generateevents
class NetworkTestPassed(Event):
"""
Event to indicate network test has passed
"""
class NetworkTestFailed(Event):
"""
Event to indicate network test has failed
"""
class FindSigInfoResult(Event):
"""
Event to return results from findSigInfo command
"""
def __init__(self, result):
Event.__init__(self)
self.result = result

@@ -1,9 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
@@ -37,14 +32,7 @@ class TracebackEntry(namedtuple.abc):
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
try:
if not arginfo.args:
return '', None
# There have been reports from the field of python 2.6 which doesn't
# return a namedtuple here but simply a tuple so fallback gracefully if
# args isn't present.
except AttributeError:
if not arginfo.args:
return '', None
firstarg = arginfo.args[0]
@@ -91,6 +79,6 @@ def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, str):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)

Some files were not shown because too many files have changed in this diff