Compare commits

711 Commits
2.4_M1 ... dora

Author SHA1 Message Date
Scott Rifenbark
8e9e7e8def ref-manual: Corrected the "package_rpm.bbclass" section.
A cut-and-paste error had left a "package_deb" string in the
first sentence of the section.  Replaced with "package_rpm."

Reported-by: Geoffroy VanCutsem <geoffroy.vancutsem@intel.com>
(From yocto-docs rev: 8eb1ef532754def5d1d15b98b3b45f259af0edc0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-04-10 14:29:00 +01:00
Scott Rifenbark
e035c59d7d documentation: Updated the date in Manual Revision History table
Updated the manuals for a December 2014 1.5.4 release date.

(From yocto-docs rev: 34d5133e8b97a8debb2458733a2b15ad3f0988d0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-09 15:11:29 +00:00
Richard Purdie
2490a020db yocto-bsp: Update qemu inclusion lists
Update qemu tune definitions to match changes in main qemu machines.

[YOCTO #6482]

(From meta-yocto rev: 6c9bbe848b4134a38ffac2c1ac02f32368f55081)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-12-04 16:19:08 +00:00
Richard Purdie
de14d21bc6 build-appliance-image: Update to dora head revision
(From OE-Core rev: 3f615a120c96b2b4f4312764d0b882f12bb40597)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-25 15:30:35 +00:00
Richard Purdie
0f1edf6ea5 poky: Update version to 1.5.4
(From meta-yocto rev: 814bf2d26e6f7a13d8eeaf88cbb0c23024e7c4ab)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-25 15:30:20 +00:00
Richard Purdie
b852276a7f build-appliance-image: Update to dora head revision
(From OE-Core rev: 176ba0980711067687d755fe152bbb1655474a55)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-25 15:29:15 +00:00
Hongxu Jia
9b385da2f0 systemtap: fix do_compile failed on fedora21
For dora, the systemtap-native do_compile failed on fedora21
...
| In file included from /usr/include/stdio.h:27:0,
|                  from tmp/work/x86_64-linux/systemtap-native/2.3+gitAUTOINC+e58138572e-r0/git/staprun/staprun.h:18,
|                  from tmp/work/x86_64-linux/systemtap-native/2.3+gitAUTOINC+e58138572e-r0/git/staprun/staprun.c:24:
| /usr/include/features.h:148:3: error: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [-Werror=cpp]
|  # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE"
...

We backport a patch from 2.6 to fix this issue

(From OE-Core rev: ecc9bb098d69fb1b8920d8ecfaff5c9741859c4d)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-25 15:28:54 +00:00
Alejandro Hernandez
12dd4279c5 ltp: Added zip-native as a DEPENDS
The Makefile checks for zip during installation

[YOCTO #6699]

(From OE-Core rev: a6e8ced3fa8e8e2aa3df0798b80eb26e5ebc4b15)

(Backport to older version 20130503)

(From OE-Core rev: f9c647b4373e09e788580fd59c4c42a9bcec9268)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-21 16:49:41 +00:00
Richard Purdie
ae9e8e177f package.bbclass: Add CONFFILES to list of package specific variables
Changes to CONFFILES should change the sstate checksum. To make that happen,
CONFFILES needs to be in the list of package-specific variables, so add it.

(From OE-Core rev: 9db71fa03b9d5f5307b2d09e7aa89f46f622aa09)

(From OE-Core rev: 874f7953d96367e4b2310b73a787ceb853891a64)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-12 13:17:21 +00:00
Scott Rifenbark
4a73e4b463 profile-manual: Updates to the LTTng Documentation section.
The LTTng Documentation website has been updated to actually
have extensive documentation now.  Previously, in the profile-manual,
we were stating that documentation did not exist, which was true
at the time of writing.  I updated the section to link to the
main LTTng documentation website and altered some other text in
the section appropriately.

Additionally, I found and corrected a couple spelling errors in
this chapter.

(From yocto-docs rev: c56dab059b6edc02ffa2406b8e51c591e20fbe65)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-08 11:14:57 +00:00
Richard Purdie
14dac36b87 build-appliance-image: Update to dora head revision
(From OE-Core rev: 9e757511d50bbb57805a43ef9fcbcb9f44939b50)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 14:27:25 +00:00
Richard Purdie
c3319b70c7 build-appliance-image: Update to daisy head revision
(From OE-Core rev: 6d962bbfb811e6474eeebb9b54e73479344dc70e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 14:23:14 +00:00
Sona Sarmadi
faaf75d24f openssl: Fix for CVE-2014-3568
Fix for no-ssl3 configuration option

This patch is a backport from OpenSSL_1.0.1j.

(From OE-Core rev: 97e7b7a96178cf32411309f3e9e3e3b138d2050b)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 11:42:31 +00:00
Sona Sarmadi
10b5ec0ec8 openssl: Fix for CVE-2014-3567
Fix for session tickets memory leak.

This patch is a backport from OpenSSL_1.0.1j.

(From OE-Core rev: 420a8dc7b84b03a9c0a56280132e15b6c9a8b4df)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 11:42:31 +00:00
Sona Sarmadi
f4e20ca712 openssl: Fix for CVE-2014-3513
Fix for SRTP Memory Leak

This patch is a backport from OpenSSL_1.0.1j.

(From OE-Core rev: 6c19ca0d5aa6094aa2cfede821d63c008951cfb7)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 11:42:30 +00:00
Sona Sarmadi
98408832c2 openssl: Fix for CVE-2014-3566
OpenSSL_1.0.1 SSLV3 POODLE VULNERABILITY (CVE-2014-3566)

This patch is a backport from OpenSSL_1.0.1j.

(From OE-Core rev: 47633059a8556c03c0eaff2dd310af87d33e2b28)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-11-06 11:42:30 +00:00
Scott Rifenbark
f529d7c727 poky.ent: Updated the YOCTO_RELEASE_NOTES variable.
This variable now needs to have the form
"&YOCTO_HOME_URL;/downloads/core/&DISTRO_NAME;&DISTRO_COMPRESSED;"
The old form was causing the release team to have to hand-redirect
the three links in the YP manuals that resolve to the release notes.

(From yocto-docs rev: 312790be8f7ff8089213f14cf2d7765dc43f5977)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-31 17:53:46 +00:00
Catalin Popeanga
bf7ac0aaa8 bash: Fix-for-CVE-2014-6278
This vulnerability exists because of an incomplete fix for CVE-2014-6271, CVE-2014-7169, and CVE-2014-6277

See: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-6278

(From OE-Core daisy rev: de596b5f31e837dcd2ce991245eb5548f12d72ae)

(From OE-Core rev: 1e155330f6cf132997b91a7cfdfe7de319410566)

Signed-off-by: Catalin Popeanga <Catalin.Popeanga@enea.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-13 11:18:39 +01:00
Catalin Popeanga
b03f4da548 bash: Fix for CVE-2014-6277
Follow-up to bash42-049 to properly parse function definitions in the
values of environment variables, so that remote attackers cannot
execute arbitrary code or cause a denial of service.

See: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-6277

(From OE-Core daisy rev: 85961bcf81650992259cebb0ef1f1c6cdef3fefa)

(From OE-Core rev: 5a802295d1f40af6f21dd3ed7e4549fe033f03a0)

Signed-off-by: Catalin Popeanga <Catalin.Popeanga@enea.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-13 11:18:39 +01:00
Catalin Popeanga
db7891c164 bash: Fix for CVE-2014-7186 and CVE-2014-7187
This is a follow-up patch to the incomplete CVE-2014-6271 fix for code
execution via a specially-crafted environment.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7186
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7187

(From OE-Core daisy rev: 153d1125659df9e5c09e35a58bd51be184cb13c1)

(From OE-Core rev: bdfe1e3770aeee9a1a7c65d4834f1a99820d3140)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-13 11:18:38 +01:00
Catalin Popeanga
f7dba9940c bash: Fix for exported function namespace change
This is a follow-up patch to the incomplete CVE-2014-6271 fix for code
execution via a specially-crafted environment.

This patch changes the encoding bash uses for exported functions to avoid
clashes with shell variables and to avoid depending only on an environment
variable's contents to determine whether or not to interpret it as a shell
function.

(From OE-Core daisy rev: 6c51cc96d03df26d1c10867633e7a10dfbec7c45)

(From OE-Core rev: af1f65b57dbfcaf5fc7c254dce80ac55f3a632cb)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-13 11:18:38 +01:00
Paul Eggleton
634b753f84 bash: add missing patch for CVE-2014-7169 to 4.2 recipe
The bash_4.2 recipe was missed when the fix was backported to the dora
branch.

Patch from OE-Core master rev: 76a2d6b83472995edbe967aed80f0fcbb784b3fc
by Khem Raj <raj.khem@gmail.com>

(From OE-Core rev: a71680ec6e12c17159336dc34d904cb70155d0d7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-02 16:39:42 +01:00
Paul Eggleton
5881ef9299 bash: add missing patch for CVE-2014-6271 to 4.2 recipe
The bash_4.2 recipe was missed when the fix was backported to the dora
branch.

Patch based on the one from OE-Core master rev
798d833c9d4bd9ab287fa86b85b4d5f128170ed3 by Ross Burton
<ross.burton@intel.com>, with the content replaced from the
appropriate upstream patch.

(From OE-Core rev: 74d45affd5cda2e388d42db3322b4a0d5aff07e8)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-02 16:39:42 +01:00
Khem Raj
51477e3c54 bash: Fix CVE-2014-7169
This is a follow-up patch to the incomplete CVE-2014-6271 fix for
code execution via a specially-crafted environment.

Change-Id: Ibb0a587ee6e09b8174e92d005356e822ad40d4ed
(From OE-Core master rev: 76a2d6b83472995edbe967aed80f0fcbb784b3fc)

(From OE-Core rev: 1c8f43767c7d78872d38652ea808f30ea825bbef)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-02 12:48:10 +01:00
Ross Burton
3bde5804e3 bash: fix CVE-2014-6271
CVE-2014-6271 aka ShellShock.

"GNU Bash through 4.3 processes trailing strings after function definitions in
the values of environment variables, which allows remote attackers to execute
arbitrary code via a crafted environment."
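
For context, a minimal shell sketch of the widely circulated check for this
vulnerability (not part of the patch itself):

    # A vulnerable bash executes the trailing command smuggled in the crafted
    # variable and prints "vulnerable"; a patched bash prints only the test line.
    env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'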

(From OE-Core master rev: 798d833c9d4bd9ab287fa86b85b4d5f128170ed3)

(From OE-Core rev: 05eecceb4d2a5821cd0ca0164610e9e6d68bb22c)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-02 12:48:10 +01:00
Tobias Blom
c084a21b1a apmd.service: Fix typo (not mandatory EnvironmentFile prefix)
The prefix to EnvironmentFile should precede the filename.

(From OE-Core rev: 1f694e4cb493b0737b6009382c0957e6837ebbed)

(From OE-Core rev: 32e43d08533a20d2d8be7f6cb83360564601f4a4)

Signed-off-by: Tobias Blom <tobias.blom@techne-dev.se>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-10-02 12:48:08 +01:00
Scott Rifenbark
7231fc0b4f documentation: Steps to prepare for 1.5.4 YP doc set.
With a release of YP, there are certain steps you need to take
to the documentation set to prepare it for development of the
next release in the branch.  This commit takes care of those
steps in preparation for the YP 1.5.4 release:

1. Updated all manuals that have a manual revision history
   table so that they have a new "TBD" entry for the 1.5.4
   release.

2. Updated the poky.ent file so that the appropriate variables
   support the 1.5.4 work.

3. Updated the mega-manual.sed file to replace the 1.5.3
   strings with 1.5.4 so that all links in the manual are
   self-contained and properly processed.

(From yocto-docs rev: ffb3175cb6cf5859a7bb134af4c9f49e9e350c30)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-09-22 13:04:51 +01:00
Diego Sueiro
6d0a1893d1 qt4: Fix Qt 4.8.5 source to new location
Qt 4.8.5 was moved from http://download.qt-project.org/official_releases/qt/4.8/ to
http://download.qt-project.org/archive/qt/4.8/

This fix must be applied to the dora and daisy branches.

(From OE-Core rev: 5c51dd2e9bab54013652475888554bc4660dcff3)

Signed-off-by: Diego Sueiro <diego.sueiro@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-08-11 12:08:13 +01:00
Martin Jansa
ed68cb87bc gcc-4.8: backport fix for ICE when building opus
* backported from 4.8.2, so daisy isn't affected

(From OE-Core rev: 3aba676cb5d81ceaee85ca87d9ae706242f3454b)

Signed-off-by: Martin Jansa <martin.jansa@lge.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-08-11 12:06:56 +01:00
Martin Jansa
60cdebd50c cairo: explicitly disable LTO support by backporting patch which removes it
* cairo-native was failing to build in gentoo with gcc-4.9 and LTO
  enabled, more details in upstream bug
  https://bugs.freedesktop.org/show_bug.cgi?id=77060

(From OE-Core rev: 7b5c0f7dae89c9b46ffeb31d98cbfe286b55dc13)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-08-11 12:06:56 +01:00
Khem Raj
e72727500d binutils: Fix building nativesdk binutils with gcc 4.9
Patches explain the issue in detail but this is exposed
with gcc 4.9 in binutils 2.23.2

(From OE-Core rev: fc5c467b680fc5aef4b0f689e6988e17a9322ae0)

(From OE-Core rev: 4dfb8847ebf8aab90ad8888933468e2899c96998)

(From OE-Core rev: af347d3298e15552d502d5b2ce497bbda9705bc7)

(From OE-Core rev: 07a7228392ec5157616888cee1eb119f4adb39a7)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-27 08:37:49 +01:00
Scott Rifenbark
90ea79e515 ref-manual: Updated note in the "CentOS Packages" section.
We want to encourage installation of the buildtools tarball for
getting the most up-to-date packages on this build host.

(From yocto-docs rev: 92dbc6e90dffaefc4a91bab81532d24de0d631cc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-25 09:18:28 +01:00
Scott Rifenbark
c84c536019 dev-manual: Fixed broken link to MACHINE variable.
(From yocto-docs rev: bdbadd1ccb2648482a40335921b2076f0149a0c0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-23 09:59:20 +01:00
Scott Rifenbark
60907ba907 dev-manual: Updates to the "Creating Partitioned Images" section.
These updates are to the wic section.  I have updated the syntax
and some requirements for running and using wic.  The original
information was never reviewed before appearing in only the 1.5.2
version of the dev-manual.

(From yocto-docs rev: 66c755f2753c52bdb304281d2109c2c253941d35)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-22 15:05:14 +01:00
Richard Purdie
dc743744d8 build-appliance-image: Update to dora head revision
(From OE-Core rev: 026d26e3b6c2f608cc03aa00fe1fb1ace9e070d8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-08 16:27:53 +01:00
Richard Purdie
4278b11da9 poky.conf: Bump version for 1.5.3 dora release
(From meta-yocto rev: 9ad69dd83856cd5a9fd4b1fc50fc6d5d6d349560)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-08 16:26:41 +01:00
Richard Purdie
5d1f0c0160 build-appliance-image: Update to dora head revision
(From OE-Core rev: 2bfb8cbe773f6e496ed6192c94a74db1293d72eb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-08 16:23:43 +01:00
Roy Li
acb65ef18e opkg: putting the service files into PN
(From OE-Core rev: f0ec7f81c1951211f049c342fd6bd1cad424564a)

[YOCTO #6392]

(From OE-Core rev: b76a5dd195000d157034f1f0a9a35d4ba4680e60)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-08 16:16:10 +01:00
Chen Qi
c4a539c8c8 populate-extfs.sh: fix to handle special file names correctly
`debugfs' treats spaces and double quotes ("") specially. So when we are
dealing with file names, great care should be taken to make sure that
`debugfs' recognizes file names correctly.

The basic solution here is:
1. Use quotation marks to handle spaces correctly.
2. Replace "xxx" with ""xxx"" so that debugfs knows that the quotation
   marks are parts of the file name (see the sketch below).
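
A minimal shell sketch of that quoting scheme (illustrative only; the variable
names and the mkdir request are placeholders, not the actual patch):

    # Wrap the name in quotes and double any embedded quotes so debugfs treats
    # them as part of the file name rather than as delimiters.
    name='dir with spaces and "quotes"'
    quoted=$(printf '%s' "$name" | sed 's/"/""/g')
    debugfs -w -R "mkdir \"$quoted\"" "$IMAGE"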

[YOCTO #6503]

(From OE-Core rev: 24f17607e996c499c8f86eda0588d02af1e960b9)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-08 16:09:58 +01:00
Richard Purdie
845df01345 libtool-cross/native: Force usage of bash due to sstate inconsistencies
Scenario:
a) libtool script is built on system with bash as /bin/sh
b) machine B installs sstate from build a)
c) machine B has dash as /bin/sh

In this scenario, the script fails to work properly since it expects
/bin/sh to have bash-like syntax, which it no longer does.

This patch forces the configure process to use /bin/bash, not /bin/sh
and hence allows the scripts to work correctly when used from sstate.
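
A minimal sketch of the idea, assuming the usual autoconf mechanism (the exact
recipe change may differ): pin the shell that configure records so the
generated libtool script does not depend on what /bin/sh happens to be.

    # Run configure under bash and record bash as SHELL in the generated
    # scripts, so a host whose /bin/sh is dash can still use them from sstate.
    CONFIG_SHELL=/bin/bash ./configure SHELL=/bin/bash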

(From OE-Core rev: 24d5b449e5f4d91119f0d8e13c457618811aadfc)

(From OE-Core rev: 330c3085317a0b0981163ff5c41c54596e0d127d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-03 14:56:45 +01:00
Henning Heinold
2e2a6d0c4e perf: split packaging
* some fundamental perf commands can work
  without a dependency on perl, python or bash;
  make them separate packages and RSUGGEST them

* bump PR

The patch was sponsored by sysmocom

(From OE-Core rev: a6f79561f7a2f6bc354d5ea8d84b836ac5c9b08f)

Signed-off-by: Henning Heinold <henning@itconsulting-heinold.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-03 13:47:22 +01:00
Henning Heinold
a63f07c4ce perf: add slang to the dependencies
* TUI/GUI support was added in 2.6.35 based on libnewt
* since 3.10 slang replaced libnewt completely
* changing TUI_DEFINES is not necessary, because NO_NEWT is
  still respected with newer kernels
* add comment about the gui history to the recipe

The patch was sponsored by sysmocom

(From OE-Core rev: 104e317f1fe68244d31c72897df2e5c997ff502a)

Signed-off-by: Henning Heinold <henning@itconsulting-heinold.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-03 13:47:22 +01:00
Henning Heinold
19f3e362b3 perf: fix broken shell comparison in do_install
The patch was sponsored by sysmocom

(From OE-Core rev: 7e38d8ad6f7f4c289975acdac5c4d254ff3df7e6)

Signed-off-by: Henning Heinold <henning@itconsulting-heinold.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-03 13:47:22 +01:00
Stéphane Cerveau
4a18e162d8 e2fsprogs: Fix populate-extfs.sh
Fix the use of the dirname command on Ubuntu 12.04.
dirname does not accept a space in a file name.
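
An illustration of the failure mode (paths made up):

    f='/some dir/with spaces/file.txt'
    dirname $f      # unquoted: word-splits into several operands and misbehaves
    dirname "$f"    # quoted: prints '/some dir/with spaces'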

(From OE-Core rev: ab6bd289d51c3c44862b43241a99d3e4f3ff13c0)

Signed-off-by: Stéphane Cerveau <scerveau@connected-labs.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-07-03 13:47:22 +01:00
Khem Raj
7c3f509c06 prelink: Fix SRC_URI
The SHA we use is actually on the cross_prelink branch.
If you do not use the Yocto source mirrors, then the fetch
for prelink on dora fails due to the missing branch in SRC_URI.

(From OE-Core rev: 13b57cab7cdd2bf967622ec5015478dc56938b8b)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:06:32 +01:00
Chen Qi
05f172c745 populate-extfs.sh: keep file timestamps
Fix populate-extfs.sh to keep file timestamps while generating the
ext file systems.

[YOCTO #6348]

(From OE-Core rev: f8c0359edc2ce740e13e874ea189770ff99d1525)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:05:31 +01:00
Mark Hatle
47afe5bcfa rpm: Fix rpm -V usage
[YOCTO #6309]

It appears a logic issue has caused rpm -V to no longer
verify that the files on the filesystem match what was installed.

(From OE-Core master rev: 117862cd0eebf6887c2ea6cc353432caee2653aa)

(From OE-Core rev: 9f9bcad51381887819d58ffdde2e41307d342473)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:05:31 +01:00
Jonathan Liu
c60886f9f5 consolekit: fix console-kit-log-system-start.service startup
console-kit-log-system-start.service fails to start if the
/var/log/ConsoleKit directory does not exist. Normally it is created
automatically but as we mount a tmpfs at /var/log, we need to add
a tmpfiles.d entry to create it.
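
A tmpfiles.d entry of roughly this shape does the job (sketch; the conf file
name is an assumption):

    # 'd' asks systemd-tmpfiles to create the directory at boot if it is missing.
    echo 'd /var/log/ConsoleKit 0755 root root -' \
        > /etc/tmpfiles.d/consolekit.conf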

(From OE-Core master rev: 2a9a14bf400fe0c263c58aa85b02aba7311b1328)

(From OE-Core rev: 305da37a4dc0fba2b8f3219cfae47a1d4228f244)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:05:31 +01:00
Chen Qi
3ceb90eacd populate-extfs.sh: error out if debugfs encounters some error
Previously, even if we encounter some error when populating the
ext filesystem, we don't error out and the rootfs process still
succeeds.

However, what's really expected is that the populate-extfs.sh script
should error out if something wrong happens when using `debugfs' to
generate the ext filesystem. For example, if there's not enough block
in the filesystem, and allocating a block for some file fails, the
failure should not be ignored. Otherwise, we will have a successful
build but a corrupted filesystem.

debugfs returns 0 as long as the command is valid. That is, even
if the command fails, debugfs still returns 0. That's really a
pain here. That's why this patch checks the error output to see whether
any error was logged.
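
A minimal sketch of that check (illustrative; the file names and the error
patterns matched are assumptions, not the exact patch):

    # debugfs exits 0 even when a request fails, so inspect its output instead.
    out=$(debugfs -w -f "$CMDFILE" "$IMAGE" 2>&1)
    echo "$out"
    if echo "$out" | grep -qi 'error\|could not'; then
        echo "populate-extfs: debugfs reported an error" >&2
        exit 1
    fi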

(From OE-Core rev: 468d3e60ee10348578f78f846e87c02359fdb8bf)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:04:18 +01:00
Chen Qi
8c346a66b5 populate-extfs.sh: fix to handle /var/lib/opkg/alternatives/[[ correctly
There was a patch trying to fix this problem by using 'dirname', but it
caused some build failures, thus got reverted.

The problem is that $DIR might be empty and we should first do the check
before trying to use $(dirname $DIR).
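
A minimal sketch of the guard (variable names are illustrative):

    # Only take dirname of a non-empty path; dirname of "" is ".", which would
    # point debugfs at the wrong directory.
    if [ -n "$DIR" ]; then
        PARENT=$(dirname "$DIR")
    fi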

[YOCTO #5712]

(From OE-Core rev: 8277c71747758e2ba0815a6f5cd11c9e0c9c90ce)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-24 11:04:18 +01:00
Scott Rifenbark
09d260e3e5 profile-manual: Fixed a transposed title.
I had the actual title of the manual as displayed in the section
heading for Chapter One wrong.

(From yocto-docs rev: e61b251da0d8225f7497b2b7a0a8c8d1510a429b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:31 +01:00
Scott Rifenbark
8cc8941821 dev-manual: Fixed a link that was broken in the mega-manual.
Found a link in the dev-manual that had a hard return splitting
the link across two lines.  The mega-manual.sed file cannot process
those links so it ignores them.

(From yocto-docs rev: fabd8d47b4a5ce1e108ad282d9903e3b1daa5f3d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:31 +01:00
Scott Rifenbark
afec960d87 mega-manual.sed: Fixed search string problem for profile-manual.
Found a very subtle problem with the search string that processes
links to the Yocto Project Profiling and Tracing Manual where the
links go to the top-level (i.e. no ID tag in the link).

I had the name of the manual as "Yocto Project Profile and
Tracing Manual", which means there would never be a match.
Consequently, when the Makefile called the mega-manual.sed file
to process the links in mega-manual.html, any top-level link
to that manual was not processed and was being left as a hard
link to the versioned manual.  Processing a top-link should
convert it to a non-link (for now).

(From yocto-docs rev: 38c7971abe19293657f0170ecd8dc28c1047859b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>

Conflicts:

	documentation/tools/mega-manual.sed
        Had to clean up some conflicts to get the cherry-pick
        to work.  It seems the line for the profile manual was
        not even in this sed file.  Also, had to reset the
        1.4.4 strings to 1.5.3.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:31 +01:00
Scott Rifenbark
3a980abd28 documentation: Updated manual history tables.
Added a new entry to support the 1.5.3 release.  Using July 2014
as the release month and year.

(From yocto-docs rev: fcd6046b8b2a5606e77d14cffa0bd2eebbe1748a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:31 +01:00
Scott Rifenbark
780d5d0b91 mega-manual.sed: Updated release string to support 1.5.3 release.
(From yocto-docs rev: d89818c7e258a546726c9fbe5f338f7917773a29)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:31 +01:00
Scott Rifenbark
3fb2ce03a2 poky.ent: Updated variables to support 1.5.3 release.
(From yocto-docs rev: bb35f7584ab40d5689d3d4ff27410b106f1e9bd6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-18 10:30:30 +01:00
Khem Raj
527868fbfc x264: Update SRCREV to match commit in upstream git repo
It seems that 585324fee380109acd9986388f857f413a60b896 is no
longer there in git and it has been rewritten to
ffc3ad4945da69f3caa2b40e4eed715a9a8d9526

Change-Id: I9ffe8bd9bcef0d2dc5e6f6d3a6e4317bada8f4be
(master rev: b193c7f251542aa76cb5a4d6dcb71d15b27005eb)

(From OE-Core rev: b7371b49b4b83c2e864126480b65363fe9f2cfd2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Patrick Doyle <wpdster@gmail.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-17 18:00:07 +01:00
Yue Tao
381c6b8957 openssl: fix for CVE-2010-5298
Race condition in the ssl3_read_bytes function in s3_pkt.c in OpenSSL
through 1.0.1g, when SSL_MODE_RELEASE_BUFFERS is enabled, allows remote
attackers to inject data across sessions or cause a denial of service
(use-after-free and parsing error) via an SSL connection in a
multithreaded environment.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-5298

(From OE-Core master rev: 751f81ed8dc488c500837aeb3eb41ebf3237e10b)

(From OE-Core rev: 3cc799213e6528fc9fb4a0c40a01a1817484f499)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:24 +01:00
Paul Eggleton
8ac53f3c2d openssl: fix CVE-2014-3470
http://www.openssl.org/news/secadv_20140605.txt

Anonymous ECDH denial of service (CVE-2014-3470)

OpenSSL TLS clients enabling anonymous ECDH ciphersuites are subject to a
denial of service attack.

(Patch borrowed from Fedora.)

(From OE-Core rev: fe4e278f1794dda2e1aded56360556fe933614ca)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:24 +01:00
Paul Eggleton
0ea0a14bd9 openssl: fix CVE-2014-0224
http://www.openssl.org/news/secadv_20140605.txt

SSL/TLS MITM vulnerability (CVE-2014-0224)

An attacker using a carefully crafted handshake can force the use of weak
keying material in OpenSSL SSL/TLS clients and servers. This can be exploited
by a Man-in-the-middle (MITM) attack where the attacker can decrypt and
modify traffic from the attacked client and server.

The attack can only be performed between a vulnerable client *and*
server. OpenSSL clients are vulnerable in all versions of OpenSSL. Servers
are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users
of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.

(Patch borrowed from Fedora.)

(From OE-Core rev: f19dbbc864b12b0f87248d3199296b41a0dcd5b0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:24 +01:00
Paul Eggleton
bd1a6f3d56 openssl: fix CVE-2014-0221
http://www.openssl.org/news/secadv_20140605.txt

DTLS recursion flaw (CVE-2014-0221)

By sending an invalid DTLS handshake to an OpenSSL DTLS client the code
can be made to recurse eventually crashing in a DoS attack.

Only applications using OpenSSL as a DTLS client are affected.

(Patch borrowed from Fedora.)

(From OE-Core rev: 6506f8993c84b966642ef857bb15cf96eada32e8)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:24 +01:00
Paul Eggleton
d6f29c0154 openssl: use upstream fix for CVE-2014-0198
This replaces the fix for CVE-2014-0198 with one borrowed from Fedora,
which is the same as the patch which was actually applied upstream for
the issue, i.e.:

https://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=b107586c0c3447ea22dba8698ebbcd81bb29d48c

(From OE-Core rev: 21fa437a37dad14145b6c8c8c16c95f1b074e09c)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:24 +01:00
Paul Eggleton
c5d81c3386 openssl: fix CVE-2014-0195
http://www.openssl.org/news/secadv_20140605.txt

DTLS invalid fragment vulnerability (CVE-2014-0195)

A buffer overrun attack can be triggered by sending invalid DTLS fragments
to an OpenSSL DTLS client or server. This is potentially exploitable to
run arbitrary code on a vulnerable client or server.

Only applications using OpenSSL as a DTLS client or server are affected.

(Patch borrowed from Fedora.)

(From OE-Core rev: c707b3ea9e1fbff2c6a82670e4b1af2b4f53d5e2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-10 17:12:23 +01:00
Valentin Popa
ad2c79b0fd gnutls: patch for CVE-2014-3466 backported
Backported patch for CVE-2014-3466.
This patch is for dora.

(From OE-Core rev: 68da848e0f7f026bf18707d8d59143177ff66f9b)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-06 10:27:51 +01:00
Saul Wold
c7432a006e busybox: fix meta-yocto's bbappend's FILESEXTRAPATH
The FILESEXTRAPATH was not getting used correctly since our distro
OVERRIDE is for poky-tiny, not poky, so just remove it. Also, we are
not using a version directory, so ensure we get the correct BPN (Base
Package Name).

[YOCTO #6353]

(From meta-yocto rev: 43e5c7a92dc06f95ef3110fb404bd07eccc2140a)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-06 09:30:25 +01:00
Richard Purdie
e6aafde7d2 poky.conf: Fix DISTRO_VERSION to be 1.5.2
(From meta-yocto rev: a55c4e66c2cdf72576baa9bb431ccfababcac585)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-06-06 09:30:25 +01:00
Maxin B. John
1974599046 openssl: fix CVE-2014-0198
A null pointer dereference bug was discovered in do_ssl3_write().
An attacker could possibly use this to cause OpenSSL to crash, resulting
in a denial of service.

https://access.redhat.com/security/cve/CVE-2014-0198

(From OE-Core rev: 4c58fe468790822fe48e0a570779979c831d0f10)

Signed-off-by: Maxin B. John <maxin.john@enea.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-21 09:32:55 +01:00
Scott Rifenbark
0a6f0dbf94 mega-manual.sed: Updated the link version to 1.5.2
(From yocto-docs rev: 2e0cf7319ec72e8ccbf93b4a6602f3ab20259588)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-12 20:31:59 +01:00
Scott Rifenbark
3decaf2620 documentation: Updated the manual revision history tables for 1.5.2
(From yocto-docs rev: c3674816afea52cc37ae842577f8eebf34d20d69)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-12 20:31:59 +01:00
Scott Rifenbark
24935b0c09 poky.ent: Updated the variables to support the 1.5.2 point release.
(From yocto-docs rev: 5d1921371e44c7830a2e2f1d6b6b7553277a3370)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-12 20:31:59 +01:00
Robert Yang
6a92f7ede3 bitbake: fetch2/__init__.py: let try_mirror_url return correct value
The fetcher will try:

1) PREMIRROR
2) Upstream
3) MIRROR

If it fails to download from the Upstream, but succeeds from the MIRROR,
and ud.localpath != origud.localpath (for example, the git tarball),
then we will get the error (e.g.: xf86-video-omapfb):

ERROR: Function failed: Fetcher failure for URL: 'xxx'. Unable to fetch URL from any source.
ERROR: Logfile of failure stored in: /path/to/log.do_fetch.28024

It should not show the error and should let the build go on, since the
fetch succeeds (e.g. xf86-video-omapfb).

[YOCTO #5686]

(Bitbake rev: 3bb3f1823bdd46ab34577d43f1e39046a32bca77)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-06 16:16:33 +01:00
Richard Purdie
8a5af7ff33 bitbake: fetch2: Fix mirror repo tarball creation
A typo meant that the mirror creation method wasn't being called
when it should have been. Fix the typo to fix mirror tarball creation.

[YOCTO #5284]

(Bitbake rev: 66cdc2e21660847c50317e8bfd28cf3595422e28)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-05-06 16:16:33 +01:00
Saul Wold
b626e109e8 build-appliance: Update to Dora 1.5.2
Fix to be HEAD of Dora, not master

(From OE-Core rev: abc158bf873bb7c01414e437eea2b538eb73881c)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-29 22:46:16 +01:00
Richard Purdie
e34b38b723 build-appliance-image: Update to head revision
(From OE-Core rev: d18553830ed3377b40878df1b0bef4e8e109bec3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-29 18:01:09 +01:00
Hongxu Jia
e07904836a make: fix invoking makeinfo failed at do_install time
Reproduce steps:
$ bitbake texinfo-native
$ bitbake make
$ bitbake make -cdevshell
In the devshell:
root:make-3.82# echo "" >> doc/make.texi
root:make-3.82# ../temp/run.do_install

Failed Log:
...
tmp/work/i586-poky-linux/make/3.81-r1/make-3.81/doc/make.texi:8165: @itemx must follow @item
...

Backport from make 4.0 to fix this issue.

[YOCTO #6219]

(From OE-Core rev: b191d869e86c7d4393716eee6ac27aa259d6521c)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-29 17:31:47 +01:00
Richard Purdie
50e9ccb2af bitbake: bitbake: fetch2/git: Anchor names when using ls-remote
When specifying tags, they're searched for unanchored so foo/bar could
match:

refs/heads/abc/foo/bar
refs/heads/xyz/foo/bar
refs/heads/foo/bar

This change anchors the expressions so they are based against heads
or tags (or any other base level tree that has been created).
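
A shell illustration of the difference (the fetcher change itself is in Python;
'origin' and 'foo/bar' are placeholders):

    # Unanchored: also matches refs/heads/abc/foo/bar and refs/heads/xyz/foo/bar.
    git ls-remote origin | grep 'foo/bar'
    # Anchored against the heads/tags base, which is what the fix does.
    git ls-remote origin | grep -E 'refs/(heads|tags)/foo/bar$'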

(Bitbake master rev: df2e0972cd1db7abd5ec8b7cb295fb0c42e284a4)

(Bitbake rev: da93afe9834e137ed1e9410380181286c80198b5)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-19 11:04:48 +01:00
Valentin Popa
c65c136746 mesa: double check for eglplatform.h
Even if 'egl' is in PACKAGECONFIG, mesa egl support
can be disabled explicitly (changing configure flags
using a .bbappend, for example).
On dora, meta-fsl-arm is an example of this kind.
On master there are no known cases, and we should
encourage package configuration through PACKAGECONFIG.

This patch adds another check for the existence
of eglplatform.h before 'sed' can alter it.
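
The added guard has roughly this shape (sketch; the path and the sed expression
are placeholders):

    # Only touch eglplatform.h if it actually exists in this configuration.
    if [ -f "${S}/include/EGL/eglplatform.h" ]; then
        sed -i -e "$EGLPLATFORM_SED_EXPR" "${S}/include/EGL/eglplatform.h"
    fi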

(From OE-Core rev: 97bc1bce9a226cc02db8a5afc2c0d4f4f70034a6)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-19 11:04:48 +01:00
Paul Eggleton
99f46fd25c openssl: bump PR
We don't normally do this, but with the recent CVE fixes (most
importantly the one for the serious CVE-2014-0160 vulnerability) I am
bumping PR explicitly to make it a bit more obvious that the patch has
been applied.

(From OE-Core rev: 813fa9ed5e492e5dc08155d23d74127ca87304df)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-11 18:15:34 +01:00
Richard Purdie
e5cb267922 sstatesig: Anchor inherits class tests
This avoids a nasty sstate hash corruption issue where the
fact the testimage bbclass was inherited meant that the checksum
changed due to testimage.bbclass being confused with image.bbclass.

This patch anchors the bbclass names to avoid this confusion.

(From OE-Core master rev: 943a75a4f3b6877e4092dae14b59b7afef8cad3d)

(From OE-Core rev: 71b15a41652e280aca2a451073a83a25fb4e6f50)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-11 12:02:50 +01:00
Paul Eggleton
b96f0217e4 classes/image: ignore modules.* changing during multilib image construction
Since we now run depmod when building images (as the postinst that does
this is now on kernel-base instead of kernel-image) it is possible to
have module file differences between the two halves of the multilib image,
and the code that checks for such differences detects this and fails.
Whitelist this file to avoid the failure.

Specifically, modules.alias, modules.dep and modules.symbol can differ
along with their .bin counterparts.

Related to fix for [YOCTO #5392].

(From OE-Core master rev: 0a315804bf991664c0948e3024b8e8b9e9085808)

(From OE-Core rev: a2c026cf565897e4b0ba4c31c8762b41361649f4)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-11 12:02:50 +01:00
Paul Eggleton
337de046c8 classes/kernel: move module postinst commands to kernel-base
Since kernel-base is the package that contains the files that depmod
needs to run, we should be running depmod from the kernel-base
postinstall rather than kernel-image.

Fixes [YOCTO #5392].

(From OE-Core master rev: f7d2cb383281ec8dfa90950ba04d87dd29ffc676)

(From OE-Core rev: ac92a5ab25ddfd8462c43bac6f93730b1e454a4f)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-11 12:02:49 +01:00
Richard Purdie
8d0f411fdb sstatesig.py: Fix image regeneration issue
With the "ABI safe" recipes, we've been excluding those from signatures. This
is fine in the general case but in the specific case of image recipes it breaks.

A good test case is the interfaces file. Editing this causes init-ifupdown
to rebuild but not an image containing it (e.g. core-image-minimal).

We need to ensure the checksums are added to the image recipes and this change
does that.

(From OE-Core master rev: fd085f15e7cd093953f974f69277e130174d551d)

(From OE-Core rev: 946ec90c5de1faa18c899e9b45efedc3d47b93bd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-11 12:02:49 +01:00
Paul Eggleton
609ae39284 openssl: backport fix for CVE-2014-0160
Fixes the "heartbleed" TLS vulnerability (CVE-2014-0160). More
information here:

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160

Patch borrowed from Debian; this is just a tweaked version of the
upstream commit (without patching the CHANGES file which otherwise
would fail to apply on top of this version).

(From OE-Core rev: c3acfdfe0c0c3579c5f469f10b87a2926214ba5d)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-09 09:00:40 +01:00
Yue Tao
7f9dd3ff42 Security Advisory - openssl - CVE-2013-6449
The ssl_get_algorithm2 function in ssl/s3_lib.c in OpenSSL before 1.0.2
obtains a certain version number from an incorrect data structure, which
allows remote attackers to cause a denial of service (daemon crash) via
crafted traffic from a TLS 1.2 client.

(From OE-Core master rev: 3e0ac7357a962e3ef6595d21ec4843b078a764dd)

(From OE-Core rev: 33b6441429603b82cfca3d35e68e47e1ca021fd7)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-09 09:00:40 +01:00
Yue Tao
0cdc1147d3 Security Advisory - openssl - CVE-2013-6450
The DTLS retransmission implementation in OpenSSL through 0.9.8y and 1.x
through 1.0.1e does not properly maintain data structures for digest and
encryption contexts, which might allow man-in-the-middle attackers to
trigger the use of a different context by interfering with packet delivery,
related to ssl/d1_both.c and ssl/t1_enc.c.

(From OE-Core master rev: 94352e694cd828aa84abd846149712535f48ab0f)

(From OE-Core rev: 1e934529e501110a7bfe1cb09fe89dd0078bd426)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-09 09:00:40 +01:00
Yue Tao
2b09b26cb7 Security Advisory - openssl - CVE-2013-4353
The ssl3_take_mac function in ssl/s3_both.c in OpenSSL 1.0.1 before
1.0.1f allows remote TLS servers to cause a denial of service (NULL
pointer dereference and application crash) via a crafted Next Protocol
Negotiation record in a TLS handshake.

(From OE-Core master rev: 35ccce7002188c8270d2fead35f9763b22776877)

(From OE-Core rev: a5060594208de172cb31ad406b34b25decd061e4)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-09 09:00:40 +01:00
Richard Purdie
98bd952a5b Revert "buildhistory_analysis: fix error when comparing image contents"
This reverts commit 5b616aa7b6.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-04 16:16:50 +01:00
Mats Kärrman
75c9f43129 eglibc 2.18: powerpc: Fix time related syscalls
Concatenated fix of PowerPC time related system calls in eglibc 2.18 taken
from upstream glibc. See credits in patch header.

The effect is that some time-related system calls return nothing or garbage.
Fix tested on PowerPC e300c3.

Eglibc 2.17 does not have this issue and the patches are already part of 2.19.

(From OE-Core rev: fae2f635e795d496228dd5d302e99d9ab7706900)

Signed-off-by: Mats Karrman <mats.karrman@tritech.se>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-04 11:54:34 +01:00
Valentin Popa
d04d0c0735 mesa: build fix for gallium-egl
(*) add MESA_EGL_NO_X11_HEADERS to defines
(*) avoid altering eglplatform.h from {top_srcdir}/include
using an alternative to
0003-EGL-Mutate-NativeDisplayType-depending-on-config
patch.

[YOCTO #5882]

(From OE-Core rev: 4c6340dba65185acef7301762270fa1dc7e0afda)

Signed-off-by: Valentin Popa <valentin.popa@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-04-04 11:54:34 +01:00
Robert Yang
f1276b0662 image_types.bbclass: use 4096 instead of 8192 bytes-per-inode
The image is not correctly created if 'ptest-pkgs' is in IMAGE_FEATURES;
this is because there is no free inode left. We can use 4096 instead of
8192 bytes-per-inode to fix the problem, and most distributions use
4096, such as Ubuntu, SUSE, Fedora and CentOS.
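
For reference, bytes-per-inode is the -i option of mke2fs; halving it doubles
the inode count for a given image size. A quick sketch (file name and sizes
made up):

    # 64 MB image with one inode per 4096 bytes instead of one per 8192.
    dd if=/dev/zero of=rootfs.img bs=1M count=64
    mke2fs -F -i 4096 rootfs.img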

There are other problems:
* There are error messages when there is no free inode left if we run the
  mke2fs command manually, but they are not in log.do_rootfs.

* The image generation doesn't stop when an error happens because mke2fs
  doesn't return failure for this case.

Will fix them in other threads.

[YOCTO #5957]

(From OE-Core master rev: 09ab3a00598d06e3a1bf871811c2ac37359c74da)

(From OE-Core rev: ec8ae16e35fd7db6a5bb12412d50ab6f355b0f6e)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-26 17:46:56 +00:00
Scott Rifenbark
0468067e23 Revert "poky.ent, ref-manual: Updated list of Fedora packages"
This reverts commit 3143176a2ff2444ba753cea64e0de6796cfb06ae.
No need for perl-Thread-Queue as a Fedora package for 1.5.1
release.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-25 12:29:59 +00:00
Scott Rifenbark
e128cfdf9d poky.ent, ref-manual: Updated list of Fedora packages
I added perl-Thread-Queue to the essential and graphical
package sets for the Fedora distribution.

Reported-by: Richard Purdie <richard.purdie@intel.com>
(From yocto-docs rev: 3143176a2ff2444ba753cea64e0de6796cfb06ae)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-25 12:29:59 +00:00
Chengwei Yang
9a6398c144 ref-manual: Fixed error in example for export XDG_RUNTIME_DIR
export XDG_RUNTIME_DIR=/tmp/$USER-weston used instead of
export XDG_RUNTIME_DIR=/tmp/$USER=weston

(From yocto-docs rev: 08789cd32f110f0a32e19fa1a8499076ca02a317)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-25 12:29:59 +00:00
Scott Rifenbark
75dfa194d8 ref-manual: Added directfb to DISTRO_FEATURES list chapter.
Reported-by: Ross Burton <ross.burton@intel.com>
(From yocto-docs rev: c646f87697edaa18266e18c8608a91444930be6a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-25 12:29:59 +00:00
Hongxu Jia
e44e75e576 license.bbclass: fix copying license directories failed
For each recipe, it populated license files to ${LICENSE_DIRECTORY}/${PN};
for example, the kernel's license dir was
${LICENSE_DIRECTORY}/kernel-3.10.17-yocto-standard.

In the do_rootfs task, it copied license directories from
${LICENSE_DIRECTORY}/${pkg}, where ${pkg} was listed in ${INSTALLED_PKGS}.

We got ${INSTALLED_PKGS} by an rpm query, so the kernel packages were
'kernel-*', but the kernel's PN was linux-yocto, so searching
${LICENSE_DIRECTORY}/kernel-* failed.

Copying license directories from ${LICENSE_DIRECTORY}/${PN} fixes this
issue.

[YOCTO #5572]

(From OE-Core rev: 4e00554dfc68b1aad07e161921c27807511420b1)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-25 11:17:20 +00:00
Stefan Stanacar
023c35795f lsbtest: fix comparison bashism
'==' is a bashism; use '=' instead.
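
For illustration:

    answer=yes
    [ "$answer" = "yes" ] && echo ok     # POSIX, works under dash and bash
    [ "$answer" == "yes" ] && echo ok    # bash extension, fails under dash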

(Based on OE-Core master rev: c90d1047c41148cbd57f26b5a34563346602a71b)

(From OE-Core rev: 9981f760ac890d01a07db8faa24ceee2bea78b62)

Signed-off-by: Stefan Stanacar <stefanx.stanacar@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-13 15:38:08 -07:00
Chen Qi
8d326e6728 iproute2: de-bash its scripts to remove the bash dependency
If we build a minimal image with iproute2 installed, the following
error will appear during rootfs.

error: Can't install iproute2-3.10.0-r0.0@i586: no package provides /bin/bash

The problem is that iproute2 has an implicit dependency on 'bash'.
This dependency is from per-file dependency checking.

Patch two scripts, ifcfg and rtpr, from iproute2 to remove the bash
specific syntax.
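
A typical de-bashing looks like this (illustrative; not the exact ifcfg/rtpr
hunks):

    # bash-only:  if [[ $dev == eth* ]]; then ...; fi
    # POSIX replacement that any /bin/sh can run:
    case "$dev" in
        eth*) echo "ethernet device" ;;
    esac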

[YOCTO #5415]

(From OE-Core master rev: 1132c4210eddd59b22b2640935ab0bb8f48c0124)

(From OE-Core rev: ca55e7321f0c52fbe13d301d0dfe3adff5435639)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-11 07:56:39 -07:00
Sébastien Mennetrier
6c9133d887 libomxil: Fix link issue for gst-omx
The gst-omx element cannot load due to a missing symbol,
RM_Deinit.

(From OE-Core master rev: 56301698a55bcbab4272b273fd98ce4de84cbfac)

(From OE-Core rev: a77984aef1ef9f351a9ee0a30893e24034ed0aed)

Signed-off-by: Sébastien Mennetrier <s.mennetrier@innotis.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-11 07:56:39 -07:00
Ming Liu
a43f60cdea grub: move xz to DEPENDS list from RDEPENDS list
liblzma5 is what is really required by grub; setting RDEPENDS to xz would
pull unneeded xz binaries into the rootfs.

(From OE-Core master rev: 78526905999fa38047ae8f3491127cc03de3e3f6)

(From OE-Core rev: 33a352f45ab05f4c81b860b1b369bde429dbff1d)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-11 07:56:39 -07:00
Ross Burton
ec578fa12d avahi: handle SO_REUSEPORT not being available
Linux < 3.9 doesn't have the SO_REUSEPORT option so instead of failing to start
when built with >=3.9 kernel headers but booted on <3.9 kernels, continue as if
SO_REUSEPORT wasn't available.

(From OE-Core rev: 85e89da55f778ad3713460cb0df1435d82e94510)

(From OE-Core rev: 704361888958ec790aa2855e22df2d2d87a5d982)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-11 07:56:39 -07:00
Richard Purdie
9e89b4eec4 Revert "license.bbclass: fix copying license directories failed"
This reverts commit e58a1499ac.

It depends on other functionality not backported to dora.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-11 07:56:39 -07:00
Paul Eggleton
d0e55dd0ef gnutls: fix failure during do_compile
Add a Debian patch to fix a load of errors building the documentation
within do_compile e.g.:

| ./x509-api.texi:15: misplaced {
| ./x509-api.texi:15: misplaced }

(From OE-Core master rev: b09a9a5f298596795f17243e5ffcf7dab295a8e6)

(From OE-Core rev: 18f34944696a8098daf33a94bc2f532deb217d0a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-06 10:32:42 +00:00
Karl Hiramoto
0ce26e16d1 gnutls: Fixed bug that prevented the rejection of v1 intermediate CA certificates.
This patch is for the OE-Core dora branch - it comes from upstream:

From 467478d8ff08a3cb4be3034ff04c9d08a0ceba3e
From: Nikos Mavrogiannopoulos <nmav@redhat.com>
Date: Wed, 12 Feb 2014 16:41:33 +0100

For more info see:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-1959
http://www.gnutls.org/security.html#GNUTLS-SA-2014-1
467478d8ff

(From OE-Core rev: 74bcafd4949b3505bff4c38de6e68ad62f0fe5f6)

Signed-off-by: Karl Hiramoto <karl@hiramoto.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-05 12:40:14 +00:00
Karl Hiramoto
9f4ebcf2f9 gnutls: CVE-2014-0092 correct return codes
This patch is for the OE-Core dora branch - it comes from upstream:

git://gitorious.org/gnutls/gnutls.git
branch: gnutls_2_12_x
commit: 6aa26f78150ccbdf0aec1878a41c17c41d358a3b
Author: Nikos Mavrogiannopoulos <nmav@gnutls.org>
Date:   Thu Feb 27 19:42:26 2014 +0100

For more info see:
http://www.gnutls.org/security.html#GNUTLS-SA-2014-2
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0092
6aa26f7815

(From OE-Core rev: d9a5578da93d79c8edfaf773bdb56018046046ea)

Signed-off-by: Karl Hiramoto <karl@hiramoto.org>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-03-05 12:40:14 +00:00
Scott Rifenbark
84c2763fa0 dev-manual: Fixed code block example for ARCHIVER.
There was an incorrect wrapping of code in the source
example.

Reported-by: Ross Burton <ross.burton@intel.com>
(From yocto-docs rev: 5aaa1e2651ec404af1aea5caa4c9f1f63a760e95)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-17 15:39:51 +00:00
Martin Jansa
e8dfafda71 bitbake: fetch2: Don't allow '/' in user:pass, fix branch containing '@'
* currently decode_url regexp parses branch=@foo as username so it ends like this:
  - ('git', '', 'foo', 'git.openembedded.org/bitbake;branch=', '', {})
  + ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'})
* http://hg.python.org/cpython/file/2.7/Lib/urlparse.py also assumes
  that there is at least one '/' as separator between netloc and path,
  params, so it looks reasonable to prevent including '/' in username

(Bitbake rev: 3c694e20df3b1d442603300786580e4b2f4bf5f3)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-17 14:44:30 +00:00
Diego Sueiro
f0b1753b90 systemd: journald fix ignored disk space restrictions
The upstream bug report can be seen at:
[Systemd #68161] -- https://bugs.freedesktop.org/show_bug.cgi?id=68161

These backported patches come from 207 and are needed to address this in the 206 version on the dora branch.

(From OE-Core rev: 07df3db5dd62e793770af6e47ea2f830272e8afc)

Signed-off-by: Diego Sueiro <diego.sueiro@gmail.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-14 12:30:30 +00:00
Holger Hans Peter Freyther
97c9163d97 gcc: Include patch scheduled for GCC 4.8.3 to fix epilogue on ARM
GCC 4.8.0, 4.8.1 and 4.8.2 can generate broken epilogues for the
ABI used by the kernel. Apply the patch that is included for GCC
4.8.3 from http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58854.

The issue was found on Yocto/Dora and the patch should be backported
to this branch. A kernel built with Dora's GCC 4.8.1 misbehaved on:

 while true;
 do
    (for i in `seq 1 100`;
        do
            echo "Log message... $RANDOM";
        done) | logger;
 done

busybox's syslogd would from time to time read a huge negative value and
then exit, and strace would get stuck waiting on a syscall. After this
patch it appears to work better.

(From OE-Core master rev: 3004eb3b7ee5fd8dfe9c4e5749b4e125d0bd4b59)

(From OE-Core rev: acef5185492287b9569f7fbbc3e9570d688e9c9f)

Signed-off-by: Holger Hans Peter Freyther <holger@moiji-mobile.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-14 12:30:30 +00:00
Christopher Larson
1a4fd0dd66 pulseaudio: only package consolekit module when x11 is enabled
As requested by Martin Jansa <martin.jansa@gmail.com>.

(From OE-Core master rev: 3e148f863d55728bbfa2d94b602b03dc56b70d4c)

(From OE-Core rev: 7ee4d9e1b29a1c0a2552a008fc264c592ef5ae4a)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-13 12:36:06 +00:00
Martin Jansa
aa7eb9544a cpan-base: Add vardepvalue to get_perl_version function
* without this, bitbake -S perf shows the following error if you run it
  twice, once without perl in the sysroot and once with perl already built:
  ERROR: Bitbake's cached basehash does not match the one we just generated
    (/OE/oe-core/meta/recipes-kernel/perf/perf.bb.do_package)!
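
A minimal sketch of the mechanism (the exact value used in cpan-base may
differ): BitBake's vardepvalue flag pins the function's contribution to
task signatures so the sysroot contents cannot change the basehash:

    # use a fixed string in checksum calculations instead of whatever
    # the function would return for the perl currently in the sysroot
    get_perl_version[vardepvalue] = "get_perl_version"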

(From OE-Core master rev: f31f6a70ec24e8c9515d69c5092e15effc5e7d4d)

(From OE-Core rev: 7c161e05fcbe92a5ac076d8611f6237ca69d34f7)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-11 12:01:01 +00:00
Ross Burton
a642700cd9 binconfig: mangle ${base_libdir}
Some recipes are installing libraries into ${base_libdir} (typically /lib) and
also use a foo-config binary to identify compile paths, for example
libusb-compat.  Without mangling ${base_libdir} the ${base_libdir} path is
passed to the compiler, where it looks like a host path and results in
compile-host-path QA errors.

(From OE-Core master rev: ccd9abdccb84d713427541b6ee29a0e217360e74)

(From OE-Core rev: cf978595ae0563c26dcaaa03059ab54a744dbc35)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-11 12:01:01 +00:00
Olof Johansson
9047dee64c poky.conf: add Debian 7.3 to SANITY_TESTED_DISTROS
7.3 is a point release with security and bug fixes only, and I can
confirm that it works.

(From meta-yocto rev: baf65c002f6bc2ecf6c61a8ec5f1ad8b994b033d)

(From meta-yocto rev: 710c1270a63b0e078c46ce0cc536fcc56e9d18fb)

Signed-off-by: Olof Johansson <olof.johansson@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:07:04 +00:00
Olof Johansson
e46d9e9f41 poky.conf: add Debian 7.2 to SANITY_TESTED_DISTROS
7.2 is a point release with security and bug fixes only, and I can
confirm that it works.

(From meta-yocto rev: d5b180b97711bd3899f63a7a468544bb94573ae1)

(From meta-yocto rev: 5d426df41c7032dfeae0176339ff4374277d00b2)

Signed-off-by: Olof Johansson <olof.johansson@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:07:04 +00:00
Christopher Larson
64909c693a base.bbclass: pull in file-native for src.rpm
Unpacking an src.rpm uses rpm2cpio.sh, which requires 'file'.

Without this, builds of rpm on a host without 'file' installed will fail with
very strange messages.

(From OE-Core master rev: 97e1d84e2d1a74791ce6af88ddc27963bc0e1bec)

(From OE-Core rev: a4ae70638314a88c3abfcca0d29e1c425f86bea0)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:07:03 +00:00
Laurentiu Palcu
5b4c3955f0 x11vnc: fix CAPS_LOCK issues
Currently, pressing CAPS_LOCK on the viewer changes the lock state on
the server and the key will not change the case.

To fix this, use -skip_lockkeys option to ignore all Caps_Lock,
Shift_Lock, Num_Lock, Scroll_Lock keysyms received from viewers, in
order to leave the lock state on the server side unchanged. However, the
keys will appear correctly on the remote side.

[YOCTO #4149]

(From OE-Core master rev: 1e06d5ce83439b5bd75a958f305e6a880d40333d)

(From OE-Core rev: 7b4790b67e53071e19a243b31c159b2f1014575f)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:17 +00:00
Alexandre Belloni
0671c08f13 wpa-supplicant-2.0: don't exit in pkg_postinst
Exiting explicitly in pkg_postinst makes it impossible to use the
update-rc.d class in a .bbappend because the link creation is appended
to the pkg_postinst script.
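
A minimal sketch of the pattern (recipe details omitted): the scriptlet has
to fall through rather than exit, because update-rc.d.bbclass appends its
link creation to the same function:

    pkg_postinst_${PN} () {
        # recipe-specific post-install steps go here; do not call 'exit',
        # or code appended by classes such as update-rc.d never runs
        :
    }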

(From OE-Core master rev: 758d53d3044f29f3c33ffee3ada88c9edc9f864f)

(From OE-Core rev: 7d7481667fcf4550513aec1eca20d87b4ddfd40e)

Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Richard Purdie
10aff7de08 eglibc-locale: Fix multilib builds to depend on the correct binutils
(From OE-Core master rev: 858c60adbcc5e21c585383fe90f6803d52f0807f)

(From OE-Core rev: b7016947b29e24a627f441920d3fab8bfd1c6621)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Richard Purdie
48d88754d8 eglibc-locale: Fix previous dependency change to properly work in nativesdk case
(From OE-Core master rev: 7a4f3ee1b137cc8465f0cd9b1e461d3643182a81)

(From OE-Core rev: 740d56ac2861ab4e0ec1f69586d6bb34dad4f519)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Richard Purdie
cf2c660c91 eglibc-locale: Fix depends on binutils
The dependency here needs to apply for nativesdk as well as target packages
as the autobuilder just tripped over that. We'd never want a native version
so I'm not sure why the target class override was even present. The dependency
also applies to do_package so let's be explicit about that in case sstate
decides to get clever.

(From OE-Core master rev: b7ec21ac8ebac9d7fba34d6f11d93ecb8f561ca8)

(From OE-Core rev: 405a62954be71a476ded6a429ec895c5b5fec1a4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Anders Darander
d3f96d104b terminal.bbclass: do not export PS1
With a complex PS1 setup, PS1 might not have all characters correctly escaped
when terminal.bbclass writes the export. This caused the run.do_terminal.PID to
terminate, making it impossible to use the devshell.

As the spawned shell will parse e.g. .bashrc (or whatever rc-file is being
used), PS1 will be reset in the devshell.

(From OE-Core master rev: a5e6926cd409140d16391c72316da00ffbfe5429)

(From OE-Core rev: a7d489f3341262b662e720170d64caf7092a956b)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Phil Blundell
301ae75773 binutils: Also add autoconf-native to DEPENDS
Commit 616354f13732d13c17434d5b60b166f691c25761 is insufficient because
gnu-config-native's gnu-configize script uses perl modules from autoconf
and hence doesn't work unless autoconf-native is staged (which it may
not be if building from sstate).

Ideally g-c-n would itself declare a dependency on autoconf-native but this
is difficult to arrange without creating a dependency loop.  autoconf-native
already depends on gnu-config-native (because autoreconf invokes gnu-configize)
and has a build dependency on m4-native, which in turn build-depends on g-c-n
because it configizes itself by steam in do_configure and needs config.{guess,sub}
to be available.  Adding some sort of gnu-config-initial-native recipe would
fix the latter problem, but this would be ugly because it would need special-casing
in (at least) autotools.bbclass, and in any case this still wouldn't solve
the problem of autoconf itself depending on g-c-n.

So, the easiest solution to the problem at hand is to arrange for those
few recipes that depend on g-c-n but not autoconf-native to gain that
latter dependency as well.
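
In recipe terms the workaround amounts to a one-line addition in each
affected recipe, roughly:

    DEPENDS += "autoconf-native"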

(From OE-Core master rev: 507199e57acfcc99639dc2c53abe194d77d60866)

(From OE-Core rev: bbf8f596ca51aa33bdb5b0d5664827d62408863c)

Signed-off-by: Phil Blundell <pb@pbcl.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Phil Blundell
abc38c0259 libsoup: Remove libproxy from DEPENDS
Although libsoup did use to support direct usage of libproxy, it hasn't
done so for some time.  Worse, if libsoup depends on libproxy then it
is impossible to build libproxy against webkit since webkit itself
depends on libsoup in some configurations.  Fix this by removing the
extraneous entry from DEPENDS.

(From OE-Core master rev: e588ba009402be27c643f2596acea0f178d4e42f)

(From OE-Core rev: 18b0c51668a8da50d679c6e9a55ccd3c0595c011)

Signed-off-by: Phil Blundell <pb@pbcl.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Ming Liu
f4f84e410b grub: add PACKAGECONFIG for device-mapper
Avoids it being auto-detected from the sysroot, which would lead to implicit results.
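
A sketch of the change (the configure switches and the providing recipe are
illustrative, not necessarily the exact dora values):

    PACKAGECONFIG ??= ""
    PACKAGECONFIG[device-mapper] = "--enable-device-mapper,--disable-device-mapper,lvm2"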

(From OE-Core master rev: 6f9e72f77cd0b06c5ad753cb9ab05dd681690c6b)

(From OE-Core rev: 384bb308edc35fbd6538aed90512f5fcdce7575c)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Jackie Huang
c1891d482b guile: fix the depends for target recipes
The dependency on guild-native and libatomics-ops is missing in multilib
builds; fix the depends with a class-target override.

(From OE-Core master rev: 88f1913f7cea54f0e4e1024ea506b5ce9faea96b)

(From OE-Core rev: 2e72d04883c20018ae28c0ffde0a8466662648b2)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Ming Liu
1481046811 libpthread-stubs: should set ALLOW_EMPTY
The package might be empty when the pthread functions are provided by
libc, so we need to set ALLOW_EMPTY for it or it will break the do_rootfs task.
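
The relevant one-liner, in the override syntax of this release:

    # produce the package even if it ends up containing no files
    ALLOW_EMPTY_${PN} = "1"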

(From OE-Core master rev: 53efd76f7955375986a036924513bb374a918f0b)

(From OE-Core rev: 0043f274d7670eb72b762c75a4f116673000b226)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Jackie Huang
0903aab236 kconfig-frontends: fix the incorrect depends on gperf
gperf-native is what is actually needed to generate the hash functions,
so change the recipe to depend on the native one.

(From OE-Core master rev: 3285fdfe7dc13b068e7f3cd727e5c789cd22b26b)

(From OE-Core rev: 547c3ba542041d5fa7af170fb09ef76d44f19e9d)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:16 +00:00
Ross Burton
f59b445bde dbus: use PACKAGECONFIG for X11 and systemd
Instead of several variables and overrides, use PACKAGECONFIG to respect X11 and
systemd DISTRO_FEATURES.
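
A sketch of the pattern (configure options and dependencies shown here are
illustrative):

    PACKAGECONFIG ??= "${@base_contains('DISTRO_FEATURES', 'x11', 'x11', '', d)} \
                       ${@base_contains('DISTRO_FEATURES', 'systemd', 'systemd', '', d)}"
    PACKAGECONFIG[x11] = "--with-x,--without-x,virtual/libx11"
    PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd,systemd"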

(From OE-Core master rev: 963da99c77ad28bd184a4de59af9cbcfaef62358)

(From OE-Core rev: 2e7c07c6b670a68e20e11a22716f2b69e49cac5e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Jacob Kroon
9797e78a9b meta/lib/oe/terminal.py: Don't pass non-supported '--disable-factory' flag to gnome-terminal
By default, all GNOME terminals share a single process,
reducing memory usage. This can be disabled by starting gnome-terminal
with the --disable-factory option.

However, gnome-terminal in Fedora 20 no longer supports the
'--disable-factory' flag, so remove it. As the support for 'mate' terminals was
added as a copy of the gnome code in 8cc078a9c679845464c59028f584d7aba098cc1f,
remove the flag here as well.

(From OE-Core master rev: e8dca725ed8211a874472300a3ed50e494039ab9)

(From OE-Core rev: b3c051aecde0c0662b352eb789ed276ef00e626f)

Signed-off-by: Jacob Kroon <jacob.kroon@mikrodidakt.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Saul Wold
73880876b0 openssl: use PACKAGECONFIG to disable perl bits
Adding perl to the RDEPENDS caused a performance hit to the overall build time
since this was the only package that depended on perl. The openssl-misc package
is not installed by default, so use a PACKAGECONFIG which can be overridden to
allow the perl scripts, along with perl, to be installed.

(From OE-Core master rev: 421e927bd453259f4b3cdbd1676f6e12f97bf34f)

(From OE-Core rev: 16aac35467087e8cd72308505ac1f9d8d8eb8def)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Paul Eggleton
7c7349efd0 libav: add libpostproc to PROVIDES (for 0.8.x version only)
There is a separate libpostproc recipe in meta-oe for use with 9.x and
later versions of libav for those few that need libpostproc; however if
you just add meta-oe and try to build libpostproc without selecting the
libav 9.x version recipe, you'll be building the libpostproc recipe
together with libav 0.8.x, which provides its own libpostproc; this
leads to confusing errors at packaging time. In order to flag up that
these conflict more appropriately, add libpostproc to PROVIDES
explicitly so that you at least get a multiple providers error at the
start of the build.

Fixes [YOCTO #5335].

(From OE-Core master rev: e8f9420fe901675fc1a8d4e41302c2faa4a7dc4a)

(From OE-Core rev: 9ec143438d0bffd2ff95c6d74194a53e5fed7f3a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Otavio Salvador
b1bf4ebb9d gcc-4.8: Backport PR c++/57532 fix from 4.8.2
Bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57532

Log:
r200836 | jason | 2013-07-09 14:52:17 -0300 (Tue, 09 Jul 2013) | 3 lines

        PR c++/57532
        * parser.c (cp_parser_ref_qualifier_opt): Don't tentatively parse
        a ref-qualifier in C++98 mode.

(From OE-Core rev: dd2891db2e25f09a15f621d1b132603128c9a673)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Tom Zanussi
0be9520921 systemtap: Add --enable-prologues to configuration
In some cases, the debuginfo generated by the compiler is insufficient
for systemtap to figure out function param locations; using -P allows
it to use prologue searching to find the correct locations.

Enable prologue searching in the configuration so the user doesn't
have to specify it manually.
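
In recipe terms this is just an extra configure switch, roughly:

    EXTRA_OECONF += "--enable-prologues"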

Fixes [YOCTO #5403].

(From OE-Core master rev: 798faec374cac7743d2b5bf390ef6263a0e6cdf4)

(From OE-Core rev: bff5199a17c7ab293f5bfdd3cdc19c39887dc00d)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Scott Garman
81f4de35fc runqemu: remove core-image-* whitelist
Using a whitelist for image names to default to when none are
specified on the command line is no longer desired. Instead,
choose the most recently created image filename that conforms
to typical image naming conventions.

Fixes [YOCTO #5617].

(From OE-Core master rev: 9f69e00200cdbd5ba2e46a54f33c29797816e43f)

(From OE-Core rev: 48f4b36c1a6ad71da752866b8c28885d95444b4e)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Krzysztof Sywula
1762f4fc7a Minicom depends on libiconv
(From OE-Core master rev: 1d6caef7222d0c1086a08a109ea4135a388c88e6)

(From OE-Core rev: f9b4ca8304a657f167cc78055631e9b0b996a28e)

Signed-off-by: Krzysztof Sywula <krzysztof.m.sywula@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Ross Burton
6d0a79526b useradd.bbclass: add dependency on base-files
Packages that use useradd.bbclass should have a dependency on base-files so that
the /etc/skel directory is populated.  Without this dependency base-files may or
may not be installed when the postinst runs, and the skel content may or may not
be copied.

(From OE-Core master rev: 556368ba8a1f933a86b69be024bd0711d4bfe0a3)

(From OE-Core rev: c5756a0041837d030429a8a1ecd30cd5f7082137)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
mykhani
d81dd16ce4 openssl.inc: Install c_rehash utility with openssl
The c_rehash utility is not being installed with openssl. It conveniently
generates hashes, and symbolic links based on them, for CA certificates
stored locally for SSL-based server authentication.

(From OE-Core master rev: 3c2f9cf615c964e8303fd3e225ea7dd7b5485155)

(From OE-Core rev: fdf04f50dfa3bd8861cb08c80ae149dddce4aa58)

Signed-off-by: Yasir-Khan <yasir_khan@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Robert Yang
59cf6ae333 gcc-4.8/libstdc++-v3: disable sdt
We may meet such an error when building gcc/libstdc++-v3:

gcc-4.8.1/libstdc++-v3/libsupc++/unwind-cxx.h:41:21: fatal error:
sys/sdt.h: No such file or directory

We already have a patch to disable the sdt for gcc; we also need to disable
it for libstdc++-v3.

BTW, we need to edit both configure.ac and configure to keep them in sync.

NOTE, this commit edit the patch gcc-4.8/0031-Disable-sdt.patch directly.

[YOCTO #5657]

(From OE-Core master rev: 32854af3cc6c0626620e827dc1915f61c51250b8)

(From OE-Core rev: 718932dda21323607fe2bff459babbebcae302c5)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:15 +00:00
Christopher Larson
05e1e29c11 xz: make the LICENSE info more accurate
(From OE-Core master rev: fb9b12121f97f59d92ec2b8fdbe0e68f336f0576)

(From OE-Core rev: c560ada1bc2aa127b7dd7bf10722c90c10c7dcc4)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Richard Purdie
af9e19bfca gcc-crosssdk.inc: Fix missing dependencies (such as libmpc-native)
Without this sstate builds can fail with missing dependencies.

(From OE-Core master rev: f92ebf78d94cb8f4010f8d444d1d0336c1fb1341)

(From OE-Core rev: ffff0c8ba6df643f9da307d675aa2ff91d62c46d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Bruce Ashfield
ad9a84b058 linux-libc-headers: fix MIPS klibc build error
As reported by Andrea Adami, klibc fails to build for MIPS with the 3.10 libc-headers

commit ca044f9a [UAPI: fix endianness conditionals in linux/raid/md_p.h] is the root
cause of the breakage.

This is fixed in the kernel source itself, but we must also carry the
change in the linux-libc-headers recipe, until we update past the
3.13 kernel.

With this change, we can again build klibc for mips, with no impact
on the rest of the system.

cc: Andrea Adami <andrea.adami@gmail.com>
(From OE-Core master rev: f2f8a2a05cbfff7e1d5d979ec1b9f4f371579fb9)

(From OE-Core rev: 45dbd265e08e69cbc95625d57e9b0c0cb5b0b225)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Yue Tao
6757c59442 icu: CVE-2013-2924
Use-after-free vulnerability in International Components for Unicode (ICU),
as used in Google Chrome before 30.0.1599.66 and other products, allows
remote attackers to cause a denial of service or possibly have unspecified
other impact via unknown vectors.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-2924

(From OE-Core master rev: 36e2981687acc5b7a74f08718d4578f92af4dc8b)

(From OE-Core rev: ab2d452fd9e177017c57d411ebb61728845f97bf)

Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Yue Tao
d426450b0b acpid: CVE-2011-1159
acpid.c in acpid before 2.0.9 does not properly handle a situation in which
a process has connected to acpid.socket but is not reading any data, which
allows local users to cause a denial of service (daemon hang) via a crafted
application that performs a connect system call but no read system calls.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-1159

(From OE-Core master rev: e7b2b84dece29d16b8f05daf962b69e78dd64cb3)

(From OE-Core rev: 56f1ea2cd8eeb189ee345f215deb5867a77cd0a1)

Signed-off-by: Yue Tao <yue.tao@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Li Wang
a64da6a9b3 xinetd: CVE-2013-4342
xinetd does not enforce the user and group configuration directives
for TCPMUX services, which causes these services to be run as root
and makes it easier for remote attackers to gain privileges by
leveraging another vulnerability in a service.
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-4342

the patch come from:
https://bugzilla.redhat.com/attachment.cgi?id=799732&action=diff

(From OE-Core master rev: c6ccb09cee54a7b9d953f58fbb8849fd7d7de6a9)

(From OE-Core rev: 478b7f533c6664f1e4cab9950f257d927d32bb28)

Signed-off-by: Li Wang <li.wang@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Roy Li
f2921bdf51 multilib: Ensure we map the SYSTEMD_PACKAGES variable
If we don't do this, systemd.bbclass will complain that it is unable to find
multilib packages, since PACKAGES is expanded with the mlprefix but
SYSTEMD_PACKAGES is not, as in ntp.inc:

    $grep PACKAGES meta-oe/meta-networking/recipes-support/ntp/ntp.inc
    PACKAGES += "ntpdate sntp ${PN}-tickadj ${PN}-utils"
    SYSTEMD_PACKAGES = "${PN} ntpdate sntp"
    $

    $bitbake ntp
    ERROR: ntpdate does not appear in package list, please add it
    ERROR: sntp does not appear in package list, please add it
    $

(From OE-Core master rev: 84f1d3252c369dff06a517baa4fd7fe274782e40)

(From OE-Core rev: 5748342f445d4233af838a6a65449a5d1baeb3c2)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Hongxu Jia
01a023a5d8 nativesdk.bbclass: support nativesdk to override with the PACKAGES_DYNAMIC statement
While compiling nativesdk-mtools, there was a failure:
...
Nothing PROVIDES 'nativesdk-glibc-gconv-ibm850'. Close matches:
...
This patch makes the nativesdk class handle the PACKAGES_DYNAMIC statement as well.

[YOCTO #5623]
(From OE-Core master rev: 315367ea9526186d5836c64867ce0cd40d9d8412)

(From OE-Core rev: a2f8a7e2fb868b8dc00d1f50fffb5744d4d34e72)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Hongxu Jia
e58a1499ac license.bbclass: fix copying license directories failed
For each recipe, license files were populated into ${LICENSE_DIRECTORY}/${PN};
for example, the kernel's license dir was ${LICENSE_DIRECTORY}/kernel-3.10.17-yocto-standard.

In the do_rootfs task, license directories were copied from ${LICENSE_DIRECTORY}/
${pkg}, where ${pkg} was listed in ${INSTALLED_PKGS}.

${INSTALLED_PKGS} came from an rpm query, so the kernel packages were named
'kernel-*', but the kernel's PN was linux-yocto, so searching for
${LICENSE_DIRECTORY}/kernel-* failed.

Copying the license directories from ${LICENSE_DIRECTORY}/${PN} fixes this issue.

[YOCTO #5572]

(From OE-Core master rev: 8968f9a3461912c8de217135f3691c86e2a58e86)

(From OE-Core rev: 49a8573e8645830c7fce5fba9dd0b1c97b09e181)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:14 +00:00
Hongxu Jia
a39125ce01 qemu: add bash and python to qemu's RDEPENDS
| Note: adding Smart RPM DB channel
|
| Note: to be installed:  qemu@x86_64 run-postinsts@x86_64 kernel-modules@qemux86_64 packagegroup-core-boot@qemux86_64
| Loading cache...
| Updating cache...               ######################################## [100%]
|
| Computing transaction...error: Can't install qemu-1.5.0-r0.0@x86_64: no package provides /usr/bin/python
|
| Saving cache...
|
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_rootfs (log file is located at tmp/work/qemux86_64-wrs-linux/wrlinux-image-glibc-small/1.0-r1/temp/do_rootfs/log.do_rootfs.21373)

(From OE-Core master rev: 44806d40a8061a3dbab4cc19d1200c032781e0b4)

(From OE-Core rev: 95e983f84add8a618258c73844e360631d0511a2)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Jackie Huang
239330fa8c grub: add explicit dependency on bison-native
grub requires bison and will fail to configure on
a host without bison installed; add the dependency
to fix it.

(From OE-Core master rev: a1510c1a8e6b8a652ae65b7e3910501f1055f87f)

(From OE-Core rev: 6f6e292a102e8ae8d607bb9dac4f3d3e2bced1ea)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Andreas Oberritter
396a6f6b2d cogl-1.0: explicitly disable cairo
- Cairo was auto-detected, but not listed as a dependency.

(From OE-Core master rev: 33bcc9361fa732c36d92128c7f23a308f455297c)

(From OE-Core rev: 9ed692eb20fed4ff9d9c39435000535df5a67286)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Christopher Larson
fb43320d67 pulseaudio: fix RDEPENDS traversal for consolekit
Include the console-kit module in PACKAGES explicitly so bitbake can map to
the RDEPENDS we define for it in this recipe, and thereby ensure that when
adding the console-kit module to an image, we also get the necessary
consolekit package produced.
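
A sketch of the idea (package names are illustrative): list the module
package explicitly so the RDEPENDS defined for it actually takes effect:

    PACKAGES =+ "pulseaudio-module-console-kit"
    RDEPENDS_pulseaudio-module-console-kit += "consolekit"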

(From OE-Core master rev: 7e7ff7d1e5e86f097ef40befcf00dd28657e26f8)

(From OE-Core rev: 029b225cd7491a1efdc42593460a57d9eb865427)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Ross Burton
8299a1ee71 ptest: ensure do_install_ptest_base task runs in fakeroot context
As this task is installing files into $D it needs to run inside pseudo so that
special permissions and owners are preserved.
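
The mechanism, sketched: marking the task as fakeroot makes it run under
pseudo:

    # record ownership/permission changes made while installing into ${D}
    do_install_ptest_base[fakeroot] = "1"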

(From OE-Core master rev: 64f0a0bc408d8e32d5e795aeb9fffee0539f5e22)

(From OE-Core rev: dbfc8b977dd9c8223568bbaf58e236461d6adcf6)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
yzhu1
32c7150b04 populate_sdk: verify executable or dynamically linked library
When the toolchain directory is changed to executable mode, some non-executable
or empty files get picked up as well, which results in errors. So when
selecting executable files and dynamically linked libraries, add extra
conditions to exclude non-executable or empty files.

(From OE-Core master rev: c9d56308bfa9ee7f4a9b22eae86390626ddc1c35)

(From OE-Core rev: 7565b6220e596687ee97cf06256a7065a55bb297)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Christopher Larson
89a5d5ec5b cairo: add/use packageconfig for valgrind support
It was previously being auto-detected.

(From OE-Core master rev: 68fc138d172d491e16d5e6f2fc21fc779c04b92f)

(From OE-Core rev: f6496a471d8944357067c7573d23f612f3af1f72)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Christopher Larson
ccbdac5d97 python, python-native: fix PARALLEL_MAKEINST failure
When using make -j with the 'install' target, it's possible for altbininstall
(which normally creates BINDIR) and libainstall (which doesn't, though it
installs python-config there) to race, resulting in a failure due to
attempting to install python-config into a nonexistent BINDIR. Ensure it also
exists in the libainstall target.

(From OE-Core master rev: 54da47f3ddc1c009594744793060ffd09db3ad11)

(From OE-Core rev: 4766c03a208f37276b4778a4edc20e1e19faebeb)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Chen Qi
1ed91fcd03 subversion: fix build problem when sysroot contains '-D' or '-I'
If sysroot contains '-D' or '-I' characters, the SVN_NEON_INCLUDES and
the corresponding CFLAGS will not get the correct value.

This will cause build failures.

This patch fixes the above problem.

[YOCTO #5458]

(From OE-Core master rev: 7078397ef39de43244fca7e24683b2a83913cbbf)

(From OE-Core rev: 6a0187d6ce57552c9f00f74dc9001ce0400f0608)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Yue Tao
2ab41dfb60 python: do not replace ccache in the middle of a path
The Python recipe did a sed s/ccache/$(CCACHE) on the Makefile, which
replaces every "ccache", including ones that are part of a full path.
This leads to a build error when building in a project path with
"ccache" in its name. Fix it by only replacing "ccache " with
"$(CCACHE) ".

(From OE-Core master rev: 1181112cf65bc0186807fc59399c5dddcb9f9449)

(From OE-Core rev: 3081a1d16d2e1928f11c221c02d92b2c23fa1d85)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Cristiana Voicu
fdad6bae43 babeltrace: correct PV variable
(From OE-Core master rev: ceba66859f87904066e67504a53fc8b07f4b1111)

(From OE-Core rev: c4fa4b8f42a8d4476e623c014604ffc14a762c24)

Signed-off-by: Cristiana Voicu <cristiana.voicu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:13 +00:00
Richard Purdie
88d7fd1b09 gcc-cross-canadian: Fix fortran build
When fortran was enabled, builds were failing due to extra files.
For now we can remove these and avoid the build failure.

(From OE-Core master rev: 2e60ef7fe63974e443a9ddc25c5eb4249ec37963)

(From OE-Core rev: 2c0a74e2a77ffd542cf45868646a054e0952c77a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Richard Purdie
6241dcf765 image.bbclass: Depend on virtual/kernel:do_deploy
Now that none of the packagegroups depend on virtual/kernel, we have the problem
that MACHINE=qemumips bitbake core-image-minimal doesn't put a kernel
into the deploy directory. This breaks many common usecases and
user expectations.

To avoid this, add a dependency on the kernel deploy to image do_build tasks.
This should avoid any circular dependency issues but equally ensure users
have their expectations met.
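
The dependency can be expressed as an inter-task dependency, roughly:

    # make the image's do_build wait for the kernel's deployed artifacts
    do_build[depends] += "virtual/kernel:do_deploy"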

[YOCTO #5581]

(From OE-Core master rev: fe26b2379ecdbdb56acde8592bc0c2d95092a207)

(From OE-Core rev: 444a8a0b235c0c48685fe84f0c37031906d03921)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:
	meta/classes/image.bbclass

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Richard Purdie
ef00c0c0b5 base/gcc-common: Ensure umask setting is consistent for shared workdir
gcc has cross and target components with a shared workdir. The unpack umask
settings need to match for all of these. We need to use strings in each
case to ensure the sstate code matches them correctly.

This patch tweaks various things to ensure the change adding the unpack umask
change doesn't break the compiler builds.

(From OE-Core master rev: 67162438ee9c402b23c32853af9d313949eb6e4a)

(From OE-Core rev: 3e8776e3fc09ba11867457e0be6b4c3a4a01e2c6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Martin Jansa
675d4b4c0e base.bbclass: Set umask 022 also for do_unpack task
* when git checks out files from the fetched clone it respects the system umask
  and creates files with differing permissions; if such files are copied
  into packages, the resulting target images also have differing permissions
  on them.
* we need reproducible builds across different builders with different
  system umasks, so set a 022 umask
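
The fix boils down to a task flag on do_unpack, roughly:

    # force a known umask for unpacking regardless of the builder's setting
    do_unpack[umask] = "022"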

[YOCTO #5590]

(From OE-Core master rev: c9289c506633ffe5c482000d8d225e45454c064d)

(From OE-Core rev: 8ce99fa4e3868450d7339edf5e8e02bd99117893)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Martin Jansa
a79904aee4 xinput-calibrator: add formfactor to RDEPENDS
* 30xinput_calibrate.sh is calling ". /etc/formfactor/config"
  breaking Xsession for images without formfactor

(From OE-Core master rev: 181a46da02d6ae74a8d1b5d06c547e0d213767ea)

(From OE-Core rev: 33ceb17db1c1cb80492a7d25bc1f95cfe9d7bb76)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Nick D'Ademo
9366a6fbb2 libav: install libraries to right directory when multilib is enabled
Explicitly set libdir and shlibdir to ${libdir} in EXTRA_OECONF. Otherwise, the default library path of ${prefix}/lib is used, which is incorrect in a multilib build.
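
Roughly, in the recipe:

    # point both install locations at the multilib-aware libdir
    EXTRA_OECONF += "--libdir=${libdir} --shlibdir=${libdir}"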

(From OE-Core master rev: e16b6bab8d5286cdf58d808ef4c195127d69a8c8)

(From OE-Core rev: 62e98e8f70adbf0b878d8410a801f94eba938ba5)

Signed-off-by: Nick D'Ademo <nickdademo@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Robert Yang
22247dcebc coreutils 6.9: fix coreutils.texi
This fixes the coreutils 6.9 (GPLv2+) do_install failure:

[snip]
| coreutils.texi:2499: @itemx must follow @item
| coreutils.texi:2636: @itemx must follow @item
| coreutils.texi:2644: @itemx must follow @item
| coreutils.texi:2654: @itemx must follow @item
| coreutils.texi:2677: @itemx must follow @item
| coreutils.texi:2689: @itemx must follow @item
| coreutils.texi:2820: @itemx must follow @item
| coreutils.texi:3058: @itemx must follow @item
| coreutils.texi:3253: @itemx must follow @item
[snip]

Use '@item' instead of '@itemx' in several places, as Texinfo 5 refuses
to process an '@itemx' that is not preceded by an '@item'.  Ensure that
node extended names in menus and sectioning are consistent, and that
ordering and presence of nodes in menus and in the actual text are
consistent as well.

[YOCTO #5593]

(From OE-Core master rev: 04fab782f42b8f5047390042618f9c841b8c3a96)

(From OE-Core rev: dfea78ff2f0479fae436462aa424b3f065c4baf3)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Hongxu Jia
ceacf28277 qemu: add PACKAGECONFIG for vnc, libcurl, nss, uuid, curses, gtk+, libcap-ng
Use PACKAGECONFIG to explicitly address the vnc, libcurl, nss, uuid, curses, gtk+,
and libcap-ng dependencies rather than having configure detect them.

This avoids potential errors when multiple builds share a common sstate cache.

(From OE-Core master rev: 4482af07df26644885bae49b98f5d765a5caa68c)

(From OE-Core rev: ea9eda2fc54b9dec0f4bd039bcda039388f0a95d)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Ming Liu
a639dd8673 qemu: explicitly disable xen support
We don't make use of xen, and when building on Ubuntu 13.04 with
libxen-dev installed on the build host you will get errors like the
following:

| /usr/include/x86_64-linux-gnu/bits/string3.h:81: warning: memset used with constant zero length parameter; this could be due to transposed parameters
| /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libxenguest.so: undefined reference to `lzma_alone_decoder@XZ_5.0'
| /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libxenguest.so: undefined reference to `lzma_code@XZ_5.0'
| /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libxenguest.so: undefined reference to `lzma_stream_decoder@XZ_5.0'
| /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libxenguest.so: undefined reference to `lzma_end@XZ_5.0'

This change disables xen for both -native and target packages but
since it is a PACKAGECONFIG a user could tune this to have xen support
in the target package.

(From OE-Core master rev: fd638b975aac826d7137fd11db94b64ba82de592)

(From OE-Core rev: e0947757829b4e1f953bec7b42b28d2c452442c2)

Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Ming Liu
64dfb07861 qemu: use PACKAGECONFIG to address xfsprogs dependency
To avoid an implicit build result.

(From OE-Core master rev: 3e302e94ba5bcbba2736f37c0f67cfaf7fa45c0c)

(From OE-Core rev: b94e55928a07e672610e264f934b90649282ed53)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:12 +00:00
Paul Eggleton
966f24b2ae linux-firmware: add missing linux-firmware-iwlwifi-7260-7 package
The FILES / RDEPENDS lines were added for this package, but not the
entry in PACKAGES, so it was never being created.

(From OE-Core master rev: 25a75e83550fab0f9d2486b13ec9ab6339b6a8b0)

(From OE-Core rev: 70ff14833d770ff25baf86116430ea37b5859f11)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:11 +00:00
Enrico Scholz
7a08c3d230 bluez4: added dependency on 'libsndfile1'
bluez4 detects and uses libsndfile1 and the compilation can fail with

| sbc/sbctester.c:32:21: fatal error: sndfile.h: No such file or directory
| ...
| compilation terminated.
| make[1]: *** [sbc/sbctester.o] Error 1

in rebuilds (image with libsndfile1 was built, then some change ->
bluez4 do_configure runs with libsndfile1 -> libsndfile1 gets removed
-> bluez4 do_compile fails).

As there is no trivial way to disable its detection and to make it a
PACKAGECONFIG option, 'libsndfile1' was put into static DEPENDS.

(From OE-Core master rev: b9571256f8996d1eb4b9a09b3b5b862a13f1b414)

(From OE-Core rev: 2e747793922aa8dbfd7050e074994b9686e0c9f0)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:11 +00:00
Martin Jansa
d12d209442 avahi: add leading space to RRECOMMENDS append
* in case update-rc.d is already in RRECOMMENDS it fails with
  ERROR: Nothing RPROVIDES 'update-rc.dlibnss-mdns' (but
  meta/recipes-connectivity/avahi/avahi_0.6.31.bb
  RDEPENDS on or otherwise requires it)
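
For illustration (variable name abbreviated), an _append supplies no
separator of its own, so the appended value must carry a leading space:

    # note the leading space: without it the value glues onto the previous
    # entry and produces names like "update-rc.dlibnss-mdns"
    RRECOMMENDS_${PN}_append = " libnss-mdns"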

(From OE-Core master rev: 70dedb67c2b8b7302dc4c51e8c607e57f61f530a)

(From OE-Core rev: 8491f6b78591d611ae93fd6015b38c0eccedc9b2)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:11 +00:00
Kai Kang
26e4693667 x264: install libraries to right directory when enable multilib
x264 uses [EPREFIX/lib] as the default libdir. When multilib is enabled that
is not right: packages that depend on x264, such as libav, fail to configure
because they can't find the x264 library.

Pass the right libdir to the configure script to fix it.

(From OE-Core master rev: d1deb07d158cf27bce2ee95e2f02b4fd1d00fe21)

(From OE-Core rev: baa97fc4baf4c87bb850b88a55144395b5c7e11e)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:11 +00:00
Hongxu Jia
575612402a kconfig-frontends: add python to kconfig-frontends's RDEPENDS
| Note: adding Smart RPM DB channel
|
| Note: to be installed:  kconfig-frontends@x86_64 run-postinsts@x86_64 kernel-modules@qemux86_64 packagegroup-core-boot@qemux86_64
| Loading cache...
| Updating cache...               ######################################## [100%]
|
| Computing transaction...error: Can't install kconfig-frontends-3.10.0.0-r0.0@x86_64: no package provides /usr/bin/python
|
| Saving cache...
|
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_rootfs (log file is located at tmp/work/qemux86_64-wrs-linux/wrlinux-image-glibc-small/1.0-r1/temp/do_rootfs/log.do_rootfs.30959)

(From OE-Core master rev: f15af9a8d603b2eb3a8433367ddadecd714128d3)

(From OE-Core rev: 0415e098d35f163fcfc87c9ca6338ace60b1cc93)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-02-09 11:04:11 +00:00
Robert Yang
c3cccbea7a poky: Add Fedora-20 to supported distros list
(From meta-yocto rev: cf42610ac26307f28d5b3fea6be8bde223c0ed40)

(From meta-yocto rev: aa345747f3ece836c209f2623c21acb294f07e96)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-28 00:37:58 +00:00
Richard Purdie
ed63827ed2 poky: Add Ubuntu-13.10 to supported distros list
(From meta-yocto rev: bbde6b42ff2556d090410b49c083609956789eda)

(From meta-yocto rev: 99d193f8246a6c306b1b54ef67045907f0f31ca5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-28 00:37:58 +00:00
Martin Jansa
7f070ca4f1 ltp: set PREFERRED_PROVIDER and rename runtests_noltp.sh script
* ltp installs 2 different runtests_noltp.sh files from different
  directories into /opt/ltp/testcases/bin/runtests_noltp.sh; the
  last one installed wins and causes unexpected changes in
  buildhistory's files-in-image.txt report, so rename them to have
  unique names as other ltp scripts have.

* also define PREFERRED_PROVIDER to resolve note shown when
  building with meta-oe layer:
  NOTE: multiple providers are available for ltp (ltp, ltp-ddt)
  NOTE: consider defining a PREFERRED_PROVIDER entry to match ltp

* use patch generated without -M
  in my builds both versions worked, but Saul reported that it fails to
  apply with:
  Applying patch
  0001-Rename-runtests_noltp.sh-script-so-have-unique-name.patch
  patch: **** Only garbage was found in the patch input.

  Now I've seen the same issue on a different builder (with Ubuntu 12.04).

  (From OE-Core master rev: ec3bb2c2203b2e8bafc1a631f623f858779e20b7)

(From OE-Core rev: 198623d80d31f19c963e61d03cbcb12dd318dfdf)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-24 12:48:39 +00:00
Mike Crowe
97af6e46da kernel.bbclass: Stop bundle_initramfs thwarting sstate cache and fix race
The new do_bundle_initramfs task introduced in
609d5a9ab9e58bb1c2bcc2145399fbc8b701b85a defeats using the sstate
cache. The kernel is resurrected from the sstate cache but ends up being
built again since do_bundle_initramfs depends on do_compile.

The task is no longer nostamp to avoid causing unnecessary rebuilds. The
sstate checksum stamps should know when to rebuild.

The task now runs before do_deploy and part of the work has been moved to
do_deploy where it now writes to ${DEPLOYDIR} rather than
${DEPLOY_DIR_IMAGE} so that the files end up in sstate.

The task can also race against do_install since both call into the kernel
build system. This is fixed by making do_bundle_initramfs run after
do_install (which therefore also fixes the problem that
3baa63b4d588c3262254528b406ede265dd117bf was addressing.)
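
The ordering described above can be captured with BitBake's task
declaration, roughly:

    # a normal stamped task (no longer nostamp), running after do_install
    # to avoid the race and before do_deploy so its output reaches sstate
    addtask bundle_initramfs after do_install before do_deploy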

(From OE-Core master rev: 55989cb509340bd265d0ce0d8bfe849681be4616)

(From OE-Core rev: e64d2a6d408ac542bdaa42680d10d0fb05b92a60)

Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-24 12:48:39 +00:00
Mike Crowe
3c9d11bf7f Revert "kernel.bbclass: move bundle_initramfs after kernel_link_vmlinux"
This reverts commit 3baa63b4d588c3262254528b406ede265dd117bf. It broke
builds that aren't using kernel-yocto.

(From OE-Core master rev: 81831db1c32afa3346f3ed9f4325ad280e5bb005)

(From OE-Core rev: 7d98d619462151db73611b48c5944339c40ae805)

Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-24 12:48:39 +00:00
Scott Rifenbark
d211e8f47a poky.ent: Fixed broken OE_LISTS_URL.
This variable was wrong and it was causing six mailing list links in
the manual set to not resolve. Who knows how long they have been
broken. They work now.

(From yocto-docs rev: 977f372228e8d1af016c973c4543bdd803bfe546)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-21 21:57:34 +00:00
Scott Rifenbark
74a0e16571 poky.ent: Updated lists.linuxtogo.org with lists.openembedded.org
(From yocto-docs rev: 8391b80060e04bb218ea249795a26d3d106cce7e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-21 21:57:33 +00:00
Richard Purdie
bee7e3756a eglibc-initial.inc: Drop duplicate include
There is little point in including the file twice so let's not. The
main recipe already included it.

(From OE-Core rev: 243c5a38cc4e95f47ba18210fea1b86a7f58b099)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-16 12:25:03 +00:00
Cristiana Voicu
8c102dba49 bitbake: hob/hoblistmodel: check if vals of packages/recipes names are not None
[YOCTO #5053]

(Corresponds to BitBake master rev: ba9fe77e37be31e8246431578902e871dd94515e)

(Bitbake rev: a1ced88ae46926e28005b83a74eaf89d70dc2b74)

Signed-off-by: Cristiana Voicu <cristiana.voicu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-08 17:41:17 +00:00
Scott Rifenbark
c89cfd5720 adt-manual: Deleted mis-leading sentence from configure section.
A customer reported a wrong and mis-leading sentence in the
"Configuring and Running the ADT Installer Script" section.
Jessica Zhang pointed this out.  I have removed the sentence
altogether.

Reported-by: Jessica Zhang <jessica.Zhang@intel.com>
(From yocto-docs rev: 4b8f882037de3e853d00552af5cff83afac18a66)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-08 13:01:44 +00:00
Scott Rifenbark
ed0353828e poky.ent, documentation: Updated manual release dates.
Updated poky.ent to use 2014 as the top-end copyright year.
Updated all the Manual Revision History tables to use January
2014 as the 1.5.1 release date.

(From yocto-docs rev: 885a89231c664ccbd9032c45584aa19dce7c0b38)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-08 13:01:44 +00:00
Richard Purdie
9cea2e49d0 bitbake: imagedetailspage: Fix crash with more than 15 layers
If you had more than 15 layers the system would crash since one more
value is added to one array than the other. This fixes the code
so equal numbers of values are added to the arrays and hence
doesn't crash when many layers are enabled.

(Bitbake rev: ae420d37fd57a567cf3d2d8096cc9aa28ed01385)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-02 13:08:48 +00:00
Richard Purdie
564c687c2a bitbake: parse/ConfHander/BBHandler/utils: Fix cache dependency bugs
Currently bitbake only adds files to its dependency list if they exist.
If you add 'include foo.inc' to your recipe and the file doesn't exist,
then later you add the file, the cache will not be invalidated.

This leads to another bug which is that if files don't exist and then
you add them and they should be found first due to BBPATH, again the
cache won't invalidate.

This patch adds in tracking of files we check for the existence of so
that if they are added later, the cache correctly invalidates. This
necessitated a new version of bb.utils.which which returns a list of
files tested for.

The patch also adds in checks for duplicate file includes and for now
prints a warning about this. That will likely become a fatal error at
some point since its never usually desired to include a file twice.

The same issue is also fixed for class inheritance. Now when a class
is added which would be found in the usual search path, it will cause
the cache to be invalidated.

Unfortunately this is old code in bitbake and the patch isn't the
neatest since we have to work within that framework.

[YOCTO #5611]
[YOCTO #4425]

(Bitbake rev: 22e6b1c4c4afb27057689bbc94cbdf1f19f93e3d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-02 13:08:47 +00:00
Richard Purdie
54534a6100 bitbake: data: Fix output inconsistencies for emit_var
VAL = ""     (not shown)
VAL = " "    (shown as "")
VAL = " x"   (shown as "x")

would all show up rather differently to what would be expected in the
bitbake -e output. This fixes things so they appear consistently.

The output for running some shell functions may also change slightly
but shouldn't change in a way that is likely to cause problems.

[YOCTO #5507]

(Bitbake rev: 9f37afff200d748beddc2a70f55a72c2714e3120)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-02 13:08:47 +00:00
Richard Purdie
faef2d9588 bitbake: runqueue/bitbake-worker: Fix dry run fakeroot issues
When using the dry run option (-n), bitbake would still try to fire
a specific fakeroot worker. This is doomed to failure since it might
well not have been built.

Add in some checks to prevent the failures.

[YOCTO #5367]

(Bitbake rev: 78ae96e667d3fbb8649fe25eb073e15a99d61cc8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2014-01-02 13:08:47 +00:00
Paul Eggleton
3cf2d23252 libsoup-2.4: add intltool-native to DEPENDS
The configure script looks for this; most of the time dependency chains
ensure this is present but we need to be explicit or failures can
occur.

Reported by Nicolas Dechesne <nicolas.dechesne@linaro.org>

(From OE-Core master rev: 22e45ed7d74ceb4a719e7b5889400c20ed4a0783)

(From OE-Core rev: e86622a932bbd0acdea67ecfb15c8b06c27353d8)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-20 09:29:05 +00:00
Richard Purdie
8e410e9e46 build-appliance-image: Update to dora head revision
(From OE-Core rev: 2e9df12e67b3e56ed3c056559aa8eced6444ec93)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:34:54 +00:00
Beth Flanagan
785b7e3929 poky.conf: Flip distro to 1.5.1
DISTRO needs to be flipped for the pending point release

(From meta-yocto rev: efb1dd56721320f767eb3066567f8caeb32580a2)

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:33:58 +00:00
Scott Rifenbark
8477e77230 ref-manual: Fixed the reference to the script for icecc class.
(From yocto-docs rev: 51afdedc5c9bb6b689e7cf8771e0889d445f5326)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:53 +00:00
David Nystrom
9c371cf3f4 ref-manual: Reverted a patch that had added sdk-pms
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:53 +00:00
Scott Rifenbark
cf1a1b9a08 ref-manual: Added module and module-base classes.
(From yocto-docs rev: be1e564483299a018e28f1971dbe85f8485c9b83)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:53 +00:00
Scott Rifenbark
7cea11f64d ref-manual: Removed "work" from the SDK_DEPLOY directory.
The directory is not a temporary thing.

(From yocto-docs rev: d40d17ed80ebdb738bca9c86cd1381cd1442e5b8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:52 +00:00
Scott Rifenbark
ce2dbff102 ref-manual: Edit to SDK_DEPLOY removing "temporary" from directory.
(From yocto-docs rev: a88e4a770b1fe536285269055ba0655c702f0d70)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:52 +00:00
Scott Rifenbark
e66cff17ab ref-manual: Edits to gnomebase class.
Review comments from Ross.

(From yocto-docs rev: 88ce4d4e88671a968d3fee84dd3b8e1b64e84282)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:52 +00:00
Scott Rifenbark
dd3ddc619c ref-manual: Edits to setuptools class.
Review edits from Paul.

(From yocto-docs rev: 8089f69979f872b1c756fb1e1703fa0ea6965426)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:51 +00:00
Scott Rifenbark
61282366a6 ref-manual: Minor edits to rootfs* class.
Review comments from Paul.

(From yocto-docs rev: effc8e811020e00bfd98d065e412db5fe3f78f04)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:51 +00:00
Scott Rifenbark
6f654f107b ref-manual: Edits to GTKIMMODULES_PACKAGES variable.
Used a better word to describe the argument list.

(From yocto-docs rev: 15f14a3a36d345c655e60bc7a4b7d19c02d26f2c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 16:29:51 +00:00
Richard Purdie
7a0033cc4e build-appliance-image: Update to dora head revision
(From OE-Core rev: d68c267f3387d7fe221d3c5653a66db8b1f78fd8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 09:18:30 +00:00
Steve Sakoman
f20ec6bd6b wpa-supplicant: enable CONFIG_CTRL_IFACE_DBUS_NEW
Without this option wifi support in connman will fail:

src/technology.c:technology_get() No matching drivers found for wifi

(From OE-Core rev: 403e365e433c54633bcc843b32487a766282226e)

(From OE-Core rev: 2e532f33c5e97751daa89c9f92c6de8513564be0)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 09:17:17 +00:00
Bastien JAUNY
37e9b199a6 yocto-bsp: Add missing format specifier in bblayers error message
If the build environment is misconfigured (e.g. a bad path
for a layer in bblayers.conf), the yocto-bsp script crashes with a
plain Python error that is not very explicit.  This fixes the problem.

 Signed-off-by: Bastien JAUNY <bastien.jauny@gmail.com>

(From meta-yocto master rev: 4a8e80b812eebdc1c9570b5d88aa0f3b34824b68)

(From meta-yocto rev: 578e06f113d870ec6a4e201458488344ca941e3d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 09:14:36 +00:00
Bruce Ashfield
a410a0a6d7 meta-yocto-bsp: update SRCREVs for 3.10.17 and beagleboard fixes
Updating the BSP SRCREVs to pull in the 3.10.17 core update and fix
USB powerup issues on the beagleboard.

(From meta-yocto master rev: d82870a9561662919a737dd126a8d26e2b78144a)

(From meta-yocto rev: 17403f07a5ec54f867515dc8cb8bd65fd232c6f5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 09:14:35 +00:00
Saul Wold
849d440670 package-regex: Tweak python-docutils so it works correctly
(From meta-yocto master rev: 8bd33820b4d1944a9f7730f8e2676d0d45e1cd0b)

(From meta-yocto rev: 3b2b8cce34043766e31ba501052f8d1668f4a2b3)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-13 09:14:35 +00:00
Steffen Sledz
944b153d31 gdb-7.6: fix cygwin check in configure script
This fix avoids false positives when the search pattern "lose" is
found in path descriptions inside comments generated by the
preprocessor, which we hit in our development environment.

[gdb Bug #16152] -- https://sourceware.org/bugzilla/show_bug.cgi?id=16152

Upstream-Status: Accepted
(From OE-Core rev: 7e2dbda690b480ab05d14353cb038749ce23d58c)

Signed-off-by: Steffen Sledz <sledz@dresearch-fe.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 22:42:50 +00:00
Richard Purdie
a49258a85b build-appliance-image: Update to dora head revision
(From OE-Core rev: e638af73a01496e246f1a55d27364861a1f0b0d0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:06:04 +00:00
Enrico Scholz
a9b6e16daa qt4: fixed dependency on icu
Commit 46dcec6fd455584d9b5d0d7ff1e5b36fbe5a2d62 added 'icu' to DEPENDS
in qt4-x11 only, but enabled icu globally in qt4.inc.

This breaks the build of qt4-embedded because that recipe does not have
such a DEPENDS entry but does use qt4.inc:

| icu.cpp:42:28: fatal error: unicode/utypes.h: No such file or directory
|  #include <unicode/utypes.h>
|                             ^
| compilation terminated.
| make: *** [icu.o] Error 1

The patch moves the 'icu' dependency into qt4.inc.

(From OE-Core rev: adb6e64d69fc947f2c8fa708dcbe854fd2b574f8)

(From OE-Core rev: ec35d5b4b3d2eed7a141f2fd41d5ed7215e66dbf)

Signed-off-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Cc: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:02:36 +00:00
Baogen Shang
5f0074f022 libtiff: CVE-2013-4243
cve description:
Heap-based buffer overflow in the readgifimage function in the gif2tiff
tool in libtiff 4.0.3 and earlier allows remote attackers to cause a denial
of service (crash) and possibly execute arbitrary code via a crafted height
and width values in a GIF image.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-4243

(From OE-Core rev: a2a200a3951cecd7dd43dee360e0260051c97416)

Signed-off-by: Baogen Shang <baogen.shang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:25 +00:00
Baogen Shang
3932287231 libtiff: CVE-2013-4232
cve description:
Use-after-free vulnerability in the t2p_readwrite_pdf_image function
in tools/tiff2pdf.c in libtiff 4.0.3 allows remote attackers to cause
a denial of service (crash) or possible execute arbitrary code via a
crafted TIFF image.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-4232

(From OE-Core rev: 60482e45677c467f55950ce0f825d6cb9c121c9c)

Signed-off-by: Baogen Shang <baogen.shang@windriver.com>
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:24 +00:00
Chen Qi
0a21008737 nfs-utils: explicitly rdepend on bash
Scripts in nfs-utils need bash as their interpreter, so if nfs-utils
doesn't explicitly rdepend on bash, we would experience build failures
if we add nfs-utils to glibc-small images.

Add bash to RDEPENDS to solve this problem.
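
For illustration, the addition described amounts to a single line in the
nfs-utils recipe, roughly (exact placement may differ):

    RDEPENDS_${PN} += "bash"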

(From OE-Core rev: 06c566596a92a309ca228a209f14d03b69a611c9)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:24 +00:00
Ming Liu
8cccf5fc9f libtiff: fix CVE-2013-1960
Heap-based buffer overflow in the tp_process_jpeg_strip function in tiff2pdf
in libtiff 4.0.3 and earlier allows remote attackers to cause a denial of
service (crash) and possibly execute arbitrary code via a crafted TIFF image
file.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-1960

(From OE-Core rev: 66387677cbd85ba4a76a254942377621acd68249)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:24 +00:00
Ross Burton
84087e94cb qt4-x11-free: depend on ICU
ICU presence is auto-detected at configure time and until recently (e68850 and
d61230) was pulled into most builds through harfbuzz and beecrypt.  Now it's
floating and this leads to build failures.

As in all likelihood the majority of people were building this with ICU enabled,
add an explicit dependency.

(From OE-Core master rev: 46dcec6fd455584d9b5d0d7ff1e5b36fbe5a2d62)

(From OE-Core rev: 034d3e35cce9ee63f6139d19be9b3edec4f97a64)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:24 +00:00
Nicolas Dechesne
e8aa4b5784 image-mklibs: ensure sysroot is correctly set when calling gcc
[YOCTO #2519]

When getting gcc from sstate, it is possible to get a gcc with a bogus
sysroot configuration, as discussed in [1] or in [YOCTO #2519].

mklibs script will eventually call gcc, so we need to make sure that it
provides gcc with the right sysroot location.

[1] http://lists.openembedded.org/pipermail/openembedded-core/2013-September/084159.html

(From OE-Core master rev: 3a66dd762e493ad2cda57110be67c3b06628050a)

(From OE-Core rev: 7275425524b8bb3d16d5c0c0a62aee5b08359ffd)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:23 +00:00
Lei Liu
d8c8742e45 syslinux: use cross toolchain to compile
syslinux compiles some things with the host gcc at the do_install
stage, which leads to unexpected errors with an old gcc
on the host.  Use our cross toolchain instead.

(From OE-Core master rev: b0da7ccde5380726acfccf1a96cdf5560edf9159)

(From OE-Core rev: 17f851f5c8e2a90f558396654b62a8f0f88f137f)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:23 +00:00
Mark Hatle
e8dbbef1d0 util-linux: Package readprofile into it's own package
readprofile was missing from the alternative configuration, which was
causing readprofile to be packaged into the base util-linux.

(From OE-Core master rev: cac08f23aaed87148d1825cca3c7586ab891ef04)

(From OE-Core rev: a6fc9e62e848634e715b23f147c88a8710415845)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:23 +00:00
Jeffrey C Honig
f0323bb8ce rootfs_*.bbclass: List which post-install scripts can not be run
When prepping a read-only rootfs and finding some post-install
scripts that cannot be run, list the names of those scripts to
avoid having to look around the rootfs to find them.

(From OE-Core master rev: 0188120691f433fdccf71b92618115195278c0af)

(From OE-Core rev: 2820f7fa411e5ca1cd7df765896b43716418340a)

Signed-off-by: Jeffrey C Honig <jeffrey.honig@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:23 +00:00
Lei Liu
25e21e3dca package_rpm.bbclass: Replace -linux-gnun32 with -linux.* in RPM platform file
On a multilib system, when one of the multilibs has a different OS than
the other multilibs, a failure can occur during the install process
because RPM assumes all systems have the same OS.

When an n32 platform is selected as an alternative multilib, it shows
up as mips64_n32-.*-linux-gnun32 in /etc/rpm/platform.  This causes
problems when the smart tool tries to add a channel for the multilib.
RPM archScore call always returns zero for arch "mips64_n32" -
after appending default vendor and os, it finds "mips64_n32-wrs-linux"
doesn't match any predefined platforms.  Fix this by removing the
restriction of -gnun32 suffix in platform file.
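
Concretely, the /etc/rpm/platform entry described above changes along
these lines (before and after, per the subject):

    mips64_n32-.*-linux-gnun32
    mips64_n32-.*-linux.*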

(From OE-Core master rev: d9489c44ee4f195ae1b09f340b9545cddba58145)

(From OE-Core rev: f0118b605b3727b5ca5d560094bb4dd2ff29c310)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:22 +00:00
Ming Liu
d42a2c3813 gst-ffmpeg: fix CVE-2013-3674
The cdg_decode_frame function in cdgraphics.c in libavcodec in FFmpeg before
1.2.1 does not validate the presence of non-header data in a buffer, which
allows remote attackers to cause a denial of service (out-of-bounds array
access and application crash) via crafted CD Graphics Video data.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-3674

(From OE-Core master rev: f1721553a873b242bc26ad3e4d618aea39dfd507)

(From OE-Core rev: d4d908afca1c6ce64f05fabdadfcf86ab990dd0f)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Jeff Polk <jeff.polk@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:22 +00:00
Lei Liu
532660dcfe base.bbclass: Fix incorrect setting of multilib PREFERRED_PROVIDER_virtual_pkg
PREFERRED_PROVIDER_virtual_pkg has been incorrectly set with more
than one multilib prefix.  For example, if we have two alternative
multilibs lib64 and lib32, PREFERRED_PROVIDER_virtual_pkg will be
set to lib32-lib64-pkg or lib64-lib32-pkg, depending on which
multilib shows up first in the list.

(From OE-Core master rev: 17a432dc059e24ba10d4baec988828c0025a5e46)

(From OE-Core rev: e5d8411869a2a018d0c8ab9d0e888027ac4208d5)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:22 +00:00
Richard Purdie
6fe4b4dc95 ethtool: Fix ptest compile
buildtest-TESTS is a phony target and does nothing, which results in a
do_install error since the tests aren't built. Since there isn't
a suitable make target but the number of tests is small, hardcode
the two to build so the build is unbroken when ptest is enabled.

(From OE-Core master rev: 5dd8653fdcda5e0e8b4f3c37a46f357bc97ec66c)

(From OE-Core rev: b9b213b43c5ff6aa7c04733ce035fc9832065328)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:22 +00:00
Chen Qi
0ed76a495f sysvinit: use ALTERNATIVE to manage sulogin
Busybox also provides a sulogin command, so we need to use the ALTERNATIVE
mechanism to manage it.
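
A rough sketch of what the update-alternatives hookup looks like in the
sysvinit recipe (variable names follow the update-alternatives class;
the link path is an assumption):

    ALTERNATIVE_${PN} += "sulogin"
    ALTERNATIVE_LINK_NAME[sulogin] = "${base_sbindir}/sulogin"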

(From OE-Core master rev: 8b3a799a87d18b1d113d59b3e7a681db5683e5f8)

(From OE-Core rev: 33d7487a92905824ab46f74d7185f84662ddb922)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:21 +00:00
Chen Qi
3656f81f3d license.bbclass: fix missing of license files on ubuntu build host
The license_create_manifest function contains a bashism, which leads
to unexpected results on an Ubuntu build host, where sh is linked to
dash. Even if COPY_LIC_MANIFEST and COPY_LIC_DIRS are enabled, the
license files will still be missing on the target.

This patch fixes the above problem.

[YOCTO #5549]

(From OE-Core master rev: 4df9daee5c732c0a20dabe8515577238a1508512)

(From OE-Core rev: 159e53f9402f1d1ceed8c6511c5874e199dea6e1)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:21 +00:00
Saul Wold
139920d199 image_types: newer btrfs.mkfs needs an empty file to build the disk in
(From OE-Core master rev: 836396a3450e7bf151956e87bd92f70c5050c995)

(From OE-Core rev: 1db74ff0544633c35fe0fdccf17a76ef833be0df)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:21 +00:00
Ming Liu
b416808c6e kernel.bbclass: move bundle_initramfs after kernel_link_vmlinux
${KERNEL_OUTPUT} is renamed/restored in the bundle_initramfs task, so we
must ensure bundle_initramfs runs after kernel_link_vmlinux, where the
vmlinux link is created as the bootable image.

(From OE-Core master rev: 3baa63b4d588c3262254528b406ede265dd117bf)

(From OE-Core rev: 4acf2eaea963d9b5e3cf547db092f95d192cf2ab)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:20 +00:00
Hans Beckerus
b4481bb962 initscripts: add missing dmesg.sh to run-level S
In commit faa8cc6c2a582a32c695f3f2b0d45b6892c769fd, dmesg.sh was
added to the set of init.d scripts, but the script was never put
in any run-level. This patch adds dmesg.sh to run-level S.

(From OE-Core master rev: 7d2767d4e27c6d0eaa56f3e126df56e65a5364c9)

(From OE-Core rev: fb7a6e0e3c5790415d56d14ec1a5eda5f9e6d039)

Signed-off-by: Hans Beckerus <hans.beckerus AT gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:20 +00:00
Wenzong Fan
f7d46fe7f3 udev: remove extra -dev/-dbg packages
We don't support multiple -dbg/-dev packages; the package can generate
them, but the system does not handle them correctly. Just move all devel
files into 'udev-dev' and all debug files into 'udev-dbg'.

(From OE-Core master rev: 014f7a33f399192268f28acac835551413c4768d)

(From OE-Core rev: a700493063efdc3f60fb77e9a4a15d9c89ee79c6)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:20 +00:00
Ming Liu
baa8ca3315 grub: add xz RDEPENDS
grub_2.0.0 requires xz to run or an error may occur.

(From OE-Core master rev: ffa2877c06c587d4ea56c55bfd0f67a88e42a772)

(From OE-Core rev: d3e1dde9ae75faa1015b80da74604c5ae360c3a6)

Signed-off-by: Ming Liu <ming.liu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:20 +00:00
Laurentiu Palcu
e830ecd686 flex: fix m4 issue on target
Flex needs m4 to run (see below) and, since create_wrapper
introduces a bash dependency on the target, give the path to the m4
binary on the configure command line.

Snippet from the flex documentation:
"The macro processor m4 must be installed wherever flex is installed.
<...>
m4 is only required at the time you run flex."
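
A sketch of how the m4 path can be handed to configure from the recipe
(the cache variable name is an assumption, not necessarily the exact
change made):

    EXTRA_OECONF += "ac_cv_path_M4=${bindir}/m4"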

[YOCTO #5329]

(From OE-Core master rev: 64030f37b34f75144f53eef42d5822ede79e08bd)

(From OE-Core rev: d039150eae579af1bd85000982ef38a6b09bb90d)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:19 +00:00
Chunrong Guo
e5d7ea2bab mdadm: flag __SANE_USERSPACE_TYPES__ to include int-ll64.h for powerpc64
* PPC64 uses long long for u64 in the kernel, but powerpc's asm/types.h
  prevents 64-bit userland from seeing this definition, instead defaulting
  to u64 == long in userspace.

* Fix the error below (see the sketch after this list):
  |super-ddf.c:4542:5: error: format '%llu' expects argument of type 'long long unsigned int',
  |but argument 5 has type '__u64' [-Werror=format=]
  |dprintf("BVD %u has %08x at %llu\n", 0,

(From OE-Core master rev: d3caab6eb03264b4f4d744f914598022299011ba)

(From OE-Core rev: c5e59e68efcf2a3f902dbfd827da57ed3e8ad4ce)

Signed-off-by: Chunrong Guo <B40290@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:19 +00:00
Paul Eggleton
5b616aa7b6 buildhistory_analysis: fix error when comparing image contents
OE-Core commit b7de1eaac9eed559b2d68058f5de67de74a6cb58 added an extra
argument to the compare_dict_blobs() function but missed adding the
argument to one call to compare two versions of the image-info.txt file.

(From OE-Core master rev: 24a45d752c3e3d0d8b59c040355e4fe7de22b041)

(From OE-Core rev: a51d96c44e6feac8322284c54bfc01ef598f8821)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:19 +00:00
Jeffrey C Honig
8072e0726c perl: perl-ptest.inc polutes package dependencies when ptest not enabled
When ptest is not enabled, the populate_packages_prepend function runs
whether ptest is enabled or not.  This causes ptest packages to end up in
the dependency list when ptest is not enabled.

(From OE-Core master rev: 826f4e4057a221127ac4c1d0658d975032fc7d90)

(From OE-Core rev: e739a64143901fa6f6f54e70445d19e9ce13dcf1)

Signed-off-by: Jeffrey C Honig <jeffrey.honig@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:18 +00:00
Phil Blundell
a8008ea07b boost: Add patch to avoid undefined references to boost::atomic::lockpool::get_lock_for()
Boost::thread uses functions from boost::atomic but doesn't actually
link with libboost_atomic.  This works fine on platforms where
BOOST_ATOMIC_FLAG_LOCK_FREE is true but will lead to undefined
symbol references otherwise.  Fix this by applying a patch from
the upstream bug tracker to add the missing library linkage.

(From OE-Core master rev: 1ffc27173576589191b037d111ecb59d94631de0)

(From OE-Core rev: 306fd06c7687340a0d4eb90e9aba9dc8669db07c)

Signed-off-by: Phil Blundell <pb@pbcl.net>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:18 +00:00
Saul Wold
fbe8b3ce1f sysvinit: unmount the psplash lazyily
There is a race condition where psplash has not quite exited before the
unmount occurs, causing a "umount: /mnt/.psplash: target is busy" message
to appear. It is fine to unmount lazily and avoid this message.
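
The lazy unmount described above is simply:

    umount -l /mnt/.psplash

umount -l detaches the mount point immediately and cleans it up once it
is no longer busy.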

[YOCTO #5244]

(From OE-Core master rev: 9ded366084f22f48ef72aa22acf6a38982d16d97)

(From OE-Core rev: 8c3e3c90daee1639ac8b2633d8f1e500697d9c52)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:18 +00:00
Otavio Salvador
50574e41b8 lttng-modules: Update to 2.3.3 version
This updates lttng-modules to 2.3.3, which also fixes the build with
the 3.12 Linux kernel.

While at it, we also renamed the recipe file to follow the other
lttng recipes, which include the version number in the file name.

(From OE-Core master rev: 2d01bd48e689656bbe6189243d077f822092a14a)

(From OE-Core rev: 213ba50b8a747e1489f03872339e3931a99963a4)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:18 +00:00
Bruce Ashfield
4ab99bedf2 linux-yocto/3.10: meta: ARM: OMAP3: Add USB PHY driver for Beagleboard
Updating the meta branch SRCREV to update the USB configuration:

    The Beagleboard needs the USB PHY drivers in the kernel in order to enable
    USB and Ethernet functionality. This fix ensures that they are built in
    by tweaking the kernel config.

    Tested on Beagleboard xM Rev. C2.

(From OE-Core master rev: 89a372840a957e540bed954e629aa68335b3dfe0)

(From OE-Core rev: ee9dd4a40916668927b0c4b4dfaa4f80c671c9b2)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:17 +00:00
Bruce Ashfield
58b59bd927 linux-yocto-rt/3.10: fix ntp merge issue
Updating the -rt SRCREVs to pick up the following fix:

    ntp: fix ntp_notify_cmos_timer merge issue

    PREEMPT_RT_FULL has a stubbed ntp_notify_cmos_timer due to a bad merge.
    Renaming and restoring the full -rt functionality to this routine.

(From OE-Core master rev: 41d4f0feca69bf1b41f16f5f7d21bf7540e6c47a)

(From OE-Core rev: 0336b83df666617d61e8cd39c2d2a7f9c745064c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:17 +00:00
Bruce Ashfield
0ebd48b90e linux-yocto/3.10: meta: ARM: OMAP3: Add USB PHY driver for Beagleboard
Updating the meta branch SRCREV to update the USB configuration:

    The Beagleboard needs the USB PHY drivers in the kernel in order to enable
    USB and Ethernet functionality. This fix ensures that they are built in
    by tweaking the kernel config.

    Tested on Beagleboard xM Rev. C2.

(From OE-Core master rev: 2a9944514362445ee891f6e77c4ae62950e247b3)

(From OE-Core rev: 670cd17b4ef4012a29eb4dee3c9899ccd5f967e6)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:17 +00:00
Bruce Ashfield
e546f17817 linux-yocto/3.10: fix qemuarm boot and spurious mips build warning
This update fixes two issues:

a) qemuarm boot failure

v3.10.13 picked up a patch for arm versatile interrupt mappings that fixes
the emulator boot out of the box. But it interacts badly with our previous
fix for the issue. Reverting the existing patch and going with the mainline
solution fixes the boot.

b) qemumips build warning and failure

Depending on the build host and compiler, the build of menuconfig throws
a potentially-uninitialized-variable warning. That warning causes an
error on archs with -Werror. We can make a trivial change to avoid the
warning altogether (initialize it to NULL) and keep everyone happy.

[YOCTO #5460]

(From OE-Core master rev: 8d1a041891c87d0c2003c80f84b0501bdc9403a1)

(From OE-Core rev: 3928340ea03dc04cda9eb2eea021837421adf737)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:17 +00:00
Bruce Ashfield
64e1d78fbe linux-yocto/3.10: common-pc: add missing dependencies for BRCMSMAC
Updating the meta branch SRCREV to import some configuration updates
for the common-pc-wifi feature:

   CONFIG_WEXT_CORE=y
   CONFIG_WEXT_PROC=y
   CONFIG_CFG80211_WEXT=y

   CONFIG_BCMA=m
   CONFIG_BCMA_HOST_PCI_POSSIBLE=y
   CONFIG_BCMA_HOST_PCI=y
   CONFIG_BCMA_DRIVER_GMAC_CMN=y

   CONFIG_CRC8=m
   CONFIG_CORDIC=m

(From OE-Core master rev: cdd8145a7f4abc75c4089a30206c277db2712649)

(From OE-Core rev: 3d3c7f004fd8c262f1b4e15e2552a5630f387457)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:16 +00:00
Bruce Ashfield
f28814e649 linux-yocto/3.8: add crystalforest bsp legacy block drivers configurations
Updating the meta SRCREV to include the latest crystalforest configuration
updates.

(From OE-Core master rev: 9480e5b7231a2923b5ebff9623827c5d90334df3)

(From OE-Core rev: a7b2d0550157413834c9c5a3022745b25ad97fef)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:16 +00:00
Bruce Ashfield
2b62f03f70 linux-yocto/3.10: haswell-wc and crystalforest support
Updating the 3.10 SRCREVs to add support for the haswell-sc and crystalforest
boards.

(From OE-Core master rev: 47ebe8677ac0dc4f8799d0af75f5b7bc611fd882)

(From OE-Core rev: 06bef5014bac6e0d935453330dfec05153d71ad0)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:16 +00:00
Bruce Ashfield
844cf62b43 linux-yocto-3.10: bump to 3.10.17 and -rt11
Updating the SRCREVs to reflect the integration of the .17 -stable release
and the preempt-rt up date to -rt11.

(From OE-Core master rev: cefa022b814b8b4f9afacecf3bb035d211a0f3bf)

(From OE-Core rev: 65491b70eeebcc579c8d9933be165fcd860bf2c7)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:15 +00:00
Bruce Ashfield
fbf836f259 linux-yocto/3.10: MinnowBoard support
Updating the 3.10 SRCREVs to update minnowboard support via the following changes:

   3F6C824 pch_gbe: Add MinnowBoard support
   9f52743 pch_gbe: Use PCH_GBE_PHY_REGS_LEN instead of 32
   ec7b5e6 pch_gbe: use managed functions pcim_* and devm_*
   fd8bf50 pch_gbe: convert pr_* to netdev_*
   9b278e9 serial: pch_uart: fix compilation warning
   8982d79 serial: pch_uart: Fix signed-ness and casting of uartclk related fields
   cdbf456 serial: pch_uart: Remove __initdata annotation from dmi_table
   9e7c25e pch_uart: Use DMI interface for board detection

(From OE-Core master rev: 6c7115a56c3d0bf3d6d0275bd2d49d8cfef5c028)

(From OE-Core rev: 436b39e4d9484dc74f88d635af97eee3d6d1704b)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:15 +00:00
Khem Raj
75cf26a02f libnl: Fix random segfaults due to memory corruption
This is a backport from upstream that fixes a severe memory-management
problem which resulted in random segfaults in applications depending
on libnl.

(From OE-Core rev: c3fb18aac0de49dc3113296699d95be298d98140)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:15 +00:00
Phil Blundell
ff80e69648 binutils: Add gnu-config-native to DEPENDS
do_configure() in binutils.inc includes an explicit call to
gnu-configize so we need to make sure that gnu-config-native is
present.  Previously this was being dragged in with the rest of the
autotools stuff, but commit 54a3e2ee37003fc56af0339f857b0b6442790c26
disabled that for binutils-cross on the grounds that "we don't
autoreconf" the toolchain components.  Fix this by adding
gnu-config-native itself explicitly to DEPENDS.

(From OE-Core master rev: 616354f13732d13c17434d5b60b166f691c25761)

(From OE-Core rev: c7b50b95ab27d3ea3b37e3ee16050af962746f5c)

Signed-off-by: Phil Blundell <philb@gnu.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:15 +00:00
Xufeng Zhang
5411bbcdb5 kernel.bbclass: Delay rm_work to run after do_bundle_initramfs
The kernel is built twice when we bundle the kernel and initramfs
together (after commit 609d5a9ab, "kernel.bbclass, image.bbclass:
Implement kernel INITRAMFS dependency and bundling"), so the second
kernel build fails if rm_work has already run.

To fix this problem, we need to make the do_bundle_initramfs task run
before the do_rm_work task.
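
One way to express that ordering in BitBake metadata (a sketch of the
intent, not necessarily the exact mechanism used in the class):

    addtask bundle_initramfs before do_rm_work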

[YOCTO #5416]

(From OE-Core master rev: 8308e22a44a2dea7d1bbfb429b9df9c63714a649)

(From OE-Core rev: fef443f1c40a3847cac08f4885d046acf6ede023)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:14 +00:00
Jackie Huang
94025c05b5 busybox: fix sed auto insert newline testcase
Backport the patch from busybox upstream to fix the
auto-insert-newline issue.

busybox defect:
https://bugs.busybox.net/show_bug.cgi?id=6584

(From OE-Core master rev: db00fa405c025c61844df2e9589fe635fc5df0e2)

(From OE-Core rev: 9266343cc00fefd3e7f33fa2e6b0da88b6abf622)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:14 +00:00
Qiang Chen
f8643d57ef irda-utils: restart irda daemon correctly
The irattach init script's faulty restart logic prevents the irda daemon
from restarting correctly.

root@qemu0:~# /etc/init.d/irattach restart
Restarting IrDA: Terminated
root@qemu0:~# ps aux | grep irattach
root       541  0.0  0.2   2400   612 ttyS0    S+   09:05   0:00 grep irattach

As shown above, irattach is not started after executing the restart
command. This commit changes the restart logic to stop first, then
start.
A prompt telling the user whether irattach started successfully or
failed was also added.
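
The restart logic described above boils down to the usual init-script
pattern, roughly:

    restart)
        $0 stop
        $0 start
        ;;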

(From OE-Core master rev: 39f266138b972b550979909b235a5779828d7d89)

(From OE-Core rev: 37ceb9ad0c45aca458e2ff4770b8a0535286a78e)

Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:14 +00:00
Qiang Chen
01411d9cf0 nfs-utils: modify nfsserver init script indent
When using sysvinit to check a service's status, the nfsserver status
always displays as [?] unknown.

This is because the sysvinit package checks whether a service's
init script supports the status function by running:
grep -qs "\Wstatus)" "$SERVICE"

So this commit modifies the indentation of status (and the other cases),
as most services' init scripts do.
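
For illustration, a status case written so that the sysvinit check above
can find it (hypothetical body):

    case "$1" in
        status)
            # "status)" is now preceded by whitespace, so it matches \Wstatus)
            echo "checking nfsserver status"
            ;;
    esac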

(From OE-Core master rev: a6b02fe439fa13c8482383fba2bfdcb0e9742141)

(From OE-Core rev: f9be1ec26cf1f313d7c22e26475296b0362c25ea)

Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:13 +00:00
Qiang Chen
c34300c72e openssh: fix sshd status command error prompt
The sshd status command results in an error message:

root@qemu0:~# /etc/init.d/sshd status
/usr/sbin/sshd (pid 1199) is running...
/etc/init.d/sshd: line 100: return: can only `return' from a
function or sourced script

"service --status-all" command also display wrong status for sshd.

This commit fix this error prompt and make service command display
right status for sshd.
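
The quoted error reflects a general shell rule: 'return' is only valid
inside a function or a sourced script, so a top-level status branch has
to use 'exit' instead. A hypothetical illustration (not the actual
script):

    status)
        pidof sshd > /dev/null && echo "sshd is running"
        exit $?
        ;;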

(From OE-Core master rev: e7cf83ec3f39a7c41e38c6030b0d903fa7d37b2a)

(From OE-Core rev: 1b5409b5b060459f15c32c89b1983122b2126f84)

Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:13 +00:00
Lei Liu
3e8bacfcb3 mklibs: add dependency on dpkg-native
mklibs requires the "dpkg-architecture" utility to work.
Add dependency on dpkg-native.

(From OE-Core master rev: 9811641e95dd7e1514eb41900e033a0548bd13d8)

(From OE-Core rev: f4a5005518e2384b22593063010a3fa1179f5861)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:12 +00:00
Lei Liu
728ecd93c5 image-mklibs: Fix grep pattern when mklibs collects executables in rootfs
Some versions of the file command print an extra space between
"LSB" and "executable", which means mklibs cannot find any executables
using grep "LSB executable".  Fix the grep pattern to also match
multiple spaces.
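
An illustrative pattern that tolerates the extra spacing (not
necessarily the exact expression used in the class):

    file "$i" | grep -qE "LSB +executable"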

(From OE-Core master rev: a52ef8c5dcd71f39bb48c71fb868cc0db662560e)

(From OE-Core rev: 78c22d9087b3058dd947f21bd24fa621aba7dd6d)

Signed-off-by: Lei Liu <lei.liu2@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:12 +00:00
Ross Burton
b4d22bb10f weston-init: start weston on a new VT
Weston 1.3 needs to run on a VT, which is typically handled by weston-launch.
Currently weston-init doesn't use weston-launch as that depends on the
(non-default) pam DISTRO_FEATURE, so depend on kbd and use openvt directly.
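
A sketch of the idea, assuming openvt from the kbd package (the real
init script most likely passes further options):

    openvt -s -- weston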

This also fixes problems caused by the init script blocking until Weston exits,
which meant that later init scripts were not actually running.

(From OE-Core master rev: 3726eb29cfa79a4a1fbdbcaa96f770063c482858)

(From OE-Core rev: d79f7846f5d538f6f835f52686fd2c749cb1b70f)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:12 +00:00
Chen Qi
6894ee1baf bootlogd: create log file if not present
Previously, our system had no boot log even if the bootlogd daemon was
started correctly. The root cause is that the log file doesn't exist
when starting the bootlogd.

Add '-c' option to bootlogd so that it will create the boot log if
it doesn't exist.

[YOCTO #5273]

(From OE-Core master rev: 6059be3ab60b8ab463d438c47bb17553d184a790)

(From OE-Core rev: f676a69dee845cfd6de7a0cc8bd0bb813a8dffc0)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:12 +00:00
Lu Chong
e078fa2537 ppp: Fix compilation errors in Makefile
This patch fixes the issues below:

1. Make does not exit when a compilation error occurs in a subdirectory while building plugins.

2. If ppp is built with a newer kernel (3.10.10), it picks 'if_pppox.h' from the sysroot and
   'if_pppol2tp.h' from its own source dir, which causes the build errors below:

        bitbake_build/tmp/sysroots/intel-x86-64/usr/include/linux/if_pppox.h:84:26:
        error: field 'pppol2tp' has incomplete type
          struct pppol2tpin6_addr pppol2tp;
                                  ^
        bitbake_build/tmp/sysroots/intel-x86-64/usr/include/linux/if_pppox.h:99:28:
        error: field 'pppol2tp' has incomplete type
          struct pppol2tpv3in6_addr pppol2tp;
                                    ^

The 'sysroot-dir/if_pppox.h' enables IPv6 support but the 'source-dir/if_pppol2tp.h' lacks the
related structure definitions; we should use both header files from the sysroot to fix this
build failure.

(From OE-Core master rev: b536824ea64b8d6729b830738bce637fc815e832)

(From OE-Core rev: 16968759d39534fb9a703903c6de65535d57777b)

Signed-off-by: Lu Chong <Chong.Lu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:11 +00:00
Jonathan Liu
252ab00746 eglibc_2.17.bb: accept make versions 4.0 and greater
(From OE-Core master rev: a678243d6e4add90c1e9459da42de34d3724db5d)

(From OE-Core rev: 9d59ceab6e58b422caf8ad7a306ed546316d8c3a)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:11 +00:00
Jonathan Liu
7a0761caa2 eglibc_2.18.bb: accept make versions 4.0 and greater
[YOCTO #5391]

(From OE-Core master rev: 1969ff9081d06109b7e8e7f9331f31fcb611c393)

(From OE-Core rev: bdc49826edf05233c6d506de404861a661043b1b)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:11 +00:00
Chen Qi
5d6fdbde7b extrausers.bbclass: avoid infinite loop
Avoid an infinite loop if the last record in EXTRA_USERS_PARAMS doesn't
end with a semicolon.

It's possible that users will write configurations like the one below.

INHERIT += "extrausers"
EXTRA_USERS_PARAMS = "useradd tester; useradd developer"

In that situation, the do_rootfs task enters an infinite loop, which is
never acceptable.

This patch fixes the above problem.
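
Independent of this fix, a defensive way to write the same configuration
is to terminate every record, including the last one:

    INHERIT += "extrausers"
    EXTRA_USERS_PARAMS = "useradd tester; useradd developer;"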

(From OE-Core master rev: bf4fb345a9db306fa4c7211b7e6795334a649dd5)

(From OE-Core rev: 05f9c15abb0705cd9f8aa28bfb3016485730b643)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:10 +00:00
Saul Wold
bd80aac525 mdadm: Disable the RUN_DIR check
This check was looking for /run/mdadm on the host system; the check is optional, so disable it.

[YOCTO #5447]

(From OE-Core master rev: d62882794890eeee8e8d5c9ba4837ec77a58d787)

(From OE-Core rev: 6c212e229ba1be0c536e7c36ebe2a3e8b2ef84b4)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:10 +00:00
Paul Eggleton
28cd3e8fb5 zisofs-tools-native: add missing DEPENDS on zlib-native
zisofs-tools links against zlib.

Fixes [YOCTO #5420].

(From OE-Core master rev: 6058a34f4f0d907a3a065a0323ca085df690bd9b)

(From OE-Core rev: 5d9861dc320ac66b56ac70463fd646e0fe4a1baf)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:10 +00:00
Lu Chong
f46e8aa9f9 ppp: Add two structures in if_pppol2tp.h
Some further structure definitions are needed in include/linux/if_pppol2tp.h for
IPv6 support; otherwise we get the error below:

	In file included from plugin.c:53:0:
	bitbake_build/tmp/sysroots/intel-x86-64/usr/include/linux/if_pppox.h:84:26:
	error: field 'pppol2tp' has incomplete type
  	  struct pppol2tpin6_addr pppol2tp;
        		          ^
	bitbake_build/tmp/sysroots/intel-x86-64/usr/include/linux/if_pppox.h:99:28:
	error: field 'pppol2tp' has incomplete type
	  struct pppol2tpv3in6_addr pppol2tp;
                		    ^
	make[2]: *** [plugin.o] Error 1

(From OE-Core master rev: 73d08c4bf12e2cc4f291cb018d00b26a5a573be4)

(From OE-Core rev: 398bae0d288f488020108c7d95bd376249e6ecbf)

Signed-off-by: Lu Chong <Chong.Lu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:10 +00:00
Jason Wessel
8b4c1515f1 grub-efi.bbclass: Fix startup.nsh to work on more EFI revs
Some revisions of the EFI firmware + shell do not automatically set up
the path in such a way as to execute a binary without an absolute
reference like "FS0:\EFI\BOOT\bootx64.efi".  All the versions that I
have tested work properly by simply calling the binary, which is in the
EFI\BOOT directory, by name, like "bootx64.efi".

The error you see on the console looks like the following:

startup.nsh> EFI\BOOT\bootx64.efi
'EFI\BOOT\bootx64.efi' is not recognized as an internal or external command, operable program, or batch file
Shell>

This patch simply drops the EFI\BOOT for greater compatibility.
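
With that change, the startup.nsh contents reduce to the relative
invocation:

    bootx64.efi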

(From OE-Core master rev: 754b52ea7a3cdf8e7e939a314525d16c4dfb52cb)

(From OE-Core rev: a65db42d4b0c6636ee6538a28930e4d19e170d18)

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:09 +00:00
Joao Henrique Ferreira de Freitas
d19f1a030e grub-efi.bbclass: Fixes GRUB_IMAGE when using boot-directdisk class
When the boot-directdisk class is used and EFI boot is set,
grub-efi-${TRANSLATED_TARGET_ARCH}-native needs to be a dependency.
This allows GRUB_IMAGE to be created and bootia32.efi to be taken from
the image directory.

(From OE-Core master rev: b9778975db410b8cd01ef6854c7cd3ea22a0b5b7)

(From OE-Core rev: b14ba80d22f4892a4d9269dbf982b2f88109da98)

Signed-off-by: Joao Henrique Ferreira de Freitas <joaohf@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:09 +00:00
Jason Wessel
2e127d4775 syslinux.bbclass: Fix default serial port string
The default value of SYSLINUX_SERIAL_TTY is not correct.

It should be console=ttyS0,115200, otherwise the boot string generated in
the syslinux menus for the serial choice is not correct.  The kernel
boot parameters will get set to:

/vmlinuz initrd=/initrd LABEL=boot root=/dev/ram0 ttyS0,115200

Note that the above is missing the "console="

The default value will now work the same as the value found in
grub-efi.bbclass.
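
In recipe/conf terms, the corrected default described above is:

    SYSLINUX_SERIAL_TTY = "console=ttyS0,115200"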

(From OE-Core master rev: fbc864241933c6f40814f47e7a85dd71ce255393)

(From OE-Core rev: 042e0f59d058c5caad4a3b8e8f08ec724d5c837b)

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:09 +00:00
Seth Bollinger
664eb433d9 ncurses-terminfo: Remove bashism from basic terminfo installation
The vtX terminfo files aren't being copied on systems where bash isn't
the default shell (Debian, etc.).  I removed the bash-specific syntax
so the files are properly copied on these systems.

(From OE-Core master rev: edd7d53c6149b27d5636a458db91650c8c400612)

(From OE-Core rev: ccb25cd9df9ed4b84b74170336cfde5c1dc3d013)

Signed-off-by: Seth Bollinger <seth.boll@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:09 +00:00
Søren Holm
9be52e5f42 e2fsprogs: Escape filenames in populate-extfs.sh
Without this patch, filenames containing spaces do not make it into the
final ext2/3/4 filesystem.

[YOCTO #5401]

(From OE-Core master rev: 1350b461ed0c9d4afa1ab909a5b1ff60fb160c97)

(From OE-Core rev: 62d01b10508f86ca825ebc24773dfa2b485b4292)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:08 +00:00
Joe Slater
9134463e4d vala.bbclass: add dependency on vala
This class points the inheritor, if it is a target,
to directories in the target sysroot, so we want to
be sure the .vapi files are there.

(From OE-Core master rev: 2da8bbd47686f54efeec521d521f176f6aeb8d39)

(From OE-Core rev: e68307c3fb0a02efe5e0789a58686f343c842707)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:08 +00:00
Richard Purdie
11d9127ac5 cross-canadian: Handle powerpc linux verses linux-gnuspe
PowerPC toolchains can use the OS "linux" or "linux-gnuspe". This
patch links them together so the one cross-canadian toolchain can support
both.

GCC_FOR_TARGET is set for the GCC recipe as otherwise configure
can pick up an incorrect value.

[YOCTO #5354]

(From OE-Core master rev: a1d6331238982b0c5d39b0a18794f6654b00d46a)

(From OE-Core rev: b114b045687776ebc5c805e1f19758e7e37eebf2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:08 +00:00
Andreas Müller
52bc1c3dd7 systemd-compat-units: run-postinsts fix script link
in

commit fe039170236080291c0220476a5809774f82ee5c
Author: Muhammad Shakeel <muhammad_shakeel@mentor.com>
Date:   Wed Oct 2 10:55:32 2013 +0000

    systemd-compat-units: Use correct run-postinsts script link

    OE-Core commit 75a14923da has moved
    run-postinsts script execution from S98 to S99 in rcS.d. run-postinsts.service
    should check for this script and run it on first boot rather than
    S98run-postinsts, which is for opkg/dpkg.

the link was corrected, but the mentioned commit is not available. Instead of
reverting, we use the same variable as opkg for init-script ordering and drop
a note in case somebody wants to change the default.

(From OE-Core master rev: 7aabc9408fb382f0ae39f9932b6d9ac391528b76)

(From OE-Core rev: 91a51e10392c0f802d6c329ad2b11ae82c55f171)

Signed-off-by: Andreas Müller <schnitzeltony@googlemail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:07 +00:00
Paul Eggleton
9f52614bd6 classes/ptest: fix quoting
BitBake currently allows using the same quotes outside and inside the
value, but it isn't really correct, looks odd, and might stop working in
the future.

(From OE-Core master rev: 0af9cf31851896276a219170001047406f45de50)

(From OE-Core rev: 4051c98ebfe79614d1284b38442d5a3290bb4ad1)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:07 +00:00
Gary Thomas
f8496bd09e core-image-basic.bb: Allow user extensions
Allow the user to provide additional packages to this image.
This lets core-image-basic behave like all other core-image*
recipes (which do support CORE_IMAGE_EXTRA_INSTALL), as well
as match the documentation, which suggests this as the way to
extend any core-image* image.

v2 - drop redundant setting of CORE_IMAGE_EXTRA_INSTALL

(From OE-Core master rev: 5faabf398819d40b55c46bc83ae03942d115024b)

(From OE-Core rev: cdda59d2850b163ced4dfc847c4e114f8592ae52)

Signed-off-by: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:07 +00:00
Qiang Chen
85bacab3a4 openssl: create package for openssl configuration file
* Add the openssl-conf package to the list of packages to
  be created.  This package contains the openssl.cnf file
  which is used by both the openssl executable in the
  openssl package and the libcrypto library.

* This is to avoid messages like:
    WARNING: can't open config file: /usr/lib/ssl/openssl.cnf

* When running "openssl req" to request and generate a certificate
  the command will fail without the openssl.cnf file being
  installed on the target system.

* Made this package an RRECOMMENDS for libcrypto since:
    * libcrypto is an RDEPENDS for the openssl package
    * Users can specify a configuration file at another
      location so it is not strictly required and many
      commands will work without it (with warnings)

(From OE-Core master rev: 5c3ec044838e23539f9fe4cc74da4db2e5b59166)

(From OE-Core rev: bf6ef555caf92b2a013f15d258bf40997247a150)

Signed-off-by: Chase Maupin <Chase.Maupin@ti.com>
Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:07 +00:00
Lu Chong
dc769690ec dbus: fix status command printing no messages
/etc/init.d/dbus-1 uses "set -e" to make the script exit when any command fails.
As a result, "/etc/init.d/dbus-1 status" cannot display its messages when dbus is stopped.

(From OE-Core master rev: 9844b5e2a544b2c2f76aac497c3a2cdfcc46577c)

(From OE-Core rev: 486d5d7e891df3fb2ce8d975d13625b11334814b)

Signed-off-by: Lu Chong <Chong.Lu@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:06 +00:00
Roy Li
a30af0dc1f libcap: fix CAP_LAST_CAP
(From OE-Core master rev: 9032c10cc882a96acdfd0739f090d121ab625a18)

(From OE-Core rev: c191cb79019482a5c6a404e02184bae40ff9f84a)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:06 +00:00
Konrad Scherer
348944397c pixbufcache.bbclass: gdk-pixbuf-query-loaders depends on libz
ldd sysroots/x86_64-linux/usr/bin/gdk-pixbuf-query-loaders.real
<snip>
libz.so.1 => /sysroots/x86_64-linux/usr/bin/../../usr/lib/libz.so.1 (0x00007fab55393000)

If zlib-native has not been unpacked, host libz is used which can fail.

(From OE-Core master rev: 8422c759ae674856aaaee176eab5a395a620443c)

(From OE-Core rev: b9ae15b45768d25c44a9484b4a156a15da548bd9)

Signed-off-by: Konrad Scherer <Konrad.Scherer@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:06 +00:00
Jonathan Liu
2421773925 qt4: add upstream QTBUG-34218/QTBUG-34234 misaligned selection patch
(From OE-Core master rev: 3af8f2e0697a9523d3b505ba4c48eca35f6de3a9)

(From OE-Core rev: 438032411ea5d71a33b7b030752193867a90b9f7)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:06 +00:00
Tom Zanussi
7d675f1967 wic: Remove selinux_check()
This seems to be an obsolete check - we don't have any problems with
image creation under selinux, so remove it.

(From OE-Core master rev: 12e81eceab9e0a483765566ad3791b14718195b5)

(From OE-Core rev: 28830d3988047023d3b47bcaf320f5efa4428da6)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:05 +00:00
Konrad Scherer
a975c8205c pigz: Add pigz to buildtools tarball
When using the tar executable in the buildtools, tar will execute
gzip. If this happens before zlib-native is built, then the gzip
on the host will be used and can fail if the libz in the buildtools
is not compatible. Adding pigz to the build tools avoids this host
contamination.

(From OE-Core master rev: af6424e8c2bf3a938fddabc669c0956d68964ed0)

(From OE-Core rev: dd9945dd510d6e7764721bec5573591a0ad69ba4)

Signed-off-by: Konrad Scherer <Konrad.Scherer@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:05 +00:00
Hongxu Jia
309ef190da debugedit: fix segmentation fault when a file's .bss offset is large
When ELF_C_RDWR_MMAP was used, elf_begin invoked mmap() to map the file
into memory. When the file's .bss offset was large, elf_update
calculated the file size via __elf64_updatenull_wrlock and the size was
enlarged.

In this situation, elf_update invoked ftruncate to enlarge the file,
and the in-memory size (elf->maximum_size) was also incorrectly updated.
This caused a segmentation fault in elf_end, which invoked munmap with
the enlarged file size as the length rather than the length of the mmap.

Before the above operations, invoking elf_begin/elf_update/elf_end
with ELF_C_RDWR and ELF_F_LAYOUT set to enlarge the file first makes
sure the file is safe for the subsequent elf operations.

[YOCTO #5356]
https://bugzilla.redhat.com/show_bug.cgi?id=1019707
https://bugzilla.redhat.com/show_bug.cgi?id=1020842

(From OE-Core master rev: 35c8b1ac7c3b1e4209b1e30d1dbd1a457286b97b)

(From OE-Core rev: a82322a982dc97ebc95f3fc45f9ad98bed947ad9)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:05 +00:00
Andreas Oberritter
431c80b6d0 cogl-1.0: depend on virtual/mesa
- Wayland support depends on wayland-egl, which is provided by mesa.

(From OE-Core master rev: a1a379b3c9728a06b086b4c1f06f663f54d7d37d)

(From OE-Core rev: 8c75d888a5e4cf7fc2c92df730d80224f5ffa99a)

Signed-off-by: Andreas Oberritter <obi@opendreambox.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:04 +00:00
Roy Li
17ce831843 pseudo: fix library path in FILES_${PN}
libpseudo.so is always installed into ${prefix}/lib/, not ${libdir},
so fix these paths; and skip the libdir WARN_QA check to ignore the
warning on 64-bit and multilib-enabled systems

(From OE-Core master rev: 47c7850c025994685aa1811057f4f9a5f0f2a3ae)

(From OE-Core rev: 1929d4ef17652a3eb825942041908d679773244f)

Signed-off-by: Roy Li <rongqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:04 +00:00
Khem Raj
44c3b68d5e pulseaudio: Fix build break on armeb
There is no need for += when using append, hence it is removed and a
leading space added appropriately

(From OE-Core master rev: fb9cde0fc1a54b073edf5979f4cb7dc297b790fd)

(From OE-Core rev: 586db07af01da9d7772b7088a20886b506e09422)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:04 +00:00
Laurentiu Palcu
8432caedbd meta-toolchain-qt: put QT_CONF_PATH in environment script
This will allow apps using the QLibraryInfo class to find qt.conf.

[YOCTO #5339]

(From OE-Core master rev: fffa4c37c49b169f663d28612b9251819cef9577)

(From OE-Core rev: 8dc1d62c5dc161ba606cebe27f6fe900699646f7)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:04 +00:00
Laurentiu Palcu
e92c430010 nativesdk-qt4-tools: create qt.conf file
When installing the SDK to a location other than the default one, qmake
will look for libraries, headers, etc. in the default location. That's
because the paths are hard-coded in the binary itself. Luckily, Qt
allows this to be overridden using a qt.conf file installed in the same
directory as the application executable. However, we already have a
patch that allows qt.conf to be installed in another place and its
location to be read from the QT_CONF_PATH environment variable.

Hence, install qt.conf in ${sysconfdir}. This will allow other apps that
use the QLibraryInfo class to find it.

[YOCTO #5339]

(From OE-Core master rev: 23f88695683a8e428375a8ccb6be935347a8768c)

(From OE-Core rev: 0e71811a1c3285a71cbca682cb62c1563d3e74ee)

Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:03 +00:00
Konrad Scherer
8e556d30ae cracklib: cracklib-native should not depend on zlib
(From OE-Core master rev: 89d7d46947d9bb8c7bf568c65e52d5bbe159027f)

(From OE-Core rev: 7c6504c6c059ba6b37f88143801ac8137267cf83)

Signed-off-by: Konrad Scherer <Konrad.Scherer@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:03 +00:00
Qiang Chen
470f005624 nfs-utils: nfsserver restart should kill and recreate nfsd kernel threads
An nfsserver restart without killing the kernel threads worked when portmap
was the rpc publishing process and portmap was restarted.
Now that rpcbind replaces portmap, an nfsserver restart done this way does not
work after an rpcbind restart.

Steps to reproduce:
1). Make ext3 filesystem image on local host.
cd /root
dd if=/dev/zero of=test bs=1024K count=50
mkfs.ext3 -F test

2). runqemu qemux86-64
mkdir /mnt/wrtest
mount -t ext3 -o loop test /mnt/wrtest
echo "/mnt/wrtest *(sync,rw,no_subtree_check,no_root_squash)" > /etc/exports
/etc/init.d/rpcbind restart
/etc/init.d/nfsserver restart
showmount -e localhost
mkdir wrtest
mount -t nfs localhost:/mnt/wrtest wrtest

mount: mounting localhost:/mnt/wrtest on wrtest failed: Connection refused

Modifying the nfsserver script to kill and restart kernel threads on
restart makes the problem go away and is consistent with current
RHEL/SUSE and Ubuntu/Debian mechanisms of handling the nfs server.

(From OE-Core master rev: 1a96b8d7dfc490fc61bbd470a8b09065750cd563)

(From OE-Core rev: d1b5e944656807c9db9cbe5d08d7b4bd8daeb826)

Signed-off-by: Rich Dubielzig <rich.dubielzig@windriver.com>
Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:03 +00:00
Qiang Chen
a1927d6c45 nfs-utils: Stop rpc.statd correctly
An incorrect process name in the nfsserver initscript prevented
rpc.statd from being shut down.

root@qemux86-64:~# /etc/init.d/nfsserver start
creating NFS state directory: done
starting 8 nfsd kernel threads: done
starting mountd: done
starting statd: done

root@qemux86-64:~# ps | grep rpc.statd
  650 root     10532 S    /usr/sbin/rpc.statd
  654 root      4720 S    grep rpc.statd

root@qemux86-64:~# /etc/init.d/nfsserver stop
stopping statd: done
stopping mountd: done
stopping nfsd: done

root@qemux86-64:~# ps | grep rpc.statd
  650 root     10532 S    /usr/sbin/rpc.statd
  662 root      4720 S    grep rpc.statd

As this daemon drops a pid file, simply use that instead.
Also add some initialization checks so the daemons are not
left partially started in the absence of kernel nfsd support.
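
For illustration only — the real fix is in the nfsserver shell initscript —
a small Python sketch of the pid-file approach, assuming the conventional
/var/run/rpc.statd.pid location:

import os
import signal

def stop_statd(pidfile="/var/run/rpc.statd.pid"):
    # Signal the exact process recorded at startup instead of matching
    # on a process name that may not be what ps reports.
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False   # not running, or stale/corrupt pid file
    os.kill(pid, signal.SIGTERM)
    return True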

(From OE-Core master rev: 37e70a28e9cfc773bd70f09d7129295ce891ae18)

(From OE-Core rev: 5f22bad97a3bacb87cefb54ffd785d359c58aec0)

Signed-off-by: Andy Ross <andy.ross@windriver.com>
Signed-off-by: Qiang Chen <qiang.chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:03 +00:00
Wenzong Fan
5dacc78659 perf: flag __SANE_USERSPACE_TYPES__ to include int-ll64.h for mips64
For the same reason as powerpc64, mips64 also needs the flag.

(From OE-Core master rev: d6f3cb0d71c3b6739365f085b6d5a5e20f329fa5)

(From OE-Core rev: 9c4b604ea0d81bc1de224b35ae160f87be6bcf7b)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:02 +00:00
Konrad Scherer
76e0ce8229 relocate_sdk.py: Allow script to work with Python 2.4 and 3.
Python 2.4 does not support the 'b' string literal or the
keyword 'as' in exception handling. Python 3 does not accept
the old method of exception handling and defaults to unicode.
The b() function converts strings to bytes on Python 3 and
using sys.exc_info() avoids the exception handling syntax.
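
A minimal sketch of the two compatibility tricks described above
(illustrative, not the exact relocate_sdk.py code):

import sys

if sys.version_info[0] < 3:
    def b(s):
        return s                    # Python 2: str is already bytes
else:
    def b(s):
        return s.encode("latin-1")  # Python 3: convert str to bytes

try:
    f = open("some-sdk-binary", "r+b")   # hypothetical file name
except IOError:
    # "except IOError as e" is invalid on Python 2.4 and
    # "except IOError, e" is invalid on Python 3, so fetch the active
    # exception via sys.exc_info() instead.
    e = sys.exc_info()[1]
    sys.stderr.write("cannot open file: %s\n" % e)
    sys.exit(1)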

(From OE-Core master rev: 1e2ec5f576f167673d7980737826987fefdc74a9)

(From OE-Core rev: 343127f2f81be337596d3eacbbc92278e82ce574)

Signed-off-by: Konrad Scherer <Konrad.Scherer@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:02 +00:00
Chen Qi
fc9f1c8168 runqemu-extract-sdk: add --numeric-owner option to tar command
If the same username exists on both the target and the build host, but
the uids differ, and we start the target via NFS, then the uid for the
user will be incorrect on the target.

For example, if postfix's uid on the host is 119 and on the target is 1024,
then if we start the target via NFS, the uid for postfix will be 119.

The root cause is that when we use runqemu-extract-sdk to generate
the NFS rootfs for later use, the tar command will respect the username
instead of the uid. So if the PSEUDO_PASSWD environment variable is not
set correctly, the host /etc/passwd will be used, resulting in wrong uids.

The situation for gid is completely analogous to that of uid.

It's almost impossible for runqemu-extract-sdk to guess the correct
location of the needed password file merely based on the target tarball
name.

This patch solves the problem by adding the '--numeric-owner' option
to the tar command so that the numeric uid/gid will be used when extracting
the tarball with runqemu-extract-sdk. In this situation, we'll always get
the correct uid/gid after extracting the tarball.
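
runqemu-extract-sdk itself is a shell script; purely as an illustration,
the extraction step with the new option might look like this in Python
(paths are placeholders):

import subprocess

rootfs_tarball = "core-image-sato-qemux86-64.tar.bz2"   # placeholder
rootfs_dir = "/srv/nfs/qemux86-64"                       # placeholder

# --numeric-owner makes tar apply the numeric uid/gid stored in the
# archive instead of resolving names against the host's /etc/passwd,
# so a host account with the same name but a different uid cannot
# skew ownership in the extracted NFS rootfs.
subprocess.check_call(
    ["tar", "--numeric-owner", "-xjf", rootfs_tarball, "-C", rootfs_dir])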

[YOCTO #5364]

(From OE-Core master rev: acce6ff1a77cfd29e3868faa89b120becb58bbbf)

(From OE-Core rev: c2baac739a521d1edd408a24f6b1fece8f755218)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:02 +00:00
Ross Burton
34f6dcf06c packagegroup-base: if zeroconf DISTRO_FEATURE enabled, add libnss-mdns
mDNS name resolution is a key part of zeroconf, so if the DISTRO_FEATURE is enabled
then install libnss-mdns.

(From OE-Core master rev: ef2ee68778be8e5336cd33ab6551bce1d56047b6)

(From OE-Core rev: fb72bebc999aef7bb006a7ba273eaa96376a4016)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:02 +00:00
Christopher Larson
263b6aa58e base.bbclass: fix nondeterministic PACKAGECONFIG processing order
The PACKAGECONFIG flags were iterated over using dict.items(), but this
returns the items in an undefined order. As this order determines the
EXTRA_OECONF append order, we can get EXTRA_OECONF values which are
functionally equivalent but whose contents differ, resulting in shared
state archives not being reused when they should be.
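
A standalone sketch of the failure mode and the usual fix, sorting the
items before building the string (not the exact base.bbclass code):

flags = {"gtk": "--enable-gtk", "x11": "--enable-x11", "opengl": "--enable-opengl"}

# dict iteration order was undefined on the Python versions in use at the
# time, so this can produce a differently ordered (but functionally
# equivalent) string from one parse to the next:
nondeterministic = " ".join(opt for name, opt in flags.items())

# Iterating in a sorted, well-defined order makes the string, and hence
# any checksum derived from it, stable across runs:
deterministic = " ".join(opt for name, opt in sorted(flags.items()))

print(deterministic)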

(From OE-Core master rev: 843a5dd8f8f0461e286d9fdb3ba55205b4275f88)

(From OE-Core rev: 73f77c195e1af3df594eecce2cab47ee963d5c2e)

Signed-off-by: Christopher Larson <kergoth@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:01 +00:00
Jackie Huang
c2be524370 midori: exclude from self-hosted for mips64
midori depends on webkit-gtk, which cannot be built for mips64.

[YOCTO #5141]

(From OE-Core master rev: abadeb934d4f41288c4fde6a4e5df2b124326326)

(From OE-Core rev: 672cd50a39a697b53c337a79c34fab05b48e0920)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:01 +00:00
Michaël Burtin
265b3423b1 initramfs-framework: fix test that filter backup module files
Test that filter backup module files (files starting with ~)
was accidentally reversed in e6039e6e3b98d6ab91252a5012d76279b1fac6e8,
this patch restore initial behavior.

(From OE-Core master rev: b2eb846ee12989add7a7ca8bbf45f293a3a7e56d)

(From OE-Core rev: 00ff958fec53e55cc475c1b31fb9813d97872ceb)

Signed-off-by: Michaël Burtin <michael.burtin@innotis.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:01 +00:00
Tom Zanussi
34739daf7c wic: Update and generalize pseudo setup for rootfs generation
Remove unnecessary pseudo exports i.e. PSEUDO_DISABLED and move the
setup to the top-level prepare_rootfs().

(From OE-Core master rev: 4bf11cd7d7301da664c098c8a0ae9c0294a6f423)

(From OE-Core rev: 8d661f578276c70e1671edfc810aa4dad97de970)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:01 +00:00
Tom Zanussi
f29fde153b wic: Make find_binary_path() more user-friendly
find_binary_path() is useful, but if the binary isn't found, it prints
a stacktrace and a less-than-useful message.  Users complain when they
get stacktraces for things they can act on, so remove the stacktrace
and tell the user what the problem is.

(From OE-Core master rev: 0d9eef0eaa267500e8eedab8b72ddf24eb0516db)

(From OE-Core rev: 8a17195c9be38815e9ae431bcb18f66a4ad2cdcb)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:01 +00:00
Tom Zanussi
b768094125 wic: Remove binary dependencies
Current functionality doesn't make use of kpartx, mount, or unmount,
and we use native mkswap, so remove the binary checks for those.

(From OE-Core master rev: 76293d2d6bbdeacd7b34f39f26fb97c3d7f9496f)

(From OE-Core rev: 0ed290b81e1c3b781170033f50db01ddfff14784)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:00 +00:00
Tom Zanussi
45bdd32eeb wic: Remove rpmmisc call from livecd
We don't currently use LiveCDImageCreator, but when initialized via the
plugin interface it makes calls to rpmmisc module functions, which we
don't want a dependency on.

To make it (and LiveUSBImageCreator) happy, we give it the dummy
"i386" value for now.

(From OE-Core master rev: e10ae516cfc10900ed12e84c743e3a7127372135)

(From OE-Core rev: a3cc57cf3116c997ec11dd3cbfa3b0d615e5dabc)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:00 +00:00
Tom Zanussi
72db8ccf90 wic: Remove rpm and grabber dependencies from BaseImageCreator
BaseImageCreator is a base class for DirectImageCreator and others,
and imports rpm and grabber (which imports rpm).

The various plugins e.g. DirectPlugin import the creators and
therefore these dependencies, which manifest at run-time as e.g.:

  Warning: Failed to load plugin imager/direct_plugin: No module named
    rpm

(From OE-Core master rev: a1e24c4a5f5771b7ad35e53ce96c6d82212e4d7e)

(From OE-Core rev: f5587ec7e7f925b321b9bfe6923be0879dadb2aa)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 17:00:00 +00:00
Tom Zanussi
8d158a7ecf wic: remove rpm warning code from BackendPlugin
We don't currently use rpm functionality, so we don't need to silence
rpm warnings.

(From OE-Core master rev: dd3cc03d4fa3347f8ef2db23d8ff98bdbdb73baa)

(From OE-Core rev: 8827b46d8cb4d6918451bd1c3c278465d8796e4b)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:59 +00:00
Tom Zanussi
d3f1ed6967 wic: Remove dependency on myurlgrab module
myurlgrab is in grabber, which imports rpm.  For current
functionality, we don't need to grab urls or import rpm, so remove the
dependency.

(From OE-Core master rev: 429ecc2afa499df35a1ae9da6f92b88c6f2d8d11)

(From OE-Core rev: 1ef27c9dfa28f65458750c0afb2e136c4b79b226)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:59 +00:00
Tom Zanussi
ff4aaa8508 wic: Remove dependency on rpmmisc
rpmmisc imports rpm and contains misc rpm utilities related to
packaging and determining arches based on the packaging.  We should
never run across this in the initial version of wic, so remove the
dependency.

(From OE-Core master rev: 2d59b6eeb418cf23eef3e32b43354b4ab16a40b9)

(From OE-Core rev: b16a9de9f5eb2d252ee263a4b2c66c74ff4ff78f)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:59 +00:00
Tom Zanussi
7a5e6c4ec9 wic: eliminate module checks
We're removing all external dependencies including rpm and urlgrabber,
so we don't need this check.

(From OE-Core master rev: 429c0d72b9b8bfed34832e283be92996e074b9ac)

(From OE-Core rev: 898285fbe172e0e77f0986be8f5187f86bfca95b)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:59 +00:00
Tom Zanussi
3bcd49dddf wic: Remove dependency on rt_util module
rt_util contains bootstrap_mic(), which imports rpm and other things
we don't need because we don't do bootstrap i.e. runtime (set in
wic.conf) is always set to 'native', which means use what's on the
local host.

bootstrap mode is for downloading and installing rpms that wic needs,
which we may want to implement later; for now, we just want to use
what's local.

(From OE-Core master rev: 3103f0cb908eced7b751128c2bba898d12017c80)

(From OE-Core rev: f338d696b7f865bdb10020f806c69c78e8ed6625)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:58 +00:00
Tom Zanussi
9e27361d81 wic: add pseudo to the populate-extfs step
Without this, files in the generated filesystem pick up the wrong
ownership.

(From OE-Core master rev: 24a6b1324965080fef6c363edcb37768090eebea)

(From OE-Core rev: ecdc5422bd3e3efbf46c3d12d0a95d9f736a6d27)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:58 +00:00
Tom Zanussi
10cdde99a8 wic: Initialize return values in find_artifacts()
If one of these isn't found, it won't be initialized and will throw an
UnboundLocalError.
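
A minimal sketch of the pattern (hypothetical directory names, not the
actual wic code):

import os

def find_artifacts(staging_dir):
    # Initialise every return value up front; a name that never gets
    # assigned below would otherwise raise UnboundLocalError at return.
    rootfs_dir = kernel_dir = bootimg_dir = None
    for entry in os.listdir(staging_dir):
        path = os.path.join(staging_dir, entry)
        if entry == "rootfs":
            rootfs_dir = path
        elif entry == "kernel":
            kernel_dir = path
        elif entry == "bootimg":
            bootimg_dir = path
    return rootfs_dir, kernel_dir, bootimg_dir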

(From OE-Core master rev: ce6c3ec0e5f4822e85b8f957e9e31fa9de438c55)

(From OE-Core rev: 9bf229c7b66bda4fb52f53d6f7f6b86f10dc8681)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:58 +00:00
Tom Zanussi
9ee975be7b wic: Check for the existence/correctness of build artifacts
If a user uses the -e option and specifies a machine that hasn't been
built or uses the wrong .wks script for the build artifacts pointed to
by the current machine, we should point that out for obvious cases.

(From OE-Core master rev: a5b9ccadc0603c70c65f74fa386995c585a951db)

(From OE-Core rev: 54e0b423852580c62b43c5ead00f431b971bea1a)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:57 +00:00
Jack Mitchell
cb916ff4e5 crond: remove UID check in init script
This init script fails when the default shell is busybox sh, because
busybox sh doesn't set the UID variable. No other init scripts
in oecore feel the need to check the UID, so just remove the check.

(From OE-Core master rev: dd6a45536043af34c05a699e468cef4845f7affd)

(From OE-Core rev: 11385321982411d3d1c2aa663b2b2d195c74f353)

Signed-off-by: Jack Mitchell <jmitchell@cbnl.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:57 +00:00
Ting Liu
dba3eff8e2 perf: flag __SANE_USERSPACE_TYPES__ to include int-ll64.h for powerpc64
PPC64 uses long long for u64 in the kernel, but powerpc's asm/types.h
prevents 64-bit userland from seeing this definition, instead defaulting
to u64 == long in userspace.
Perf wants LL64, so flag __SANE_USERSPACE_TYPES__ to get int-ll64.h.

Fix the below issue:
| tests/attr.c:71:4: error: format '%llu' expects argument of type 'long
long unsigned int', but argument 6 has type '__u64' [-Werror=format=]
| tests/attr.c:80:7: error: format '%llu' expects argument of type 'long
long unsigned int', but argument 4 has type '__u64' [-Werror=format=]
|        attr->type, attr->config, fd) < 0) {
|        ^

(From OE-Core master rev: e0b56f7ed84da4f71f448548e15d5a75e8eada6e)

(From OE-Core rev: 7d55375ddc1a928e77d5687a8e7b36eecb91c594)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:56 +00:00
Jeffrey C Honig
404e0cfafd initscripts: ensure checkroot.sh runs before anything writes to the filesystem
If bootlogd was configured to write to a log file on the root file system,
checkroot.sh was not able to change the rootfs to read-only because
bootlogd was started earlier and had a file descriptor open.  Lowering
the order of checkroot.sh ensures that the volatile filesystem is set
up before anything writes to it.

(From OE-Core master rev: 13c9bc143f6861517970dafdc7e7a45740d0933d)

(From OE-Core rev: e54864e6db9fddb8ec98e323fe6da88e02ff070a)

Signed-off-by: Jeffrey C Honig <jeffrey.honig@windriver.com>
Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:56 +00:00
Ross Burton
7612323dba cairo: add explicit dependency on zlib
In normal use this is pulled in through libpng, but it's exposed in the headers
of cairo-pdf and cairo-ps and a build from sstate can end up without zlib being
present.

(From OE-Core master rev: 8413bf1ce95802bff032b4592ca1aa4728d62cbf)

(From OE-Core rev: 252896140bb107315e58bbd82b9a73528da9b860)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:56 +00:00
Saul Wold
52e0863e51 kmod: Add patch to fix separate build dir of ptest
(From OE-Core master rev: 68322eadd1d9456e606b375c2f4181725784c292)

(From OE-Core rev: ed94da7c1fbb39eb7f28d50d61d2d254892a5df8)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:56 +00:00
Richard Purdie
e07b344870 cross-canadian: Fix SHLIBSDIR when using multilib
Both nativesdk and multilib use MLPREFIX for their particular purposes. When
we have both set, cross-canadian can confuse SHLIBSDIR. This forces the
variable to the correct value for cross-canadian, fixing toolchains in
multilib builds.

[YOCTO #5333]

(From OE-Core master rev: 0633b93086a7de7226f4dc6ca403ee116bc58669)

(From OE-Core rev: 003ddbccb260cdbfc6c1ff9e576a0584b0f25378)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:55 +00:00
Richard Purdie
0866fc91de nativesdk: Fix pn check
There are missing brackets in the check, meaning MLPREFIX doesn't
get set for nativesdk-qemu-helper when it should be.
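
A generic illustration of why missing brackets change the result
(made-up values, not the actual check):

a, b, c = True, False, False

# "and" binds tighter than "or", so without brackets this is a or (b and c):
without_brackets = a or b and c        # True

# The intended grouping needs explicit brackets:
with_brackets = (a or b) and c         # False

print(without_brackets, with_brackets)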

(From OE-Core master rev: 5011f4bc8a418d0616d2936b60ecb7ca156632a3)

(From OE-Core rev: 48b4bb2f98a1b7b97688071af4c2d3ee390b6ab3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:55 +00:00
Darren Hart
e395ace671 wic: Check for external modules
Since eight unique files import rpm, perform a check at the top level
for the existence of the rpm module and print a sensible error message if it
is not present. This may be removed later if some of the core rpm
dependencies are removed from the mic libs.

Also check for urlgrabber.

This avoids a backtrace in the event the modules are not installed,
which can be very off-putting to would-be users.
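
A sketch of what such a top-level check can look like (module names taken
from the message above; the real check's wording and structure may differ):

import sys

def check_modules(required=("rpm", "urlgrabber")):
    missing = []
    for name in required:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    if missing:
        # A short, actionable message instead of a backtrace.
        sys.stderr.write("wic requires the following Python modules: %s\n"
                         "Please install them and try again.\n"
                         % ", ".join(missing))
        sys.exit(1)

check_modules()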

(From OE-Core master rev: b11bfadba20c1f39a63e396e605a8316c2ed2a94)

(From OE-Core rev: 93b1d54bf377703cd0c7debce21b07b4fbf4e5a5)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:55 +00:00
Darren Hart
30df3a1e7b wic: Force lba off for FAT16 partitions
If fat16 is specified to the mkpart parted command, parted will
default to setting the lba flag, which causes certain EFI firmware
to fail to detect the filesystem. lba shouldn't be necessary for
FAT16 filesystems anyway, so explicitly disable it.

(From OE-Core master rev: 30442d432e203e655b7d40b93f7307f475de1614)

(From OE-Core rev: e437cd5ccaa44798107a6aa5177b1b867c94dfc3)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:55 +00:00
Saul Wold
a253acca45 busybox: Add depmod (adds 2240 bytes)
This will allow packages that update kernel modules to run correctly

(From OE-Core master rev: 72c23255cc88b5e2cd6f783231e6f42bf5190df7)

(From OE-Core rev: bce32a41c7e700f49aaa0aacbeb6c83d91a8b255)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:54 +00:00
Tom Zanussi
dad9e78f09 wic: check passed-in build artifact directories
Make sure they exist - complain if they don't.

(From OE-Core master rev: 24a585e3fd0ea0166991a6aa834bba15bcd8295d)

(From OE-Core rev: 4565512015d257e492d7fbb9e28c6086111d892d)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:54 +00:00
Tom Zanussi
aeae568600 wic: check for build artifacts
wic needs to be given one form of build artifacts or another -
complain if the user doesn't do that.

(From OE-Core master rev: 9116a17efd42447f276000927d0c2ea63776865b)

(From OE-Core rev: f9b9b920dc071b0798cbdaa150314a0126d2afad)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:54 +00:00
Ross Burton
042fec8fc3 pango: fix x11 DISTRO_FEATURE check
--without-x was removed in 1.32.0, so the correct option is now --without-xft.

Also remove --disable-glibtest, as configure.ac doesn't invoke that test.

(From OE-Core master rev: e806f4ff404515f38318b6fed7d2b614c2138da6)

(From OE-Core rev: 147eac9794bd815f7f10002beacbe47302352ca4)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:54 +00:00
Jonathan Liu
b5c4bad0bf qt4: add upstream QTBUG-31579 patch for QPainter regression
(From OE-Core master rev: a5afc67cbfc32beb3be10392bf9788cfc3610ab1)

(From OE-Core rev: 18e49ebf8d5cc43eca417cd51cabae175f6b5ea4)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:53 +00:00
Ross Burton
ce15d9a1e5 xorg-lib-common: fix malloc0returnsnull usage
Xorg libraries that use Xmalloc need to know if malloc(0) returns NULL or not,
and as this is a runtime test it can't be checked for.  Previously
xorg-lib-common declared that malloc(0) did return NULL, but this isn't true for
eglibc (only uclibc).

Instead, use libc-specific overrides to pass the relevant option.

(ideally the check would use the autoconf cache so this can be stored in the site files)

(From OE-Core master rev: e628c8aba0189de30de2833882b9999ff3b6547a)

(From OE-Core rev: 93e084ae8bcae8e7bde5a7e52a274a39dc9ba509)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:53 +00:00
Muhammad Shakeel
2ec209c0f4 sysvinit-inittab: Fix getting tty device name from SERIAL_CONSOLES entries
Currently the part after "tty" in the device name goes into the label along with
everything after that part. For example, if SERIAL_CONSOLES="115200;vt100;ttyS0"
then label=S0, but if SERIAL_CONSOLES="115200;ttyS0;vt100" then label=S0;vt100.
If SERIAL_CONSOLES="..;ttyX;..", the part after 'X' should also be trimmed.
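
The parsing rule, restated as a small Python sketch (the actual fix is in
the sysvinit-inittab recipe's shell/sed handling; this is only illustrative):

def labels(serial_consoles):
    result = []
    for entry in serial_consoles.split():
        # Only the field naming the tty device should feed the label,
        # and only its part after "tty"; anything after the next ';'
        # must be trimmed away.
        fields = entry.split(";")
        tty = next((f for f in fields if f.startswith("tty")), None)
        if tty is not None:
            result.append(tty[len("tty"):])
    return result

print(labels("115200;ttyS0;vt100 115200;vt100;ttyS1"))   # ['S0', 'S1']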

(From OE-Core master rev: b00b9ae5693e04cacd0843c12a529e7f3dc501ed)

(From OE-Core rev: e807747f28798b080bf13a65293ab04c8d83834c)

Signed-off-by: Muhammad Shakeel <muhammad_shakeel@mentor.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:53 +00:00
Ting Liu
63d6c644a3 kmod-native: use bswap to work on older Linux hosts
We can't use the htobe* and be*toh functions because they are not
available on older versions of glibc, for example the one shipped on CentOS 5.5.

Change to directly calling bswap_* as defined in byteswap.h.

(From OE-Core master rev: 63edb6b9a8bdf2f5541edd618f2f598185e37223)

(From OE-Core rev: 2e791639625746037626e348cfac4ad1109bb9ae)

Signed-off-by: Ting Liu <b28495@freescale.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:52 +00:00
Chen Qi
fb354a3a1c device_table-minimal.txt: change group of /dev/hda back to disk
The group for /dev/hda should be disk instead of root.
The group ID for /dev/hda was 6, but it was modified to be root by
accident in the following commit.

	 commit c5ef0294a9b8d178896a47c9f5d6e3dd6797e343
	     device_table-minimal.txt: use user/group names instead of uid/gid

This patch changes it back.

(From OE-Core master rev: 5c5db302400894c2bb1f4052d0f120738589c128)

(From OE-Core rev: aaaba8d087bd4186cd2e309b962d62f240f96b98)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-12 16:59:52 +00:00
Scott Rifenbark
446df6fef3 ref-manual: Added new SDK_DEPLOY variable description.
(From yocto-docs rev: a49e265083467e79e12f729d7f23c5ffc5a0c22f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:35 +00:00
Scott Rifenbark
8c930e5217 ref-manual: Added more info to distrodata class.
Saul Wold provided me with more detail on this class, which
I added.

(From yocto-docs rev: 73c79f0460511039d5497bfd303cdc88f96e2fcc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:34 +00:00
Scott Rifenbark
9f8def21d6 ref-manual: Added more info to gnomebase class.
Ross Burton provided an expanded explanation.

(From yocto-docs rev: 3d867803d03c22499c0be3a27a2ffac3b0a5854a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:34 +00:00
Scott Rifenbark
60dfe7203c ref-manual: Fixed some broken cross-references.
Three broken cross-references to the "qmake_base" class existed
in the variables chapter.  The actual class "qmake_base" is
listed as part of the "qmake*" class so there is no tag taking
a user to the exact "qmake_base" class.  Changed the tag to
"ref-classes-qmake*".

(From yocto-docs rev: 96c92e7a26afefbce16a723a071fa203d70ce547)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:34 +00:00
Scott Rifenbark
ab3d7e13c1 ref-manual: Fixed double word error in EXTRA_QMAKEVARS_PRE variable.
(From yocto-docs rev: 855de90887750bc5811dfb1da51edb342f831c67)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:34 +00:00
Scott Rifenbark
a14a29635d ref-manual: Fixed double word in EXTRA_QMAKEVARS_POST variable.
(From yocto-docs rev: 575c611a8c78e6f9c9eeba143c12425b8f648396)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:33 +00:00
Scott Rifenbark
87f3c55236 ref-manual: Edits to QMAKE_PROFILES variable.
(From yocto-docs rev: 781f18ca3948ead3f3153f593f63e06f4a7501af)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:33 +00:00
Scott Rifenbark
b49a8ed1c9 ref-manual: Fixed typo in PACKAGE_CLASSES variable.
(From yocto-docs rev: 106471227a8d9ad073207f601a78b059d7f10361)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:33 +00:00
Scott Rifenbark
886b7e79c4 ref-manual: Edits to the IMAGE_PKGTYPE variable.
Added a note at the bottom clearly stating TAR files are never
used as a substitute for RPM, DEB, or IPK files when generating
your image or SDK.

(From yocto-docs rev: 5bc911ce2d13a19744aa800133a5ca4c8fba98b2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:32 +00:00
Scott Rifenbark
c666e21b7a ref-manual: Edits to PIXBUF_PACKAGES variable.
(From yocto-docs rev: 538c84715b1487133a8d8a2522f8e8112696dffc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:32 +00:00
Scott Rifenbark
f8c0f01c75 ref-manual: Edits to the icecc class.
The text used "packages" rather than "recipes."  This is a
tragic result of historical naming mistakes early in the project.

(From yocto-docs rev: 844224d76e0daf40709fe85659c75afcdeb26280)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:32 +00:00
Scott Rifenbark
16e24ab95c ref-manual: Edits to GTKIMMODULES_PACKAGES variable.
(From yocto-docs rev: 16e8905a65ac3a5fa4654492603d5db4eb02f173)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:32 +00:00
Scott Rifenbark
948f2fac07 ref-manual: Edits to FONT_PACKAGES variable.
(From yocto-docs rev: 4f0cd82ee07f88a572560bdbc8edfe5450329f43)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:32 +00:00
Scott Rifenbark
cfff1d696d ref-manual: Edits to the terminal class.
(From yocto-docs rev: 33052ca2c5feaf353f2348c3c8d0ac6b4cd770db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:31 +00:00
Scott Rifenbark
bca79ff0b2 ref-manual: Edits to the systemd class.
Removed the formatting of the second instance of the
term "systemd".

(From yocto-docs rev: 0975dd2649f028ac6553ad41c9f32fa412197aea)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:31 +00:00
Scott Rifenbark
82abb2325d ref-manual: Edits to the setuptools class.
Some minor re-wordings here.

(From yocto-docs rev: 73129359d2f1425f68f5801525b42f3efda206d0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:31 +00:00
Scott Rifenbark
374e582d63 ref-manual: Edits to the rootfs* class.
I removed the rootfs_deb, rootfs_ipk, and rootfs_rpm classes
altogether and opted to briefly describe their purposes in the
rootfs* class section.  I also am not linking to the IMAGE_FSTYPES
variable but am rather linking over to the PACKAGE_CLASSES
variable.

(From yocto-docs rev: 64038a23403fd274ce4488c239bfa0b9a2e56b16)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:30 +00:00
Scott Rifenbark
9b8a8e463a ref-manual: Edits to pixbufcache and pythonnative classes.
(From yocto-docs rev: 6d2f07b23ea39e1068be97c45d334168424bccf1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:30 +00:00
Scott Rifenbark
3cb58633dc ref-manual: Edits to the PIXBUF_PACKAGES variable.
(From yocto-docs rev: 67be82a0c3952c0bf155abf5ad728bd5b1b95fc9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:30 +00:00
Scott Rifenbark
acf18e52dc ref-manual: Edits to the package_tar class.
Turns out you can specify this using the PACKAGE_CLASSES
variable but you better not list it first.

(From yocto-docs rev: 7fdf5f9fcf026503cdedd906cd7b8c814affa81d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:30 +00:00
Scott Rifenbark
a8aa6bd679 ref-manual: Updated the PACKAGE_CLASSES variable description.
This was pathetic and needed updating.

(From yocto-docs rev: 6dd88094a07da56efc752dea0b447f65b25d276a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:29 +00:00
Scott Rifenbark
08436ae88d ref-manual: Edits to PACKAGE_CLASSES variable.
I discovered some issues with this description.  Should be
referencing the Build Directory for where the local.conf file
is and not the Source Directory.  Also, the link to the
package.bbclass section was mis-titled.

(From yocto-docs rev: ddfc7454dfb653ad3e90f995716e55977fe44311)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:29 +00:00
Scott Rifenbark
a51875b08c ref-manual: Edits to SYSTEMD_PACKAGES and SYSTEMD_SERVICES.
(From yocto-docs rev: 6b7c22c1d931a49e39433f7d0165e9e2ba3cbaf2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:29 +00:00
Scott Rifenbark
63f321197d ref-manual: Edits to the EXTRA_OESCONS variable.
(From yocto-docs rev: b051e62e552263588f0d832c6ca8df3448f9b0b8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:29 +00:00
Scott Rifenbark
b6d41355e1 ref-manual: Edits to the QMAKE_PROFILES variable.
(From yocto-docs rev: b4a4516029e98d7fd189b884299232516c871839)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:28 +00:00
Scott Rifenbark
ae0766bf0f ref-manual: Edits to the EXTRA_QMAKEVARS_* variables.
(From yocto-docs rev: cc3d87348414a8b89361b2976f6e23f20f68ee3e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:28 +00:00
Scott Rifenbark
7628cdbc8f ref-manual: Edits to the SDK_DIR and SDK_OUTPUT variables.
(From yocto-docs rev: a150ee32fda9222f749f666ebf5f3db30165be2e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:28 +00:00
Scott Rifenbark
03d27e12dc ref-manual: Edits to IMAGE_PKGTYPE variable.
(From yocto-docs rev: 7bcdae64b69067728dc89aebced40d18d7c973fb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:28 +00:00
Scott Rifenbark
6e148e742f ref-manual: More edits to PIXBUF_PACKAGES variable.
Sorting out who assumes what.

(From yocto-docs rev: 31d7f9c92c411c639a5cd05c25e73bbc0fc265f8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:27 +00:00
Scott Rifenbark
22c0fde9a8 ref-manual: Edits to PIXBUF_PACKAGES variable.
typo fixed.

(From yocto-docs rev: 9d32225c86d904bcd62b738f88d69594f959982a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:27 +00:00
Scott Rifenbark
4647dee052 ref-manual: Edits to INHERIT_DISTRO variable.
(From yocto-docs rev: 7be16706e3515bd74d293da0bea2a8d720555b19)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:27 +00:00
Scott Rifenbark
6cf5b608ef ref-manual: Updates to ICECC_USER_PACKAGE_BL and _WL variable.
(From yocto-docs rev: 50cf335a1b07f97b27aae1332e4cf27a0531218c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:26 +00:00
Scott Rifenbark
122657979e ref-manual: Edits to icecc class and removal of three variables.
Fixed the links in the icecc class that linked into ICECC_CC,
ICECC_CXX, and ICECC_VERSION so they don't link anymore but rather
give a very brief explanation of what the variable does.

Removed the above three variables from the variable glossary.
These are not BitBake variables and we should not document them.

(From yocto-docs rev: 67842df1a4a01f7bdd924027c66faf8f30c426ce)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:26 +00:00
Scott Rifenbark
84c76124a7 ref-manual: Minor edits to GTKIMMODULES_PACKAGES variable.
(From yocto-docs rev: c02782ece3ac8d422c9870ac237e7fd2e7b076ee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:26 +00:00
Scott Rifenbark
d29e57a017 ref-manual: Minor edits to FONT_PACKAGES variable.
(From yocto-docs rev: 92e085cdebd470c27e740c8a16ea7ffc98207633)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:26 +00:00
Scott Rifenbark
79d9919005 ref-manual: Edits to SYSTEMD* variables.
Minor changes to the SYSTEMD_PACKAGES, SYSTEMD_SERVICE, and
SYSTEMD_AUTO_ENABLE variables.

(From yocto-docs rev: 39a0a959d77ce7c6cb19564445d198b64d4cb9b6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:26 +00:00
Scott Rifenbark
49735c68e0 ref-manual: Added some references to variables in the useradd class.
Added links to USERADD_PACKAGES, USERADD_PARAM, GROUPADD_PARAM,
and GROUPMEMS_PARAM

(From yocto-docs rev: d0a594b474662fe5daee6fb72b6190541a40ef14)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:25 +00:00
Scott Rifenbark
7b0e4fb067 ref-manual: Review edits from "I" through "Z" classes.
icecc - added link to Icecream.

images - added some reference links to the end of the section.

logging - added closing ")" character.

nativesdk - corrected the mis-copied recipe name.

own-mirrors - fixed the class name so it was not "ownmirrors".

package - minor tweak to indicate class. Also spelled Berkeley
          correctly.

package_deb, package_ipk, and package_rpm - dumped a note and
         mentioned that you need PACKAGE_CLASSES to enable the
         class.

package_tar - noted that the recipe inheriting the tar class is what
         does the trick here.

pixbufcache - minor edits

populate_sdk - minor edits

prserv - edits to tell how it is enabled.

pythonnative - re-worded it.

rootfs* - reworded.

rootfs_deb, rootfs_ipk, and rootfs_rpm - Brand new.

systemd - reworded.

terminal - rewording

useradd - reworded

(From yocto-docs rev: 110c192499a0e349b317e51aeca1a391c35785fc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:25 +00:00
Scott Rifenbark
179e55d619 ref-manual: Fixed link back to own-mirrors class.
(From yocto-docs rev: 761849633b541ce9f26ac27b76fb84ec47e48d70)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:25 +00:00
Scott Rifenbark
3658c0aa62 ref-manual: Changed Gobject to GObject.
(From yocto-docs rev: 9ef07e0ad3b201807c7548495f4d4d53360da21d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:25 +00:00
Scott Rifenbark
ffeec6c997 ref-manual: Edits to classes "I" through "K"
Added opening reason for the icecc class. Also cleared up the
sentence describing ICECC_PATH.

Minor fix to image class. Also added some reference
links.

Minor fix to image-mklibs class.  Also combined the rogue
sentence stating that the class is enabled.

Same fix to rogue sentence in image-prelink class.

Fixed "insserve" into "insserv" throughout.

Added many links to some missing classes in the kernel
class.  A subsequent commit will actually add the class
documentation.

(From yocto-docs rev: 2260032cdbfd04dbb445d72341a2d2c87ce72545)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:24 +00:00
Scott Rifenbark
f4efad340e ref-manual: Review edits to classes "E" through "G"
extrausers - Changed the note to try to describe that the
  change is still specific to an image recipe but is not
  tied to other individual recipes.

fontcache - Minor fix.

gtk-icon-cache - fixed capitalization issue.

gtk-immodules-cache - Minor fix

gzipnative - Minor fix.

(From yocto-docs rev: bc25a62de2345b7ec0b2b019ff399921ec6edd0a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:24 +00:00
Scott Rifenbark
9881936f9c ref-manual: Added new DEPLOYDIR variable.
(From yocto-docs rev: 7edc293539ffc630f05e4a63f6efff4e4e9930b7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:24 +00:00
Scott Rifenbark
936b8180e3 ref-manual: Edits to bugzilla and deploy class.
Fixed a link in the deploy class.
Re-wrote the bugzilla class to be clearer.

(From yocto-docs rev: ce4f72644e9da987adb3873c91c14a8e5e8df555)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:24 +00:00
Scott Rifenbark
51362f7858 ref-manual: Review edits to the "C" and "D" classes.
Modifications to ccache, chrpath, clutter, cross, cross-canadian,
crosssdk, and debian classes.

Added a new variable to the glossary for LEAD_SONAME.

(From yocto-docs rev: bfaea19935ed694edee1dc03be37c7dcbebea47f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:23 +00:00
Scott Rifenbark
54a2ed579d ref-manual: Review edits to classes A through B
Applied some review edits from Paul Eggleton for the following
classes:

  allarch
  base
  bin_package
  bugzilla
  buildstats

(From yocto-docs rev: 7caa9de2ffd2024e9ad560c58425bd16fbca2790)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:23 +00:00
Scott Rifenbark
8f2c3eca29 ref-manual: Scrubbed the comment list of undocumented classes.
(From yocto-docs rev: 7d5ebc1ef98970f147319f77918eefa4bdb5040f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:23 +00:00
Scott Rifenbark
d7c3160f58 ref-manual: Removed the section "Other Classes"
This section was there because we did not document all the
classes in meta/classes.  Now that we do, I am dumping it.

(From yocto-docs rev: 94fb23c80770af2fd6b52e0f255a5d24020a3ef5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:23 +00:00
Scott Rifenbark
a3519ba573 ref-manual: Re-ordered externalsrc class into the "C"'s
(From yocto-docs rev: c2f04331ee5e1681e2c00f94f025bcf01506d129)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:22 +00:00
Scott Rifenbark
bfb6cd84f1 ref-manual: Edits to externalsrc class.
(From yocto-docs rev: 08c18fa4fc354972e9898bd3eb10e9aa6b96532d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:22 +00:00
Scott Rifenbark
010d18250d ref-manual: Edits to useradd class.
(From yocto-docs rev: 250b955536f4f8ea4369c08265670b59113dcbfc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:22 +00:00
Scott Rifenbark
2ac1c72943 ref-manual: Edits to update-rc.d class.
(From yocto-docs rev: 34d011d3e658a86a28339cfe47f1c93e15eb8b26)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:21 +00:00
Scott Rifenbark
ba1af319de ref-manual: Edits to update-alternatives class.
Also had to update four instances in the variables glossary where
links to the section for the class became misnamed due to
stripping out the excess stuff from the class section heading.

(From yocto-docs rev: 2b7c245021d16a22fb5a2e14e26b2f9b0082a669)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:21 +00:00
Scott Rifenbark
aa12d3bc48 ref-manual: Re-ordered the classes that start with "U"
(From yocto-docs rev: d1fa21f55d85804934a52e93704f6b5fd46acce7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:21 +00:00
Scott Rifenbark
bb0fca8a62 ref-manual: Edits to testimage class.
(From yocto-docs rev: a44377ed2f02127eb6d807721e6c37f53322f89c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:21 +00:00
Scott Rifenbark
ae82bc0e0c ref-manual: Re-ordered classes that start with "T"
(From yocto-docs rev: 9a6e1c68b7c4fb22d0724dd108735b9dd1e7f721)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:20 +00:00
Scott Rifenbark
5f85440bd5 ref-manual: Edits to siteinfo class.
(From yocto-docs rev: 6be23e9efacf2025d1ca337b98dc52bd4ab65477)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:20 +00:00
Scott Rifenbark
9407e718c1 ref-manual: Edits to sanity class.
(From yocto-docs rev: 517b4348988b88de3cb9b8031e25175772e26f64)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:20 +00:00
Scott Rifenbark
32b62a9c96 ref-manual: Re-ordered classes that start with "S".
(From yocto-docs rev: bfe1c17bd023cd7865cf535803bb7b02741ba16c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:19 +00:00
Scott Rifenbark
c7521f627b ref-manual: Edits to rootfs* class.
(From yocto-docs rev: 3c57e1239054fae3e3c72ec7b49bcea95a3f1313)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:19 +00:00
Scott Rifenbark
d34ba8b56b ref-manual: Edits to rm_work class.
(From yocto-docs rev: 89de6352a441f85532516e08883ba5d4620e0210)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:19 +00:00
Scott Rifenbark
2db4b6565d ref-manual: Edits to classes-qt4* classes.
(From yocto-docs rev: 04e74c7dcdae2dbd01660db6fc5f6745a97b1130)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:19 +00:00
Scott Rifenbark
e9131310ab ref-manual: Edits to qmake* classes.
(From yocto-docs rev: ea57971614689ea70a9ba74da70435b03e6d6158)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:18 +00:00
Scott Rifenbark
28a93a5ce9 ref-manual: Edits to pkgconfig class.
(From yocto-docs rev: 4b53e19d830dba8c0b85c5f91a5fa67d908ac6cb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:18 +00:00
Scott Rifenbark
ab8665879d ref-manual: Edits to populate-sdk classes.
(From yocto-docs rev: 7ba49d9db2bed6d3da12f695ad9e2be4b138663d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:18 +00:00
Scott Rifenbark
573baf28e0 ref-manual: Edits to packageinfo class.
(From yocto-docs rev: ab28f1f9596b1defbf4dd1ae8ad3b90304e8260d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:18 +00:00
Scott Rifenbark
91a59ebc32 ref-manual: Edits to packagegroup class.
Also had to fix a couple of links in the "migration" chapter
because the section heading of the packagegroup.bbclass section
changed.

(From yocto-docs rev: 412b8325a13e5bf31d057efb5f6dbb54f8a1aba0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:17 +00:00
Scott Rifenbark
0259a40e31 ref-manual: Edits to packagedata class.
(From yocto-docs rev: efcfe0a447998d2fcf2dd4d741bf0ba493edf7de)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:17 +00:00
Scott Rifenbark
055910579c ref-manual: Edits to package_tar class.
(From yocto-docs rev: eb7ef27b9fb12c2cf13873a61b482ff8a1c9c49c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:17 +00:00
Scott Rifenbark
997aae29fa ref-manual: Edits to package_rpm class.
(From yocto-docs rev: d82beb24b8c3bba9385c1504d44da25c093e7e6c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:16 +00:00
Scott Rifenbark
335ac80ff6 ref-manual: Edits to package_ipk class.
(From yocto-docs rev: 8e29ca5e0bb057426206bfdae399ed8e75b4776c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:16 +00:00
Scott Rifenbark
9af84aa1d8 ref-manual: Edits to package_deb class.
(From yocto-docs rev: e6bd83296e42b361e57ebbdbfaf2f5c513b778e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:16 +00:00
Scott Rifenbark
05d9a74462 ref-manual: Edits to package class.
(From yocto-docs rev: 8af4bd18ba08642b46a4aa82bb13ebe2742a6eee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:16 +00:00
Scott Rifenbark
b6de0c9a4e ref-manual: Re-ordered classes that start with "P".
(From yocto-docs rev: e441adc9ed731d0dcd636e2204751c095504e6db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:15 +00:00
Scott Rifenbark
6b7704d524 ref-manual: Edits to ownmirrors class.
(From yocto-docs rev: 46dfb31482ff7348974079323aeabbd06b058986)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:15 +00:00
Scott Rifenbark
cc7b1b32d9 ref-manual: Edits to oelint class.
(From yocto-docs rev: 30d54e4d73fda5cbfcd5dc0b010f2843e5842f55)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:15 +00:00
Scott Rifenbark
d3edf6ad3d ref-manual: Edits to nativesdk class.
(From yocto-docs rev: e869b89259c8f46a5dd3c27542cbf3a1d2267c81)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:14 +00:00
Scott Rifenbark
6a44b35f4e ref-manual: Edits to native class.
(From yocto-docs rev: 3f026aa351be7c39ea0927df3599012abadf9364)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:14 +00:00
Scott Rifenbark
4652dbc172 ref-manual: Edits to multilib classes.
(From yocto-docs rev: 4654aadfd8ea822e9f67546d583d9fd402687244)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:14 +00:00
Scott Rifenbark
24020c0c30 ref-manual: Edits to mirrors class.
(From yocto-docs rev: cf7edc4452a5e04f5b1088b0aa57610530845035)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:14 +00:00
Scott Rifenbark
b9663ce80f ref-manual: Edits to mime class.
(From yocto-docs rev: 770be89bc3ce279e7ab40cf411d623c8c2005095)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:13 +00:00
Scott Rifenbark
1eb1d8fcab ref-manual: Edits to metadata_scm class.
(From yocto-docs rev: f7a131ad4d27e73221e30dda631dae8426a240c9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:13 +00:00
Scott Rifenbark
5d8d5f33d3 ref-manual: Edits to meta class.
(From yocto-docs rev: 4425e5ddc0c45e8fabaa8d962535977669a1e2c9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:13 +00:00
Scott Rifenbark
eeb2085b99 ref-manual: Edits to logging class.
(From yocto-docs rev: 4e2dd82488b818c9c5283d5489364825d6ed6681)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:12 +00:00
Scott Rifenbark
043b9b13f1 ref-manual: Edits to linux-kernel-base class.
(From yocto-docs rev: 0c3170c4dcb62853fa9925486905ee587c070496)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:11 +00:00
Scott Rifenbark
9f5d1873fe ref-manual: Edits to license class.
(From yocto-docs rev: 54c4e798de3325eade7b303cac726ad9a7364e05)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:11 +00:00
Scott Rifenbark
35bc7dc25e ref-manual: Edits to lib_package class.
(From yocto-docs rev: 2e2846611a441a7a4c5c71ce9b5b58912d47dc09)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:11 +00:00
Scott Rifenbark
1f0853c7ea ref-manual: Re-ordered classes that start with "L".
(From yocto-docs rev: a6aa120b1c15743a85b8730545cd5fe89c783d2f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:10 +00:00
Scott Rifenbark
be168f33f9 ref-manual: Edits to kernel-yocto class.
(From yocto-docs rev: 062b42e7822d1f787f7295a21f7eebb920db3434)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:10 +00:00
Scott Rifenbark
ebb06c8c96 ref-manuals: Edits to kernel-module-split class.
(From yocto-docs rev: 5880ecb8bf70944f257832c2b692ce4724bc6a85)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:10 +00:00
Scott Rifenbark
edce571ec0 ref-manual: Edits to kernel-arch class.
(From yocto-docs rev: 0d49f85cf3a039a60b65efffda4dc2b4f98ddf48)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:09 +00:00
Scott Rifenbark
8aec3db151 ref-manual: Edits to kernel class.
(From yocto-docs rev: eeb8e5a6f49590fa17bca6aa142b76d8917a8731)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:09 +00:00
Scott Rifenbark
68e8bc8449 ref-manual: Re-ordered classes that start with "K".
(From yocto-docs rev: 1df0aeede8b35020771274d4ae7dfa096da3672f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:09 +00:00
Scott Rifenbark
5555e4f25b ref-manual: Edits to insserve class.
(From yocto-docs rev: 7c93717d8f4d917d9fddc288e262dbd2f7d15e49)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:09 +00:00
Scott Rifenbark
1c5c2db321 ref-manual: Edits to insane class.
(From yocto-docs rev: d69e06c401290882aed97189a9beeecb7accf452)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:08 +00:00
Scott Rifenbark
4fb27cd111 ref-manual: Edits to image-vmdk class.
(From yocto-docs rev: 0c8ce94bb9e821f82c96a2fee20afcb0ca064f49)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:08 +00:00
Scott Rifenbark
7269f3fa48 ref-manual: Edits to the image-swab class.
(From yocto-docs rev: a4f0555177bb94d64ed973222a6e7234e4663120)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:08 +00:00
Scott Rifenbark
df9b61ea9f ref-manual: Edits to image-prelink class.
(From yocto-docs rev: 3660fcbd1a3973a39329cb3450aa7c43dbddb6dc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:08 +00:00
Scott Rifenbark
93e1f76b81 ref-manuals: Edits to image-mklibs class.
(From yocto-docs rev: 0bf0fc9147a7c0a515ebdc417ae74e0665c6abaa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:08 +00:00
Scott Rifenbark
5daa76fc34 ref-manual: Edits to image-live class.
(From yocto-docs rev: 1287a4d88f7b72e73d7c6cbe5308c64aa26841a5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:07 +00:00
Scott Rifenbark
a4696d42f1 ref-manual: Edits to image_types_uboot class.
(From yocto-docs rev: e0617a2adb198431b40d14548743486c1d12f4f6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:07 +00:00
Scott Rifenbark
f0bfc9c4f9 ref-manual: Edits to image_types class.
(From yocto-docs rev: 1e65670b7efcf4189f05190799d4a1740da249f5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:07 +00:00
Scott Rifenbark
96f46c3347 ref-manual: Edits to image class.
(From yocto-docs rev: 45a9e03953ad664fafc234e289d0fad47e5d8c08)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:07 +00:00
Scott Rifenbark
ae9f8623a9 ref-manual: Edits to icecc class and re-order of "I" classes.
(From yocto-docs rev: 643ccc4ac495e0dc88de20012d4843d2a402b507)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:06 +00:00
Scott Rifenbark
d22c00e9fc ref-manual: Edits to gzipnative class.
(From yocto-docs rev: 4294792a9ef590945a2adc91ddd75456e7ea80b2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:05 +00:00
Scott Rifenbark
5f9c15fc85 ref-manual: Edits to gtk-immodules-cache class.
(From yocto-docs rev: b6c042b34a302b659369c005b07f6f476d04f695)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:05 +00:00
Scott Rifenbark
8081a47a04 ref-manual: Edits to gtk-icon-cache class.
(From yocto-docs rev: 6e0136e432211fc47d49f9634fc21ec1b328cac8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:05 +00:00
Scott Rifenbark
c38098f91c ref-manual: Edits to gtk-doc class.
(From yocto-docs rev: ba43afe951e31e6ddac1df247451ba79d1c6e615)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:05 +00:00
Scott Rifenbark
9814463a40 ref-manual: Edits to gsettings class.
(From yocto-docs rev: 4eab42554d1409f58cf8498e40eb32b4d91677d3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:05 +00:00
Scott Rifenbark
a860737bba ref-manual: Edits to grub-efi class.
(From yocto-docs rev: 99b858ea434c087de900ed99b8615dc060fd1850)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:04 +00:00
Scott Rifenbark
bb795b7830 ref-manual: Edits to gnomebase class.
(From yocto-docs rev: ed4380f015ecb554e8efc3f6250a53053c2c327c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:04 +00:00
Scott Rifenbark
0b8dfacddf ref-manual: Edits to gnome class.
(From yocto-docs rev: 8c845bdef9380da400be7b0bfbe5167876b0d5aa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:04 +00:00
Scott Rifenbark
7b9874a0c1 ref-manual: Edits to gettext class.
(From yocto-docs rev: 22689cbe58a157de5c1ebc38e0859491373b73fe)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:04 +00:00
Scott Rifenbark
fac71e1909 ref-manual: Edits to gconf class.
(From yocto-docs rev: c2643fd1075817392bb324a0e92466e7995b2e2f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:03 +00:00
Scott Rifenbark
f1751a9676 ref-manual: Edits to fontcache class.
(From yocto-docs rev: 48ddff0c81ac4992247401115049acd7e72603db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:03 +00:00
Scott Rifenbark
3b69a8e19d ref-manual: Edits to extrausers class.
(From yocto-docs rev: cbda2153184c9b8f16ca407b269a284d03fcdff9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:03 +00:00
Scott Rifenbark
86a44bbd99 ref-manual: Edits to distutils class.
(From yocto-docs rev: 97c5d2a4e5e8b1e6495cd79a47bd2e2155cd246c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:03 +00:00
Scott Rifenbark
66946cc026 ref-manual: Edits to distrodata class.
(From yocto-docs rev: e744c7ac57c22dec67f612e3e0c1669cc4df447b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:02 +00:00
Scott Rifenbark
213540317d ref-manual: Edits to distro_features_check class.
(From yocto-docs rev: 058a4a38e1001ec5b8405a1f2eb3a15837750605)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:02 +00:00
Scott Rifenbark
7042d72d04 ref-manual: Edits to devshell class.
(From yocto-docs rev: 2688ebef1dac71d64083bf0725c1c5e8382dfa6c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:02 +00:00
Scott Rifenbark
4526a1747c ref-manual: Edits to deploy class.
(From yocto-docs rev: aec5ec0b95bc2d4ba5a00d070282cbef460df877)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:02 +00:00
Scott Rifenbark
1856110b92 ref-manual: Edits to debian class.
(From yocto-docs rev: e17c4f495c1b3d029bdfe820a8a8448f2453a9f7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:02 +00:00
Scott Rifenbark
8f3ed72b06 ref-manual: Re-ordered the classes that start with "D".
(From yocto-docs rev: e65278b9476681eb6f9c662993e110ee4422ebc7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:01 +00:00
Scott Rifenbark
35dcd38287 ref-manual: Edits to crosssdk class.
(From yocto-docs rev: 24a16ca53617091e6201a94f2fae7c9fdbc9b4ee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:01 +00:00
Scott Rifenbark
bf29c7002e ref-manual: Edits to cross-canadian class.
(From yocto-docs rev: 5e4fe91ae999cd4f0a84e64ebc0e1931f44a88f2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:01 +00:00
Scott Rifenbark
44c15f901e ref-manual: Edits to cross class.
(From yocto-docs rev: 852f477f180968dac3eca7b56876616e7479918c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:01 +00:00
Scott Rifenbark
67b2ba861d ref-manual: Edits to core-image class.
(From yocto-docs rev: 6ad4edae4ccbe071a52a142b062e5d2d95745eb0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:00 +00:00
Scott Rifenbark
cdae5044de ref-manual: Edits to the copyleft_compliance class.
(From yocto-docs rev: deeca7276527aaac6cde01ff0440b39543b3ce3f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:00 +00:00
Scott Rifenbark
5d966deca7 ref-manual: Edits to cmake class.
(From yocto-docs rev: aa397e791e02f3a5e3d9d2df40f298cbf4f26fe2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:18:00 +00:00
Scott Rifenbark
ca3c32e7c7 ref-manual: Edits to clutter class.
(From yocto-docs rev: 3869e568ee53a895971c1f70e8225eda91544c8d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:59 +00:00
Scott Rifenbark
86c35d0d39 ref-manual: Edits to chrpath class.
(From yocto-docs rev: b8909479a6396890f66ecf0a70a03563ae0311c2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:59 +00:00
Scott Rifenbark
38f25fae3f ref-manual: Edits to ccache class.
(From yocto-docs rev: 2e94db5e642ddcb7d81526b2e36ee135c825b1c2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:59 +00:00
Scott Rifenbark
74ff808e1c ref-manual: Edits to the cpan class.
(From yocto-docs rev: c74aad5cb61fbe0d030d0a7d6a0c3b74154b02a0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:59 +00:00
Scott Rifenbark
764a60c698 ref-manual: Re-ordered the classes that start with "C"
(From yocto-docs rev: 3edf122c8323e2a9c60b3f95f8f93a429d8a6116)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:59 +00:00
Scott Rifenbark
5beaa27ee9 ref-manual: Edits to buildstats class.
(From yocto-docs rev: 7f868e766f64515d3276a013d3750fdbba74f077)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:58 +00:00
Scott Rifenbark
f6f4e5d335 ref-manual: Edits to buildhistory class.
(From yocto-docs rev: 8046633f00b376db502db98f78c772e94622fb6e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:58 +00:00
Scott Rifenbark
2580b6ab92 ref-manual: Edits to bugzilla class.
(From yocto-docs rev: 5891f1a3b1c54f10221d2c7c98fb76dcdafef052)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:58 +00:00
Scott Rifenbark
4348ee8eab ref-manual: Edits to bootimg class.
(From yocto-docs rev: 7717bd85ee6458d7279fc2010f9ac32dc1134c81)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:57 +00:00
Scott Rifenbark
a6a30b16e6 ref-manual: Edits to boot-directdisk class.
(From yocto-docs rev: 9b6fcdb32fe53d99c5c32f475f6944da7556aca7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:57 +00:00
Scott Rifenbark
d717d7865d ref-manual: Edits to the blacklist class.
(From yocto-docs rev: 8092f0cbc8e511f38e9636a963f35647de4ad9e0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:57 +00:00
Scott Rifenbark
ef2bf39bc4 ref-manual: Edits to binconfig class.
(From yocto-docs rev: 28128f24690150c648f52714a5b92df307a24fec)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:57 +00:00
Scott Rifenbark
4d3a713cd2 ref-manual: Edits to the bin_package class.
(From yocto-docs rev: 5261cb2b993150e929edcf8a298b54d543b1826c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:56 +00:00
Scott Rifenbark
bbb9996af4 ref-manual: Edits to base class.
(From yocto-docs rev: 91e5eb73e223fc8fb95978d39a57b4790760daec)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:56 +00:00
Scott Rifenbark
0619ece9b3 ref-manual: Edits to autotools class.
(From yocto-docs rev: 91f058a605dc8f6ad38f73c5874fe754f6e1eed6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:56 +00:00
Scott Rifenbark
3ca6ae7cdf ref-manual: Edits to archive* classes.
(From yocto-docs rev: 2e989cbc9f994bde26ed9fda3e6d7ac011790f94)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:55 +00:00
Scott Rifenbark
ca29b1c14f ref-manual: Added waf class.
(From yocto-docs rev: 2ceb41523952d01eaa2b416e0999c8ea9fa5aca9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:55 +00:00
Scott Rifenbark
a7def726c9 ref-manual: Added vala class.
(From yocto-docs rev: be011365798ad1cd6a098e3d8d5b760be0a59dee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:55 +00:00
Scott Rifenbark
e319b70e07 ref-manual: Added utils class.
(From yocto-docs rev: ba1c91587a272bc285867b0d7d8c1fe8d73b809e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:55 +00:00
Scott Rifenbark
07c96ea0ea ref-manual: Added utility-tasks class.
(From yocto-docs rev: 8b3254f3df533954f77270740e04d7882565285c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:54 +00:00
Scott Rifenbark
d919228aba ref-manual: Added uboot-config class.
(From yocto-docs rev: 6dec694c5002bea025e8ee4db018276e2d5288ea)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:54 +00:00
Scott Rifenbark
7ead8dc752 ref-manual: Added typecheck class.
(From yocto-docs rev: de7d8ec25cd6e3d03392d4106f081a32e349d50d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:54 +00:00
Scott Rifenbark
1ac994c980 ref-manual: Added toolchain-scripts class.
(From yocto-docs rev: 0ae57bd21835a4903f783bac9772d52f53255e52)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:54 +00:00
Scott Rifenbark
b9a78f05f3 ref-manual: Added toaster class.
(From yocto-docs rev: ae813ee94ffe467e6b5a979010d3a5d5b7616f59)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:53 +00:00
Scott Rifenbark
eaf4e8e69a ref-manual: Added tinderclient class.
(From yocto-docs rev: f4bf1658de1c5edefa31644368a3376658949b23)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:53 +00:00
Scott Rifenbark
2a173ade62 ref-manual: Added terminal class.
(From yocto-docs rev: 756d44806d82963775e604dc01aeec5e9e4fe812)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:53 +00:00
Scott Rifenbark
99f680264d ref-manual: Added systemd class and three variables:
Variables:
  SYSTEMD_PACKAGES
  SYSTEMD_SERVICE
  SYSTEMD_AUTO_ENABLE

(From yocto-docs rev: 052db07a049fac9e41a1f4cbef0e6daa0c0a1d76)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:52 +00:00
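
A minimal recipe sketch using the three variables listed above; the package
and unit names are assumptions for illustration only:

# Sketch: install and enable a systemd unit shipped in the recipe's main package.
inherit systemd
SYSTEMD_PACKAGES = "${PN}"
SYSTEMD_SERVICE_${PN} = "myservice.service"
SYSTEMD_AUTO_ENABLE = "enable"
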
Scott Rifenbark
4dcaea2128 ref-manual: Added syslinux class and many variables.
Variables added:
  AUTO_SYSLINUXMENU
  SYSLINUX_OPTS
  SYSLINUX_SPLASH
  SYSLINUX_DEFAULT_CONSOLE
  SYSLINUX_SERIAL
  SYSLINUX_SERIAL_TTY

(From yocto-docs rev: 1c1bfe53f760b56db9a84a1d46dd2cd85168990f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:52 +00:00
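
A hedged sketch of how a machine or image configuration might set some of the
syslinux variables listed above; the serial and console values are assumptions
chosen for illustration, not settings asserted by the manual text:

# Sketch: enable the automatic boot menu and direct output to a serial port.
AUTO_SYSLINUXMENU = "1"
SYSLINUX_SERIAL = "0 115200"
SYSLINUX_SERIAL_TTY = "console=ttyS0,115200"
SYSLINUX_DEFAULT_CONSOLE = "console=tty0"
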
Scott Rifenbark
234ea4b9ba ref-manual: Added the staging class.
(From yocto-docs rev: 68b8c6156997f7910020e959ec47a35635287c3a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:52 +00:00
Scott Rifenbark
5f36c1f12c ref-manual: Added sstate class.
(From yocto-docs rev: 75f7f3103554f6138aab4fd02c8fefa0c61e164f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:52 +00:00
Scott Rifenbark
92d5851a62 ref-manual: Added spdx class.
(From yocto-docs rev: 1b7f65f6b48050190cc04bba07f3501ad119adbb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:52 +00:00
Scott Rifenbark
089cc7d093 ref-manual: Added siteconfig class.
(From yocto-docs rev: f7a2fad49f12033141ce7f84b94275f93413d7de)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:51 +00:00
Scott Rifenbark
45fa7b27dd ref-manual: Added sip class.
(From yocto-docs rev: 955439810f4b49728e4ce797479dcb79dae95ef7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:51 +00:00
Scott Rifenbark
9ff93dbc50 ref-manual: Added setuptools class and edits to distutils class.
(From yocto-docs rev: 5bd4a02a55f0be84787103f99c48f1759840ac7c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:51 +00:00
Scott Rifenbark
1f72508b96 ref-manual: Added sdl class.
(From yocto-docs rev: 60debc9853b82b01a1549178514374264ec3b51c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:51 +00:00
Scott Rifenbark
2d5b043311 ref-manual: Added scons class and EXTRA_OESCONS variable.
(From yocto-docs rev: 99faed264301dbe46f071733e5d7291c8e2e0444)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:50 +00:00
Scott Rifenbark
4edb41b9a7 ref-manual: Added relocatable class.
(From yocto-docs rev: 6a5c2d42f4eab1e27e71e85b4ef4c91d468dc553)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:50 +00:00
Scott Rifenbark
08c105e9ad ref-manual: Added qt4* classes.
(From yocto-docs rev: 468d08f309621045f3e049595ee9aa43baa04d25)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:50 +00:00
Scott Rifenbark
6f9fc443fb ref-manual: Added qmake* class and three variables.
Variables:
  EXTRA_QMAKEVARS_POST
  EXTRA_QMAKEVARS_PRE
  QMAKE_PROFILES

(From yocto-docs rev: 2b9a3d3fc639b859142bf3372334f69029e9f111)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:49 +00:00
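
A rough sketch of the qmake variables named above as they might appear in a
Qt4 recipe; the CONFIG, DEFINES, and .pro file values are made-up examples:

# Sketch: pass extra qmake variables around the project file and limit the profiles built.
inherit qmake2
EXTRA_QMAKEVARS_PRE = "CONFIG+=staticlib"
EXTRA_QMAKEVARS_POST = "DEFINES+=EXAMPLE_FEATURE"
QMAKE_PROFILES = "myproject.pro"
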
Scott Rifenbark
04823a7b7a ref-manual: Added qemu class.
(From yocto-docs rev: 85732913c2abb9b38857cec7abe12407e1be42ce)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:49 +00:00
Scott Rifenbark
aef5ef91b4 ref-manual: Added pythonnative class.
(From yocto-docs rev: 3cbf468049b21f2ccced662e339c6313c8eed9ee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:49 +00:00
Scott Rifenbark
fd7683133e ref-manual: Added python-dir class.
(From yocto-docs rev: 0f89ce33ccf3975abae018ec206d89caa552fbaa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:49 +00:00
Scott Rifenbark
b240103ad6 ref-manual: Added ptest class.
(From yocto-docs rev: f132704166b6b91a2a6db931622f8f1dc3347f3e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:48 +00:00
Scott Rifenbark
46aa9e7027 dev-manual: Updated the "Working with a PR Service" section.
I added a link to the PRSERV_HOST variable.  That variable is
now defined in the ref-manual variable glossary.

(From yocto-docs rev: ac6050263eba890ea0084f8f8444e967e10004f6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:48 +00:00
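
For readers following the PRSERV_HOST link above, a one-line local.conf sketch
of pointing the build at a locally started PR service; the host:port value is
the common local setup rather than a required one:

# Sketch: have BitBake start and use a local PR server automatically.
PRSERV_HOST = "localhost:0"
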
Scott Rifenbark
4a2da5eeb4 ref-manual: Added prserv class and PRSERV_HOST variable.
(From yocto-docs rev: 3a17d1709c5b5291dfae2a72b16e4c2dac561525)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:48 +00:00
Scott Rifenbark
4dd7a66a9d ref-manual: Added primport class.
(From yocto-docs rev: d11eaff5d616ffba8dff021388f90271726d7a2d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:47 +00:00
Scott Rifenbark
33f921cd5f ref-manual: Added prexport class.
(From yocto-docs rev: 471139942a937f095e55476d17ed75855e34dc10)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:47 +00:00
Scott Rifenbark
2a21166e3c ref-manual: Added populate_sdk_* class and some new variables.
Variables added:
  IMAGE_PKGTYPE
  SDK_OUTPUT
  SDK_DIR

(From yocto-docs rev: 90cd5ad1235a66117a86182bd6bf9bc75f09c424)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:47 +00:00
Scott Rifenbark
39eab266c2 ref-manual: Added populate_sdk class.
(From yocto-docs rev: 36a1d43d7deea639cf8c66408354d276e2f0aa89)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:47 +00:00
Scott Rifenbark
6bb0173d90 ref-manual: Added pixbufcache class and PIXBUF_PACKAGES variable.
(From yocto-docs rev: 88186f1a694b655d92f936935743759788e834f6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:46 +00:00
Scott Rifenbark
eb48cd7b56 ref-manual: Added the perlnative class.
(From yocto-docs rev: a55691268830692cdef40fd174e4028ca73ea871)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:46 +00:00
Scott Rifenbark
6a655bb023 ref-manual: Added the patch class.
(From yocto-docs rev: 359e4bd3b26ed45bc3dfe42339d99bfad7b3b1ac)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:46 +00:00
Scott Rifenbark
9e1170506d ref-manual: Edits to the packageinfo class.
Forgot the part about the class being automatically enabled
when using the Hob.

(From yocto-docs rev: 38eb3208adf18b75c9b441afe117900d9052f9c8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:45 +00:00
Scott Rifenbark
52d20f88dd ref-manual: Added the packageinfo class.
(From yocto-docs rev: 509958a080dcefc6ec44a98fe89e0a762c27d2dc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:45 +00:00
Scott Rifenbark
e8d94a6abf ref-manual: Improved on package* class.
Previously, we were documenting the "package*" class and lumping
the "package_deb", "package_rpm", and "package_ipk" classes in
that entry.  Really, we needed to break out the "package" class on
its own and create entries for the sub-classes that were being
bundled in there.  Additionally, we needed to document the
"package_tar" class.

(From yocto-docs rev: 0c263568c1c6c1700b0b87ed1a22fdc8e51f28c1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:45 +00:00
Scott Rifenbark
10146f0738 ref-manual: Added the packagedata class.
(From yocto-docs rev: b813c690089fd73a23347b4ad2be38cef683d754)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:45 +00:00
Scott Rifenbark
267c0a39cc ref-manual: Added the ownmirrors class and the SOURCE_MIRROR_URL variable.
(From yocto-docs rev: 8979676949e1c32ff71835b8d506e176a7b5c941)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:44 +00:00
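
A short configuration sketch of the class and variable added above; the mirror
URL is a placeholder, not a real server:

# Sketch: check a local source mirror before the upstream download locations.
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "http://example.com/my-source-mirror"
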
Scott Rifenbark
9a38035b4e ref-manual: Added the oelint class.
(From yocto-docs rev: 58570cd703abc3066e2c0925fbe8888390825906)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:44 +00:00
Scott Rifenbark
8933434d6e ref-manual: Added the nativesdk class.
(From yocto-docs rev: 17e7e5571cc5e60bed498844efa2f90b5c60e38e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:44 +00:00
Scott Rifenbark
c4f72e13fa ref-manual: Added the native class.
(From yocto-docs rev: 9be6f08f35e085302a0527d8eaa76062b898b247)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:43 +00:00
Scott Rifenbark
a3afb9301a ref-manual: Added the multilib* class.
(From yocto-docs rev: eb6484ddc6a79c9749877e1499b6f76e06ff0a47)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:43 +00:00
Scott Rifenbark
9055d86ed7 ref-manual: Added the mirrors class.
(From yocto-docs rev: 3dc1dd9f8d03f28ee7c1a7ac2c7827bd6d064f8b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:43 +00:00
Scott Rifenbark
8440bc7f0d ref-manual: Added the mime class.
(From yocto-docs rev: de672e576fe410a2fe51aa0e2d2e9df9e3acd0a9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:43 +00:00
Scott Rifenbark
fadb2c803f ref-manual: Added the metadata_scm class.
(From yocto-docs rev: 0464c95d6421d3d7547ed69f38697ae7212e682e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:42 +00:00
Scott Rifenbark
5169456348 ref-manual: Added the meta class.
(From yocto-docs rev: 05058a65e239f114efb1381a416008470f8a4a3b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:42 +00:00
Scott Rifenbark
5e8bd4311b ref-manual: Added the logging class.
(From yocto-docs rev: d2c2b7c50f316ab6bad30e6248d996fe0ff806fa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:42 +00:00
Scott Rifenbark
ad750cf7b3 ref-manual: Edits to license class added INHERIT_DISTRO variable.
(From yocto-docs rev: 55f45ce942ba7b4c398b37d4d8784ecf3d5b01e4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:41 +00:00
Scott Rifenbark
b57292b442 ref-manual: Added the license class.
(From yocto-docs rev: 6958ed69a82bef1305cd3c4d5257dc412254348b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:41 +00:00
Scott Rifenbark
f639451816 ref-manual: Added the linux-kernel-base class.
(From yocto-docs rev: 06adf8c60b4c80f84ff834872a48ca961252c135)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:41 +00:00
Scott Rifenbark
2fb93ec40c ref-manual: Added the lib_package class.
(From yocto-docs rev: 93aaf3705c28d97041368b2a4ca00f964fdf5837)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:41 +00:00
Scott Rifenbark
e3e87d6f11 ref-manual: Added the kernel-yocto class.
(From yocto-docs rev: f6434320b8fdd67c0b4833d474ea920ba60aa1c9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:40 +00:00
Scott Rifenbark
0466785eb9 ref-manual: Added kernel module split class.
(From yocto-docs rev: d320d2df41ac4082b1773f1480ad01c62df47999)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:40 +00:00
Scott Rifenbark
39e9690f87 ref-manual: Added kernel-arch class.
(From yocto-docs rev: 0f07277b4ab7850ca5aa39ef6e8e926351069771)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:17:40 +00:00
Richard Purdie
64fc47ba12 bitbake: bitbake: runqueue: Fix hole in setsceneverify skipped task logic
We have do_bundle_initramfs which is a task inserted after compile and
before build. It is not covered by sstate.

If we run a build with a valid sstate cache present, the setsceneverify
function realises it will rerun the do_compile step (due to the
bundle_initramfs task) and hence marks do_populate_sysroot to rerun.
do_install, a dependency of do_populate_sysroot, is left marked as
covered by sstate.

What we need to do is traverse the dependency tree for any task that
setsceneverify invalidates and ensure its dependencies are also
invalidated. We can stop as soon as we reach another setscene task, though.

This means the do_populate_sysroot task has the data from do_install
available and doesn't crash.

(Bitbake master rev: f21910157d873c030b149c4cdc5b57c5062ab5a6)

(Bitbake rev: 1484905373ad717cedcaef37a0addde034ebdc60)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:16:04 +00:00
Peter Seebach
43c1b32b2c bitbake: bitbake: build.py: add single-quotes around already-expanded directory name
If the computed name of a directory contains an undefined variable
reference, bitbake dutifully creates a directory with a name that has
${...} in it. However, the actual task script created then tries to cd
to that directory, and the cd command fails, because no such directory
exists -- because the shell has helpfully removed the ${...} which did
not match any actual variables.

Since we want the name to be used exactly-as-is, add single quotes around
the name so this doesn't cause strange failures running tasks, which
allows us to progress past such failures and get to a point where they
can be diagnosed.

(Bitbake master rev: 2809c2e6f2f35f9b08058950be896947ab5a0284)

(Bitbake rev: 3059ee335b7ae1bf77d6fd02e66ea5ba37d96c7b)

Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:16:04 +00:00
Richard Purdie
53ce38bd0f bitbake: bitbake: fetch2: Fix handling of SCM mirrors in MIRRORS
If an SCM mirror is in PREMIRRORS, the tarball is downloaded and then found
by the "upstream" check and handled correctly.

If an SCM mirror is in MIRRORS, the tarball is downloaded but not used
since there is no "upstream" run after MIRRORS completes. It therefore
sits there useless and unused. This code change forces the upstream to
run after a mirror tarball is found and fixes the usage of SCM mirrors
in MIRRORS.

(Bitbake master rev: a66ee0994645aa5658b2f5ea134ed17d89f8751a)

(Bitbake rev: 98d2cd8576a8d035e2b073cd54bb737a3c22bc4d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:16:03 +00:00
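
To make the PREMIRRORS/MIRRORS distinction above concrete, a hedged sketch of
an SCM mirror entry as it might appear in a configuration file; the URL and
patterns are assumptions for illustration:

# Sketch: fall back to tarballs of git sources hosted on a mirror server.
MIRRORS =+ "\
git://.*/.*   http://downloads.example.com/mirror/sources/ \n \
"
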
Olof Johansson
c53d0d4786 bitbake: bitbake: monitordisk: lower inode check warning to note
Filesystems like btrfs and reiserfs set the inode count to 0, since
they don't have an inode concept. This is expected, and having a warning
show up every time you run bitbake can cause undue concern.

(Bitbake master rev: f3ac2d3678f48c68a250a0a20c08cf8687322d38)

(Bitbake rev: 04e2a1e4e3b3580660cdd3926caadeb0a9fbd4d3)

Signed-off-by: Olof Johansson <olof.johansson@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:16:03 +00:00
Richard Purdie
9d5052d3ec bitbake: bitbake: cooker/command: Add error state for the server and use for pre_serve errors
Currently if errors occur when starting the PR service, there is a race that
occurs since the UI runs various commands including starting builds before
processing the CookerExit(). By adding the error state and refusing to run
async commands in this mode, builds are prevented from starting and the
UI reaches the exit code with the system shutting down cleanly.

(Bitbake master rev: 42fa34142ea685f91115a551e74416ca28ef1c91)

(Bitbake rev: bc2e0796c1846d1567db6343b24b85fd7dba9163)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-11 14:16:03 +00:00
Richard Purdie
53d2563ff1 bitbake: perforce: Fix path subdirectory issues
With a SRC_URI = " \
p4://depot/folder/...;module=localfolder/localsubfolder;changeslist=${P4CHANGELIST} \
"

the subfolders of //depot/folder/... get renamed when mapped to the
local folder structure. They lose the first 3 letters. This
patch fixes that.

Issue reported and patch sent by katutxakurra@gmail.com

[YOCTO #5380]

(Bitbake master rev: 40e06dc459d9c0b5d42d65b2d2c846196fd36b1f)

(Bitbake rev: df0f92cdc925fe7f3bb2e6afe76cf10b0656ead6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 17:51:31 +00:00
Volker Vogelhuber
c5989a9e51 bitbake: fetch/hg: Improve user/password handling
Trying to use a server with username and password authentication
within the URL of the SRC_URI variable doesn't appear to work.

This patch adds the missing parts to the hg fetcher to make this
work properly.

(Bitbake master rev: dc3d6d73e44802c203b3f7247f6f212acc2f69bf)

(Bitbake rev: 76b50d0d72c4e2b03fc53fade255e87c1922e88d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 17:51:31 +00:00
Nicolas Dechesne
1c8c9f1e53 bitbake: fetch2/svn.py: use log instead of info to retrieve revision
We have faced a corner case where the 'last changed
revision' returned by svn info is wrong. It happens when the last
revision is a directory move. e.g. if we assume that the svn
repository at revA has root/x/y/z/foo/bar and it is moved to
root/a/b/c/foo/bar in revB, then svn info 'last change revision' will
return revA. As such when using AUTOREV, we are going to attempt to
retrieve root/a/b/c/foo/bar (as per SRC_URI) but at revA when it did
not exist.

So this patch changes how we retrieve the latest revision and uses
'svn log --limit 1' which gives correct result in all tested cases.

(Bitbake master rev: 17d8ef0b813a05c231e3dbe6e8bc82a4a9b1d2f8)

(Bitbake rev: d14e532f07f31b99c55bec9d87470eb54251c8db)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 17:51:31 +00:00
Richard Purdie
2238d4a63f Revert "utils.bbclass: Fix override ordering for FILESPATH"
This reverts commit 0bd63125c3.

As discussed on the mailing list, this change alters the layer layout
in a stable branch, which is unacceptable. It was accidentally
backported and should not have been; this reverts it.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 17:51:31 +00:00
Scott Rifenbark
7008903065 ref-manual: Added the insserve class.
(From yocto-docs rev: 39e76367c5f5489209af7bb7cb040a621076fb06)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:26 +00:00
Scott Rifenbark
e16bbd631d ref-manual: Added the image_types_uboot class.
(From yocto-docs rev: ebaacf429cec81b17440255e67e00711e6e65258)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:26 +00:00
Scott Rifenbark
de42e04de7 ref-manual: Edits to image_types class.
Noticed that this class file name is "image_types.bbclass" and not
"image-types.bbclass".  Fixed it.

(From yocto-docs rev: 354aa820a13f2dcff32e8a24cbce477c3d6b54f6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:25 +00:00
Scott Rifenbark
5f24d24595 ref-manual: Added the image-types class.
(From yocto-docs rev: 135ebbb8644525f3d85e128f510a650331623058)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:25 +00:00
Scott Rifenbark
6ba27fa504 ref-manual: Added the image-swab class.
(From yocto-docs rev: 8dfc65cf8911ac91fa0bc0a9aa8298f9e1328ec8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:25 +00:00
Scott Rifenbark
5599505af7 ref-manual: Added image-mklibs and image-prelink classes.
(From yocto-docs rev: ec9783ab26661167149be02b3e24c86154309b97)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:25 +00:00
Scott Rifenbark
fa5efc2dda ref-manual: Added image-vmdk class.
(From yocto-docs rev: 7f3212cc983d25db94db57c214dbe8d49ef4b912)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:24 +00:00
Scott Rifenbark
cc8f61cdf1 ref-manual: Added image-live class and updated IMAGE_FSTYPES variable.
Added a note to the existing IMAGE_FSTYPES variable description about
using "live" as an image type.  The setting needs to appear
before the inherit line in the recipe.
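
A minimal sketch of the ordering the note describes, assuming an
image recipe based on core-image (the recipe itself is hypothetical):

  IMAGE_FSTYPES += "live"
  inherit core-image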

(From yocto-docs rev: 248ad730ad78c74c242d212c5a61c0cf83057f14)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:24 +00:00
Scott Rifenbark
ee6ca90f1b ref-manual: Added image-empty.bbclass to undocumented class list.
I accidentally removed this, so I had to add it back in.

(From yocto-docs rev: c763a70118c20581176981f6380a427adb6b8a45)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:24 +00:00
Scott Rifenbark
e492cb70f3 ref-manual: Removed image-empty.bbclass from undocumented list.
(From yocto-docs rev: 55aeadaa89524dcb1ad0926703abc43758ca69b7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:24 +00:00
Scott Rifenbark
d2907bf4f8 ref-manual: Placed the ICECC_CC variable as the entry point for "I" variables.
(From yocto-docs rev: e977266065c8645a4bfa73a72f047a9e40d4bbd2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:23 +00:00
Scott Rifenbark
2ee060691e ref-manual: Added icecc class and several ICECC_* variables.
New variables added for:

  ICECC_CC
  ICECC_CXX
  ICECC_ENV_EXEC
  ICECC_PATH
  ICECC_USER_CLASS_BL
  ICECC_USER_PACKAGE_BL
  ICECC_USER_PACKAGE_WL
  ICECC_VERSION
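
A hypothetical local.conf fragment showing how a few of these might be
used (the values are illustrative only):

  INHERIT += "icecc"
  ICECC_PATH = "/usr/bin/icecc"
  ICECC_USER_PACKAGE_BL = "glibc"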

(From yocto-docs rev: 89ae30f5351cf26926f2a53c42163dd3418e05c3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:23 +00:00
Scott Rifenbark
c294f67611 ref-manual: Edits to the GRUB_GFXSERIAL variable.
Added information about where to set this variable.

(From yocto-docs rev: 6984f3ea58479e855762d0ab2e1d68f3e0759655)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:23 +00:00
Scott Rifenbark
edfdc96aea ref-manual: Added gzipnative class.
(From yocto-docs rev: 431572a20e8175dc513daedb5f28efe8291a6606)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:22 +00:00
Scott Rifenbark
312682c59f ref-manual: Added the gtk-immodules-cache class and GTKIMMODULES_PACKAGES variable.
(From yocto-docs rev: 2c5476591e932951ed77c0b09265610cd102e2c7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:22 +00:00
Scott Rifenbark
5a5833a437 ref-manual: Added gtk-icon-cache class.
(From yocto-docs rev: c19238e50847518695ae6e46d63e353757059495)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:22 +00:00
Scott Rifenbark
881b9ab252 ref-manual: Added gtk-doc class.
(From yocto-docs rev: 075a9afac196d129eaec8bed4e6bb3ebfb5fe9f7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:22 +00:00
Scott Rifenbark
e8b0fd068e ref-manual: Added gsettings class.
(From yocto-docs rev: 7322722d67ea3c29f9ea62ee062344fd6d930e68)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:21 +00:00
Scott Rifenbark
927db7c6a1 ref-manual: Added grub-efi class and supporting variables.
Created glossary entries for the GRUB_GFXSERIAL, LABELS,
APPEND, GRUB_OPTS, and GRUB_TIMEOUT variables.
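
For illustration, a hypothetical configuration fragment touching some
of these variables (the values are made up):

  GRUB_GFXSERIAL = "1"
  GRUB_TIMEOUT = "10"
  APPEND = "console=ttyS0,115200"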

(From yocto-docs rev: a9a1dc6775d8c479b06fcadc51eb01ac27bef62d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:21 +00:00
Scott Rifenbark
70351d9bae ref-manual: Fixed class name gtext to gettext.
I entered this name incorrectly in the original commit.

(From yocto-docs rev: c68ab8404a693de9ba6b229317c6fb5d78060b10)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:21 +00:00
Scott Rifenbark
7d3e2a8c49 ref-manual: Added gnome, gnomebase, gtk-icon-cache, and mime classes
The entries for gtk-icon-cache and mime are placeholders only
with this commit.

(From yocto-docs rev: f6325cef06186cfe50c164bf2b536209e8a97e90)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:21 +00:00
Scott Rifenbark
4120cdb8ee ref-manual: Added gtext class.
(From yocto-docs rev: ca8fd78455c583799509eb2447bc3fbadd91517d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:20 +00:00
Scott Rifenbark
1fcdab9145 ref-manual: Added gconf class.
(From yocto-docs rev: 8f0c43b15f47344a8b42be954af097ab1bdfbabe)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:20 +00:00
Scott Rifenbark
48cef6ad6b ref-manual: Added fontcache class and FONT_PACKAGES variable.
(From yocto-docs rev: 6e091001cabeca1d7427e6c74058b0c5b9204938)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:20 +00:00
Scott Rifenbark
6032dcdb32 ref-manual: Added extrausers class and EXTRA_USERS_PARAMS variable.
(From yocto-docs rev: e339505941f620ff74cd1bdd5f652c341baf2aad)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:20 +00:00
Scott Rifenbark
affd9bf773 ref-manual: Added classes_distro_features class.
(From yocto-docs rev: 94dfec1c0fe0131371ffcb28472efbc5dcc71510)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:19 +00:00
Scott Rifenbark
c9974480a2 ref-manual: Added distrodata class.
(From yocto-docs rev: d7b1a1ec7024f00c6934398025e9fcebd504c4bd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:19 +00:00
Scott Rifenbark
8998295eaf ref-manual: Added deploy class.
(From yocto-docs rev: f7c60be2dad01cdb6d0d5462c40c68217191bcd6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:19 +00:00
Scott Rifenbark
60f4dcb298 ref-manual: Added crosssdk class.
(From yocto-docs rev: 0af692cc483ec22e79c8cbf15407920bd0c8fcd8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:18 +00:00
Scott Rifenbark
40fcffae65 ref-manual: Added cross-canadian class.
(From yocto-docs rev: a6300fde4fc3292caa497684d9f2143436845484)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:18 +00:00
Scott Rifenbark
56cd04c08d ref-manual: Added cross class.
(From yocto-docs rev: 421fbc549e8905a144d152af356f4d7e8c68305a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:18 +00:00
Scott Rifenbark
b4a248f1b4 ref-manual: Added core-image class.
(From yocto-docs rev: a3aa92bb1089962febab9dbb152a4cb71489e7d6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:18 +00:00
Scott Rifenbark
ebd35a9de6 ref-manual: Added the copyleft_compliance class.
(From yocto-docs rev: d546cd482a5d90929d7ed0ed177bf030d26b941a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:17 +00:00
Scott Rifenbark
5def0b4b68 ref-manual: Added the cml1 class.
(From yocto-docs rev: d865a82be42b7c0d4928fbe56b6e05609992c6c2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:17 +00:00
Scott Rifenbark
23dcaa5cc6 ref-manual: Added the cmake class.
(From yocto-docs rev: 599538fe8e25aa4445097a9d1c83fa196d80b433)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:17 +00:00
Scott Rifenbark
9e65ed399a ref-manual: Added the clutter class.
(From yocto-docs rev: 1bf7123de6be760a12e3056a9ff03bb9bac1369e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:17 +00:00
Scott Rifenbark
1a56dd7015 ref-manual: Added the chrpath class.
(From yocto-docs rev: 48f9e29437a6e55fbd88b92e746ca6af02f35605)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:16 +00:00
Scott Rifenbark
72a29a60ab ref-manual: Added the ccache class.
(From yocto-docs rev: 12c98bd349188f0c9555b326792330e70afc4b5d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:16 +00:00
Scott Rifenbark
e0b167bb78 ref-manual: Added BUILDSTATS_BASE variable description.
(From yocto-docs rev: a755fa4283d966e657cee94e2165c87283494caa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:16 +00:00
Scott Rifenbark
ceee77a014 ref-manual: Added the buildstats class.
(From yocto-docs rev: 048b0c2a87bc122efb2c7efffaecac17a46fec27)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:16 +00:00
Scott Rifenbark
752b6dbffd ref-manual: Added the buildhistory class.
(From yocto-docs rev: 8a04660072fdefe556d29ed010476512b899cbc7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:15 +00:00
Scott Rifenbark
db08f1ee82 ref-manual: Added bugzilla class.
(From yocto-docs rev: 3caddb5dae398c498d94d2106f9810b1a2f94f4d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:15 +00:00
Scott Rifenbark
246fafd2e1 ref-manual: Added boot-directdisk class.
(From yocto-docs rev: 6c40ec521aeb15e590efeaa33fa049f3ae644063)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:15 +00:00
Scott Rifenbark
7bfbd54617 ref-manual: Added ROOTFS, NOHDD, and NOISO variable descriptions.
(From yocto-docs rev: 037bfb5e9867a39a8feb0ef4c4f0feb8e450543d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:14 +00:00
Scott Rifenbark
fdd81a0ecd ref-manual: Upper-cased the term "ram".
(From yocto-docs rev: 51b8584fecc168c10bd61a7fcaad1a06ea4ea74b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:14 +00:00
Scott Rifenbark
fcd968e582 ref-manual: Added INITRD variable to the glossary.
(From yocto-docs rev: 372501ebcf2a29603aa183e50109876045b133b7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:13 +00:00
Scott Rifenbark
78961c0190 ref-manual: Added bootimg class description.
(From yocto-docs rev: 01e51a69b3102e2a52826383762e8148d37933bf)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:13 +00:00
Scott Rifenbark
688211b181 ref-manual: Added PNBLACKLIST variable to the glossary.
(From yocto-docs rev: 36dde74fdfe5826b4d2e65d4f8bc98ff1116650e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:13 +00:00
Scott Rifenbark
0aaea60ea9 ref-manual: Added blacklist class description.
(From yocto-docs rev: 65b0b7f0675428566d72601fecaa7ef7c71311c6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:13 +00:00
Scott Rifenbark
ce4dd272c3 ref-manual: Added bin_package.bbclass description.
(From yocto-docs rev: 9c11ae7a589ba1534e830caf1ab387d63a9df923)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:12 +00:00
Scott Rifenbark
6d560da453 ref-manual: Removed binconfig class from undocumented list.
(From yocto-docs rev: acac8e81d4085bd8c8a9ccd790bdffd61481c98f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:12 +00:00
Scott Rifenbark
b357a60eaf ref-manual: Removed archive* type classes from the undocumented list.
(From yocto-docs rev: 3c0c93d0af6af5982f655fb8831c0ea529570c67)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:12 +00:00
Scott Rifenbark
a38ebf38ab ref-manual: Added allarch class description.
(From yocto-docs rev: 956c4343869f632b9383a4082303e5660e59648b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:11 +00:00
Scott Rifenbark
4f7fea8ccb ref-manual: Updated the introduction text for Classes chapter.
(From yocto-docs rev: fbaae0f02856d58592be1b54117463245e527897)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:11 +00:00
Trevor Woerner
71619842de ref-manual: add usage NOTE to IMAGE_FSTYPES
Due to the way in which IMAGE_FSTYPES is processed, it is not possible to
modify it using _append or _prepend. Therefore add a note to the manual to
warn users in case they stumble on this issue.
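
A sketch of the contrast the note warns about, e.g. in local.conf
(the added image type is arbitrary):

  IMAGE_FSTYPES += "tar.bz2"           # takes effect
  IMAGE_FSTYPES_append = " tar.bz2"    # does not take effect as expected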

(From yocto-docs rev: 304e196329842d04356775fb8ad5d73466413e66)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:11 +00:00
Robert P. J. Day
579dce4233 dev-manual: ksize.py and dirsize.py live in scripts/tiny
(From yocto-docs rev: e222eb4b509772c1f5f493a22920e5fe4f5efad6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:11 +00:00
Robert P. J. Day
0ba27b42a8 dev-manual: Add a reference to poky-bleeding distro
As an actual example of using AUTOREV, refer the reader to the
Yocto-supplied poky-bleeding distribution.

Also cleaned up some wording and added a Caution statement
about the distro not being regularly tested - Scott

(From yocto-docs rev: 41e9c7d08ecf688c72e7ecac16a6bf030147061d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:10 +00:00
Robert P. J. Day
c2db7f4cb9 dev-manual: Documentation: More minor tweaks to dev manual for clarity
I'm still looking at the dev manual for more changes, but those would
be more substantive, so I'll pass along just this collection
of minor fixes.

(From yocto-docs rev: 3ea3fd4625c571f8cf20e32e6edc03ba1e517e94)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:10 +00:00
Scott Rifenbark
cd64633f94 dev-manual: Changed wording of BBMASK example
I changed the intro text for this one-line example so that it
would not imply that the example is the only syntax that can
get the job done.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 40cabe53187a94256c8f2c50598610668ea4de77)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:10 +00:00
Scott Rifenbark
8995214e47 dev-manual: Added note about naming your distribution
Added a small note in the section where you create your own
distribution to be clear about where the name is set.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: f08fef284cb57cfc982b1fd3b4ca1b6fe5b883cb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:10 +00:00
Scott Rifenbark
21ea9bced3 dev-manual: Re-worded menuconfig preparation
Rather than saying menuconfig is "built", I changed the text to say
it is "run."

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: f95b945787c84edb532c24886cdd44f1bc8bd98a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:09 +00:00
Scott Rifenbark
18c59735d7 dev-manual: Re-ordered sections for customizing an image.
The four sub-sections describing how to customize an image seemed
to be backwards, as they progressed from most complex to simplest.
I switched the order and provided better transitional
introductory wording for the four sub-sections.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: ebce74fde98fb3d3b74ed476288e482e87c83461)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:09 +00:00
Scott Rifenbark
f977932cca ref-manual: Updates and additions to variable glossary:
I applied some review edits from Paul to the USERADD_PACKAGES and
USERADD_PARAM variables.  Additionally, I added descriptions
for the GROUPADD_PARAM and GROUPMEMS_PARAM variables.

(From yocto-docs rev: eba37e97855e55f547aa6257a050f30ffed7948e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:09 +00:00
Robert P. J. Day
7650be618a dev-manual: Number of minor tweaks to Dev Manual, Chapter 5.
(From yocto-docs rev: 779e33c9f1228c54ed1b4e60c109d0b2ecd4b2f8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:08 +00:00
Scott Rifenbark
551166d616 ref-manual: Added new REQUIRED_DISTRO_FEATURES variable description.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 7dfbba8fd5b1dc8e020a588f210c3d2f339e1f9a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:08 +00:00
Scott Rifenbark
1df11a5af0 ref-manual: Added new CONFLICT_DISTRO_FEATURES variable description.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 490a57b5c8c8230d47be53bf3092060c3fb97f19)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:08 +00:00
Scott Rifenbark
1598a54c69 ref-manual: Slight grammar edits.
(From yocto-docs rev: db18d3e0986a87da7670c1ae424ce76df82e18ca)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:08 +00:00
Scott Rifenbark
d0d86d62a5 ref-manual: Edits to the COMBINED_FEATURES variable.
(From yocto-docs rev: c1e2a7f8985f058d520615ef389007424d7d4b7f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:07 +00:00
Scott Rifenbark
c86bd272ea ref-manual: Updated the intro text to "Distro Features" section.
(From yocto-docs rev: 328f7115e3f557ca979fa330e068739c4b0e2bcd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:07 +00:00
Scott Rifenbark
2f1b6e1126 ref-manual: Some edits to the DISTRO_FEATURES variable.
(From yocto-docs rev: 10c4a13dc28ef1a8ceeee5a31dfbdd64511cd3d0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:07 +00:00
Scott Rifenbark
859980e7db ref-manual: Updated the DISTRO variable description.
This needed some work to be more descriptive.

(From yocto-docs rev: 00acc5f28a87c10572f1df4bf801c16f5b861f8e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:07 +00:00
Scott Rifenbark
2ae9f689a1 ref-manual: Edits to the DISTRO_FEATURES variable.
Took care of some quoting for the "x11" feature and also fixed a
garbled sentence in the second paragraph.

(From yocto-docs rev: 4de95398d44fe0b12c68a6484c9b8d341bd6edb7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:06 +00:00
Scott Rifenbark
7f55d261e1 dev-manual, ref-manual: Updated some section titles.
I noticed that chapter 10 of the ref-manual still had
"Reference: Features" as the title.  This is left over from way
back.  I changed that chapter title to "Features."  Next, I
noticed an inconsistency with some sub-section titles.
I changed the "Images" title into "Image Features."  This affected
several links across the doc set so I had to update those
cross-references as well so they have the latest section title
as part of the reference.

(From yocto-docs rev: 41de29a5042d92bd34bc52315003a86b6d3277db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:06 +00:00
Scott Rifenbark
abb6287a4f ref-manual: Changed depends.dot filename to pn-depends.dot.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: ecc38e4c71fe21223ed5ddfdabc7f6a7c41ec5a3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:06 +00:00
Scott Rifenbark
65b7f134fb ref-manual: Added the pn-buildlist file to list of files
In the "Dependency Graphs" section, the pn-buildlist file was
not mentioned as a file generated by the
'bitbake -g <target>' command.  I added this in and provided a
bit of re-writing.
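
For reference, a command of the kind the section documents (the
target name is just an example):

  bitbake -g core-image-minimal
  # writes pn-buildlist plus .dot dependency graphs such as pn-depends.dot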

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 811a6af8fdb2cfa0b38d260665ed00260a939251)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:05 +00:00
Scott Rifenbark
33e7953a58 ref-manual: Cleaned up some instances of the term "working directory"
This term should always reference a user's current working directory
and not be confused with the OpenEmbedded build system's "work
directory" (WORKDIR).  I found several instances where the term
"working directory" was not used correctly and fixed them.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 80dcbf41fc57d0d527db13dd2f993233dd5c1675)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:05 +00:00
Robert P. J. Day
17d0bc0f4c ref-manual: fixed typo
Removed extra "the"

(From yocto-docs rev: a4a14eccf591bda7ce09c60dedb752da1c1ba26e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:05 +00:00
Scott Rifenbark
0ed81805f4 ref-manual: Updated the *_FEATURES variables.
I updated the MACHINE_FEATURES, DISTRO_FEATURES, and
COMBINED_FEATURES variable descriptions to better reflect what
they actually do.  Also, fixed two occurrences of IrDA in the
features lists section.

(From yocto-docs rev: 89e40a2f309eacec37fc63c2ef0d4cd440722b2f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:05 +00:00
Scott Rifenbark
6fbfd3494f ref-manual: Review edits applied to UBOOT_LOCALVERSION variable.
(From yocto-docs rev: 58384e305a7e35624da44457929d3487fabf5915)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:04 +00:00
Scott Rifenbark
8267b847dd ref-manual: Edits to the UBOOT_LOCALVERSION variable.
(From yocto-docs rev: c47fbc2a0e6b9a7cd21a99dc227ee6b82e85f394)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:04 +00:00
Scott Rifenbark
03fe1aff16 ref-manual: Fixed typo in the UBOOT_CONFIG variable.
(From yocto-docs rev: 13ef9bc3c2e4c1e7045fd0413b28379eb0b92090)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:03 +00:00
Scott Rifenbark
477f49ec0b ref-manual: Added new variable UBOOT_MAKE_TARGET.
(From yocto-docs rev: c9a2cabfadf32d55c7cf022a5ad376c0e8d585a5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:02 +00:00
Scott Rifenbark
cd8c865688 ref-manual: Added new UBOOT_SUFFIX variable description.
(From yocto-docs rev: 87272ea28fca85ff9cf22ce4c81738a73f7f3410)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:02 +00:00
Scott Rifenbark
8507753bb8 ref-manual: Added new variable UBOOT_LOCALDEFINITION
Very rough draft for this new variable.  It will likely
change.

(From yocto-docs rev: 0ac34164b60908455b198d46475b7e88b0e25f6c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:02 +00:00
Scott Rifenbark
e24d021506 ref-manual: Fixed misc. formatting by removing blank lines.
(From yocto-docs rev: 413f49b0d460a3c292bdbd83cae5cee860f71244)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:02 +00:00
Scott Rifenbark
0a2aff1140 ref-manual: Fixed typo.
(From yocto-docs rev: eaca0035b8e7190f075e030d2abd370feaf19d3e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:01 +00:00
Scott Rifenbark
d716445808 ref-manual: Reset the top "U" entry for glossary to UBOOT_CONFIG.
(From yocto-docs rev: 2870f3345fed8e313e3ac4101cc38e89e6d021e0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:01 +00:00
Scott Rifenbark
42b42f105c ref-manual: Added glossary entry for UBOOT_CONFIG variable.
(From yocto-docs rev: a02a918147e903aaf08390ae1c02bad5f8d90c77)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:01 +00:00
Scott Rifenbark
04ee674163 ref-manual: Expanded CLASSOVERRIDE variable example description.
(From yocto-docs rev: f78cfbd4bd06f4dbf35522f2fd4b2012de889323)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:00 +00:00
Scott Rifenbark
3cd74158dd dev-manual: Updated EXTRA_IMAGE_FEATURES operator for read-only-rootfs example
The operator used was "=", which was inconsistent in light of
previous uses of the variable in the ptest section.  I changed the
operator to "+=" to be consistent.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 2cc73ae4e3a543a60bcc29b54d2de97b08f990db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:00 +00:00
Scott Rifenbark
6fc8adf98a dev-manual: Added "Task" term.
After adding "Package Group" definition as the original "Task"
definition, we needed to create a new definition for the term
"Task".

(From yocto-docs rev: bc861fda764a6d5fe0dc3b62b0771e183e7356a4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:00 +00:00
Scott Rifenbark
e9d0f1d211 dev-manual: Changed "Tasks" term into "Package Groups" term.
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 5e9f2a6192db61ffa93e83a2e5e5d7bcd75e5eb4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:54:00 +00:00
Scott Rifenbark
2c45a8c3ff dev-manual: updated the ptest section
Removed the note indicating that three specific recipes were
"ptest-enabled" for the release.  I substituted wording that
tells the user to check whether a recipe inherits ptest as a way
of determining if the recipe is ptest-enabled.

(From yocto-docs rev: f9886957055619e9c5e9eccfe0a606e7aee275cd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:59 +00:00
Scott Rifenbark
2c5640cdb4 ref-manual: Added core-image-weston and core-image-directfb
The image core-image-gtk-directfb really should be
core-image-directfb.  Also, we need to add the core-image-weston
image to the list in the images chapter.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: a1d2e745a4500f5e9d8c34feda8dc474da2cf01f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:59 +00:00
Scott Rifenbark
3bc23c367b dev-manual: Re-worded the "Customizing Images Using Custom .bb Files"
Changed the wording so that it better reflects what is actually
going on when using IMAGE_INSTALL to affect an image.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 36178822a53f9eb7065513c8b2b1b01fc166b771)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:59 +00:00
Scott Rifenbark
4d3730597d ref-manual: New glossary entries
I added glossary definitions for BUGTRACKER and
CLASSOVERRIDE.

(From yocto-docs rev: 11517aa35b0ce694749f3ec55f452976f26f8166)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:59 +00:00
Scott Rifenbark
fcae7ef9c5 ref-manual: Added other argument values to set variable
The BB_DANGLINGAPPENDS_WARNONLY variable can apparently be set
using "1", "yes", or "true", and it is set in these various ways
throughout the poky metadata.  This has caused a bit of
confusion, so I have added the fact to the description.
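
Any of these forms can appear in the metadata, for example:

  BB_DANGLINGAPPENDS_WARNONLY = "1"    # "yes" and "true" are used elsewhere to the same effect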

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 8f96a657079f7dd3e601c4d99de4b8c9c09c26d9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:58 +00:00
Robert P. J. Day
53aca1554e ref-manual: Fixed a typo.
(From yocto-docs rev: 496d4b9bc1e5cf7f59fbd97ed14257a64c0b560e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:58 +00:00
Robert P. J. Day
a70b2f2a9c profile-manual: Edits from Robert P. J. Day
If someone wants to check this over, make sure I didn't make
any silly changes.

(From yocto-docs rev: d1dd154740ffb9c858a66cab80486a4d684131da)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:58 +00:00
Robert P. J. Day
ac3f5a3e78 profile-manual: Review edits from Robert P. J. Day
Given the length of the tools sections in the profiling manual,
I'm doing each tool separately so that patches come in
manageable chunks.

(From yocto-docs rev: 9e06ea7c09ca397f7ade7404f9d3fd2dc17da095)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:58 +00:00
Robert P. J. Day
ea8dcadbe5 profile-manual: Small typo fixed.
(From yocto-docs rev: af33c6c26a56aff1e0e220583c5d66d3f89cefd5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:57 +00:00
Robert P. J. Day
1a49f09afc dev-manual: More minor tweaks to development manual.
(From yocto-docs rev: dc0dd5896e874aa0566bc11c7909c496af130561)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:57 +00:00
Scott Rifenbark
acd3784933 dev-manual: Complete first draft of the new wic section.
(From yocto-docs rev: 4bd0f5db0e0d8a2c3d28f415afaf92fed93102f0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:57 +00:00
Scott Rifenbark
b0c4c855a0 dev-manual: More work on the new wic section.
I am working through the raw text.  I am not all the way through yet, but
I needed to commit this.

(From yocto-docs rev: 4da28c311443ad31a0a36b07b39aa7ce4180b49c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:56 +00:00
Scott Rifenbark
fa191d4882 dev-manual: Rough draft of wic section.
(From yocto-docs rev: a628ab0034c66f0c62ffd7e9b6010c5afbecee82)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:56 +00:00
Scott Rifenbark
184166fa7a dev-manual: Fixed broken structure.
I discovered that the dev-manual was not building correctly.
This probably resulted from trying to merge in some of Robert P. J. Day's
comments.  I have made the tag changes to fix this.  Also,
I added back in the two methods for setting up meta-intel,
as I am not making the tarball-exclusion changes to the dora
branch.

(From yocto-docs rev: a10a9c3960045a777da6245a2502504f15fad579)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:56 +00:00
Scott Rifenbark
5de0010aff dev-manual: Reworded sentence for meta-intel setup
Changed the wording to remove the implication that cloning
meta-intel is central to working with BSPs.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: c5f282e2b5f9931103fc4a1adafbf9c5becb0084)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>

Conflicts:

	documentation/dev-manual/dev-manual-start.xml

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:56 +00:00
Robert P. J. Day
30bebff0dc dev manual: Minor tweaks to first part of ch 5, dev-manual.
Given the length of chapter 5 in the dev manual, I'm going to do
this in bite-size pieces.

(From yocto-docs rev: 3db48a0be170a02e5042fe65253c65b5245c6b89)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:55 +00:00
Robert P. J. Day
3736d5c8e4 dev-manual: Some minor tweaks to ch 4, development manual:
* Is it technically correct to say there are now 5 BSPs? That is, does
  genericx86-64 count as a new BSP distinct from genericx86? [Aside: are
  there any plans for a MIPS64 BSP?] - rpjday

  MIPS64 is under development. - scottrif

* If Scott is up for it, a couple more variables for the variable
  glossary might be BASE_WORKDIR and TARGET_VENDOR, which I would have
  added to that variable list, but they don't appear in the glossary
  - rpjday

  Noted. - scottrif

(From yocto-docs rev: 45876209fd94a6800176381518f008890da646ee)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:55 +00:00
Robert P. J. Day
e3e0d40704 dev-manual: Small number of tweaks to ch 3, development manual.
(From yocto-docs rev: f496e2fb8050830a2daf9f712a9b9b40b4025f1f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>

Conflicts:

	documentation/dev-manual/dev-manual-newbie.xml

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:55 +00:00
Robert P. J. Day
803300cc12 dev-manual: A few tweaks to Ch2 of dev-manual
Scott (or anyone else) is welcome to use any or all of this, or
tweak to taste. I have a few other concerns with ch 2, but I'll read it
more carefully to make sure I'm reading it correctly. - rpjday

I implemented all but the addition of MIPS64 as it is not tested
using the autobuilder yet.  - scottrif

(From yocto-docs rev: 9ae733456e1d39de66ad6235172f26cab4a33269)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>

Conflicts:

	documentation/dev-manual/dev-manual-start.xml

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:54 +00:00
Scott Rifenbark
4cab968da8 dev-manual: Updated Build Directory term
The examples went stale.  Two out of three did not work.  I have
provided new examples that work.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: ae50d3ee3c244b2c864d80adf69a7a69fb6e3985)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:54 +00:00
Scott Rifenbark
8b35d3fdfb ref-manual: Removed FAQ entry about running on RHEL/CentOS 5.1.
We don't even support these Linux distributions.  We should not
have a FAQ entry telling people how to deal with them. If the
distro is that "hot", we should take steps to support it.

(From yocto-docs rev: dd4a8f81a1a0517a9859ab775f22a4cf702ed341)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:54 +00:00
Scott Rifenbark
f63ae44a8e ref-manual: Updated the FAQ entry about old Python version
I re-wrote this FAQ entry to indicate more recent versions of
Python and to take advantage of the fact that we can now download
or build the buildtools.

Reported-by: Paul Eggleton <paul.eggleton@linux.intel.com>
(From yocto-docs rev: 87bcd154526feac7218a27b62bffd3a017885435)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:54 +00:00
Scott Rifenbark
9addcf5ccb documentation: Scrubbed use of directory names
Directory names were handled inconsistently
throughout the YP documentation.  I have scrubbed the
doc set and replaced many instances such as the following:

meta/<something> replaces /meta/<something>
poky replaces ~/poky (except in some very specific examples)

I basically got rid of leading slash characters.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 12a96db6dffe09fca7ce848e006c591a637be5a4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:53 +00:00
Robert P. J. Day
a665b9b68e ref-manual: Tweaks to the structure chapter.
Significant tweaks:

* removal of (confusing) leading slashes from YP filenames
* deletion of reference to non-existent "build/tmp/pkgdata/"

(From yocto-docs rev: e00776f75c350a51314dcbcd629e307e6026188a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:53 +00:00
Scott Rifenbark
fa0e05d28a ref-manual: Removed the pseudodone section in the build structure
This folder disappeared and I was not told about it.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 50a9b599455da329ee09790f5b7c69333fa30ee9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:53 +00:00
Scott Rifenbark
3c68b660b4 ref-manual: Updated the description for oe-init-build-env-memres
Added more wording to clearly state what is going on when you
run this script.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 7aa93cadb4758aba239ffd472ea5e1026125d371)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:53 +00:00
Scott Rifenbark
802e09b0ab ref-manual: Tweaks to a patch from Robert P. J. Day
I altered three areas from the previous patch submitted by and
applied from Robert P. J. Day: two minor wording changes and
the removal of some negative language.

(From yocto-docs rev: e4370fb28e6278292224b3a3efbf41943c5c0829)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:52 +00:00
Robert P. J. Day
e9327ca078 ref-manual: Various tweaks/fixes for ch4. ref manual
(From yocto-docs rev: dab494935de28e327941071193a0f251d30a5005)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:52 +00:00
Scott Rifenbark
5eeb8e08cf ref-manual: Updated bitbake/ section to remove wrapper script
Robert P. J. Day noted that the bitbake command no longer uses
a wrapper as the section indicated.  I have removed this reference.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: ccdcf3d80f2e684877265d2dde8606ddeed4dfd2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:52 +00:00
Robert P. J. Day
abf9789893 ref-manual: Couple of innocuous tweaks to migration chapter.
(From yocto-docs rev: 1a8a2c22549a3ed4bb750cb902255770b980bd48)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:52 +00:00
Scott Rifenbark
bf77e55a5e ref-manual: Updates to "Variables" and "Building with No Dependencies"
There was some confusion over a few things in these two sections.
I re-wrote them with the help of Paul Eggleton to make
them clearer.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 076aa5d8244ed3fcf321ef61da5d2270b40a7791)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:51 +00:00
Scott Rifenbark
0d9874d9a3 ref-manual: re-wrote the "Invalidating Shared State" section
While investigating this section for an apparent typo, it was decided
that the term needed to be removed.  During the process I re-wrote
the section for clarity.

(From yocto-docs rev: 8ba011f9f3066bb821b8b371f20f1f9522960a2e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:51 +00:00
Scott Rifenbark
b271f1196d ref-manual: Updates to the FEED_DEPLOYDIR_BASE_URI variable.
Fixes [YOCTO #5408]

Some edits to remove a link to the YP Metadata definition.
The metadata referred to here is for opkg only.

(From yocto-docs rev: 9a1d6e1929ef6cb3f7007ae9b7377e52da68d0f4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:51 +00:00
Scott Rifenbark
e1e98886d7 dev-manual: Added a link for the FEED_DEPLOYDIR_BASE_URI variable
Fixes [YOCTO #5408]

Added a cross-reference link for this variable into the glossary.

(From yocto-docs rev: 9e978095d8ef5de514698a7b1959c0612496b56d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:50 +00:00
Scott Rifenbark
f86f539f10 ref-manual: Added FEED_DEPLOYDIR_BASE_URI variable description
Fixes [YOCTO #5408]

As part of the fix for this bug, I have added a description of this
variable to the glossary.

(From yocto-docs rev: 71c75163c24b249769f7acc96285a512d3ba9f86)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:50 +00:00
Robert P. J. Day
ae168b4217 ref-manual: Small number of fixes to Ch1 of ref-manual
Three chunks were attempted in a patch from Robert.  Two out of
three worked.  One did not because the text had changed due
to re-writing a note that had some links to out-of-date
wiki pages.

(From yocto-docs rev: 68617e9b131acc28a7f96e1965b78036f277dc77)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:50 +00:00
Robert P. J. Day
b107150ed8 ref-manual: Pedantic tweaks to ref manual usingpoky.xml
Patch from Robert P. J. Day.  Good catches for some minor
wordings and such.

(From yocto-docs rev: bb5befebdfc642958ec3544cba752aab0d933936)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:50 +00:00
Robert P. J. Day
d47c4ea18c ref-manual: Grammar and typo edits to "Closer Look" section.
Patch applied from Robert P. J. Day that basically amounted to
a good review of this section.  Robert caught several typos and
small writing issues throughout the section.

(From yocto-docs rev: 559c700187f04c54352b3202fba6022eb74ac610)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:49 +00:00
Trevor Woerner
0c4b95531c dev-manual: package add using facilities in oe-core only.
Fixes [YOCTO #5408]

The developer's manual should only refer to functionality which
is available in oe-core.  Currently, this passage requires the
user to add a package from, and use facilities of, meta-oe.  This
fix describes how to use facilities in oe-core to the same end.

(From yocto-docs rev: 0f6d955c5d7fba7258441ce6dbdecfac67d722f7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:49 +00:00
Scott Rifenbark
c04bbe3dff yocto-project-qs: Removed out-of-date links to distro requirement info
Two links to wiki pages in the section discussing Linux distro
requirements were terribly out of date.  I have rewritten the note
to remove them.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: bdc0623673585a45ac8c090e847c9a764d7d7ee1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:49 +00:00
Scott Rifenbark
45d50e6c94 ref-manual: Removed out-of-date links in supported distros section.
The links to the supplemental information wiki pages on setting up
your system to run YP were terribly out of date.  I re-wrote the
section to remove them.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 04423d3b06845eae00685f594d72b9ee06e99234)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:49 +00:00
Scott Rifenbark
2e5ce3fac9 ref-manual: Formatted "Caution" boxes
The formatting for the "Caution" boxes was poor.  There was no
apparent reason in the style guide why these types of admonitions
should appear any different from "Notes" or "Tips", which look
fine.  I could not devise a .css solution, so I tricked the
formatting by using the <title></title> tags in combination with
a <note></note> pair, basically dumping the <caution></caution>
tag pair. It looks okay now.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: cda5685bc1cd87128007f68eea8e727ed5405115)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:48 +00:00
Scott Rifenbark
7b7302d2f4 poky.ent: Added gcc-multilib as an essential package
Fixes [YOCTO #5440]

Needed to add this as an essential package for Ubuntu and Debian.
Updated the variable so that both the QS and ref-manual will
have the package listed.
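
On Ubuntu or Debian that amounts to something like the usual package
install command (not quoted from the manual):

  sudo apt-get install gcc-multilib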

(From yocto-docs rev: d6e7ad50ea7e1b68f91d83a06908400e0ebf2f47)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:48 +00:00
Robert P. J. Day
da3d082c5c ref-manual: Patch for various fixes to glossary
Fixes include:

 * typos
 * grammar fixes
 * updated package and version references
 * clarifications

(From yocto-docs rev: 18e4c0396b49882a533fe5de8f93c4bd6c888378)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:48 +00:00
Scott Rifenbark
bf6209c622 ref-manual: Edits to INITSCRIPT_NAME variable.
The etcdir string was replaced by sysconfdir.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 23b5db0135d3df31dc895e8c0c756aec05fcb8db)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:47 +00:00
Scott Rifenbark
fcd56baec1 ref-manual: Edits to the BBLAYERS_NON_REMOVABLE variable.
This variable is only used when building an image using Hob.
The description implied otherwise.  It now states this clearly.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: ee4616ba45f43e631e8040ed4662e403e610fa0b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:47 +00:00
Scott Rifenbark
7f55ac338a ref-manual: Fixed URL to buildtools to use the &DISTRO; variable.
This was hard-coded to "1.5".  It now tracks the actual distro
release.  The fix assumes that there will be an actual location
to resolve to.

(From yocto-docs rev: 43ceaeac04bac3b1c46134c032c375ad9d9420b7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-12-03 12:53:47 +00:00
Maxin B. John
75bed4086e adt-manual: Added note for static builds using -c populate_sdk
Documentation fix for [YOCTO #5347]

An SDK created using "-c populate_sdk" will not support building
static binaries without the proper staticdev library packages.
I have added a note to inform the user about this limitation.
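
A sketch of the workflow the note applies to; the TOOLCHAIN_TARGET_TASK
line is one possible way to pull staticdev packages into the SDK and is
an assumption, not quoted from the manual:

  bitbake core-image-minimal -c populate_sdk
  # one possible local.conf addition for static libraries (assumption):
  # TOOLCHAIN_TARGET_TASK_append = " glibc-staticdev"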

(From yocto-docs rev: ce9e4815e700cc22d3f54ab0621c1a22091c2b54)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:48 +00:00
Scott Rifenbark
2093e92ef4 dev-manual: Updated the note about BB Commander/Eclipse WS locations
Fixes [YOCTO #5203]

This was reviewed by Alex, and an ordering change was needed due
to the order in which things are created during the workflow.

(From yocto-docs rev: cd55870ac91f5a3e9329dd89bcb175b67bb4aca3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:48 +00:00
Scott Rifenbark
55f35e457d dev-manual: Updates to Toaster section
Partial fix for [YOCTO #4414]

I put in the API material and made sure the other comments
about the dora-toaster branch and the temporary nature of the GUI were
addressed.  There will probably be more tweaks before this section
settles out.

(From yocto-docs rev: 54dd0edff55c26fb52941e69b1919e30d7a7ec55)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:48 +00:00
Scott Rifenbark
26c099f15e dev-manual: Edits to the toaster section
Partial fix for [YOCTO #4414]

Got some feedback on the section and added a step saying the user
needs to check out the dora-toaster branch after cloning poky.
Also noted that the Django version requirement is exact, not the listed
version or later.  Finally, added a bit of wording to note that the GUI
is temporary for this release.

(From yocto-docs rev: 7d7c42894b05c86366e8b35ec2f8f9e0e7ee5baa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:47 +00:00
Scott Rifenbark
2ad3e6e0e4 dev-manual: Fixed grammar error.
(From yocto-docs rev: b3efa5dd7c647025da4066c8d44f55a7861ab675)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:47 +00:00
Scott Rifenbark
bc12d35a7f dev-manual: Updated the layer.conf example
The meta-yocto-bsp/conf/layer.conf example had gone stale.
I updated it.

(From yocto-docs rev: d77ea0f078675b227a018a57574f5512629c8afb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:47 +00:00
Scott Rifenbark
2a66460cae adt-manual, dev-manual: Removed the LatencyTop website link.
This site has disappeared and I don't think there is a
replacement.

(From yocto-docs rev: c568a4f3a66a3cde7f71f9c31ce39814f05b8f38)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:47 +00:00
Scott Rifenbark
1ba659469c dev-manual: Updates to Toaster section about examining data
Fixes [YOCTO #4414]

Removed the details from the examining data section as directed
by Belen.  This detail remains in the 1.6 leg but not in the
dora branch.

(From yocto-docs rev: 2403f20edcfe258c10fd36de891c2f479e0a2d17)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:46 +00:00
Scott Rifenbark
d1952de21d dev-manual: Updates to toaster section
Fixes [YOCTO #4414]

Some changes to the toaster section according to the updated
toaster wiki site and Belen.

(From yocto-docs rev: e8ca4c6f14c07e486a0a6a79ec4f98244390e621)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:46 +00:00
Scott Rifenbark
39f0315a32 ref-manual: Edits to the IMAGE_FSTYPES variable
Fixes [YOCTO #5368]

Applied some review comments from Laszlo to the new description.
I added an example.

Reported-by: Laszlo Papp <lpapp@kde.org>
(From yocto-docs rev: 6be18fbf59c344b96532944abb8c4d8c8921f64b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:46 +00:00
Scott Rifenbark
d4fe8f987f ref-manual: Updated IMAGE_FSTYPES variable description.
Fixes [YOCTO #5368]

The current explanation was ambiguous regarding the term
"subset".  I have rewritten it to be clear.

Reported-by: Laszlo Papp <lpapp@kde.org>
(From yocto-docs rev: 56842d98601d4215f7c429ec1cd1d4f7b98d54f3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:45 +00:00
Scott Rifenbark
d52f3770fd adt-manual: Minor edits to Chapter 4
Broke up the introductory paragraph a bit because it was one
large chunk of text.

Added the release variable to specify the directory in which to
find the environment setup script for the toolchain.

(From yocto-docs rev: d8a4b92490d13d9d16fc118168730e4a7767e775)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:45 +00:00
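For context, finding and sourcing the toolchain environment setup
script typically looks like the sketch below; the installation
directory and target triplet are illustrative:

    $ source /opt/poky/1.5/environment-setup-armv7a-vfp-neon-poky-linux-gnueabi
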
Scott Rifenbark
dfd41a6c37 adt-manual: Edits to "Optionally Building a Toolchain Installer"
This section read poorly.  I did some re-writing and
created a list to better present the material and options.

(From yocto-docs rev: 26402913d60043c643462aa51a187145c9a1356c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:45 +00:00
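The options presented in that section come down to commands along
these lines; a sketch with an illustrative image name:

    $ bitbake core-image-sato -c populate_sdk    # installer matched to an image
    $ bitbake meta-toolchain                     # standalone toolchain installer
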
Scott Rifenbark
baa03f300e adt-manual: Edits to "Extracting the Root Filesystem" section
This section was a bit confusing.  I added some lists to make
it clearer when this step is necessary.  I also added some more
detail on where to find the setup script.

(From yocto-docs rev: 231e1f44da61d4a2fdb5ddb20bce89dfddccf092)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:45 +00:00
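Extracting a root filesystem into a usable sysroot is normally done
with the runqemu-extract-sdk script; a sketch with illustrative
tarball and directory names:

    $ runqemu-extract-sdk core-image-sato-sdk-qemuarm.tar.bz2 $HOME/qemuarm-sysroot
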
Scott Rifenbark
dc3e39efbd adt-manual: Edits to "Getting the Images" section
Fixed a grammar issue and provided some wording to make things
clearer for locating the Makefile.inc file.

(From yocto-docs rev: 37c71be85da57a353890e6be9e9d4c2414ddabb1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:44 +00:00
Scott Rifenbark
aa8bad8bac adt-manual: Edits to "Using BitBake and the Build Directory" section.
I added in some enhancements here:

1. Worded the local build environment setup material to include
   the possibility of a memory-resident version of BitBake.

2. Made a better-looking list.

3. Dumped the note about changing directories after running
   your setup script.

(From yocto-docs rev: 722836e1af86a46e57dfbeff69a34df5c36fead8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:44 +00:00
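The memory-resident possibility mentioned in item 1 uses a separate
setup script; a sketch, assuming the oe-init-build-env-memres script
and a free local port:

    $ source oe-init-build-env-memres 12345    # port number; a Build Directory may also be given
    $ bitbake core-image-minimal
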
Scott Rifenbark
b963e5d723 adt-manual: Edits to the "Using a Cross-Toolchain Tarball" section.
This section was not quite right about the installation method
used after obtaining the tarball.  There was some outdated
material in there, and it did not mention that the script asks
which directory you want to install into.

(From yocto-docs rev: d950b48fede7a139aad0aa4da39a2b34fa565438)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:44 +00:00
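Installation after obtaining the cross-toolchain comes down to
running the self-extracting script, which prompts for the directory
to install into; the filename below is illustrative:

    $ sh poky-eglibc-x86_64-core-image-sato-armv7a-vfp-neon-toolchain-1.5.sh
    (the script asks for the installation directory, typically
     defaulting to a location under /opt/poky)
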
Scott Rifenbark
c7000e6817 adt-manual: Fixed toolchain path in example
The path used for the toolchain was wrong.  The string
"toolchain" appears in the path.  I was not informed of this
before locking down the 1.5 release doc changes.  I have updated
the path in the "Using a Cross-Toolchain Tarball" section.  It
occurred twice.

(From yocto-docs rev: 8a73c27dac7aebfb93afd7614b0da6c7c8752a03)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:44 +00:00
Scott Rifenbark
560606c3ec adt-manual: Removed tarball method from "Getting the ADT Installer Tarball"
Fixes [YOCTO #5368]

Partial fix to this issue.

This section demonstrated how to build the ADT Installer tarball
using BitBake by downloading the poky release tarball.  I updated
the section to use the method where you use Git to clone the
poky repo and then check out the current release as a branch.

I also re-organized the section to read better.

(From yocto-docs rev: 8c1a5e9fd114fd181b15de3b117d56fbb5779e1f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:43 +00:00
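The git-based method described above is roughly the following
sequence; the branch name matches this release and adt-installer is
the recipe that produces the tarball:

    $ git clone git://git.yoctoproject.org/poky
    $ cd poky
    $ git checkout -b dora origin/dora
    $ source oe-init-build-env
    $ bitbake adt-installer
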
Scott Rifenbark
4e6514248b adt-manual: Fixed wording for getting the ADT tarball
Poorly worded opening sentence for the "Getting the ADT Installer
Tarball" section.  I re-wrote this to remove the confusing link
to the Index of Releases.

(From yocto-docs rev: 638504f4a5c2910bb2a114e01e1aa12dd498a861)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:43 +00:00
Scott Rifenbark
874f178544 yocto-project-qs: Fixed toolchain path
I messed up on a previous commit and didn't get the example
path correct.  This fixes it.

(From yocto-docs rev: 8ecc545db862ed2a9b6b795930610a193f5ed9e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:43 +00:00
Scott Rifenbark
a39bff8132 yocto-project-qs: Removed note in "Downloading the Filesystem" section.
This note is obsolete.  It was left over from the YP 1.3
release.  I have removed it.

(From yocto-docs rev: c4c154e7a74d4de8ca0fdeea380e72044f290e08)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:43 +00:00
Scott Rifenbark
1a08cfd4f8 yocto-project-qs: Fixed toolchain installer download path
Discovered that the string "toolchain" is in the pathname of the
toolchain installers from the downloads area.  This invalidated
the explanation of how the installer files are named and the
actual examples themselves.  I fixed it all.

(From yocto-docs rev: fbe67d2fc92cb5a5597d793ea2c3614b8260ae02)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:42 +00:00
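The key point is that the installers sit under a "toolchain"
subdirectory of the release downloads area; a hypothetical example
of such a path (host architecture and file name are illustrative):

    http://downloads.yoctoproject.org/releases/yocto/yocto-1.5/toolchain/x86_64/poky-eglibc-x86_64-core-image-sato-armv7a-vfp-neon-toolchain-1.5.sh
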
Scott Rifenbark
29d095ff71 documentation: Changed Note for most recent manual
Fixes [YOCTO #5368]

Changes partially address this issue, which is the removal of
tarball installations for poky.  The notes at the beginning of
the YP manuals suggested that the tarball version of a manual
might lag the version found on the website.  Because we are
discouraging installing poky/documentation from a tarball now,
I have re-written the note to be generic and suggest simply that
for the most recent version of the manual associated with the
release, see the manual on the website.

(From yocto-docs rev: d47ff8285a6ba01d721509724641a51aab8da73d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:42 +00:00
David Nyström
3c72e93fd3 ref-manual: Added sdk-pms to Distro Feature list.
Added description of DISTRO_FEATURE sdk-pms in the Yocto
Reference Manual.

The changes I made are not exactly identical to the patch
submitted by David.  I did a bit of re-writing of the
text, but the concepts are the same.

(From yocto-docs rev: a00b34badd547d585f0349ccc4112df6fc93691a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:42 +00:00
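Enabling a distro feature such as sdk-pms is done through
DISTRO_FEATURES; a minimal sketch for local.conf or a distro
configuration file:

    DISTRO_FEATURES_append = " sdk-pms"
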
Robert P. J. Day
97d63634b4 kernel-dev: Corrected error in the FILESEXTRAPATHS example
FILESEXTRAPATHS_prepend example is missing essential trailing
colon.

(From yocto-docs rev: 9832a45e2ee6bda096453786d0273acff4033dab)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:42 +00:00
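The corrected form, with the essential trailing colon, follows the
usual pattern; the recipe-specific subdirectory shown is the
conventional ${PN} one:

    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
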
Scott Rifenbark
52a9ca562e dev-manual: Added -ptest to list of complementary packages
Added the '-ptest' complementary package to the list of packages,
which included '-dev' and '-dbg' when using inherit packagegroup.
Robert P. J. Day pointed out the code in the OE packagegroup.bbclass
class that showed these three packages all together.

Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
(From yocto-docs rev: 92bbbbe77b5694eb9e11789c6c425be8c47399c9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:41 +00:00
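A minimal packagegroup recipe sketch to illustrate the point; the
class then also generates the complementary -dev, -dbg and -ptest
groups automatically (recipe and package names are hypothetical):

    # packagegroup-example.bb (hypothetical)
    SUMMARY = "Example package group"
    LICENSE = "MIT"
    inherit packagegroup
    RDEPENDS_${PN} = "busybox dropbear"
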
Scott Rifenbark
c65cb4348b dev-manual: Uncommented the Toaster GUI section.
Removed the comments for the section that describes how the
Toaster works with a GUI.  I also commented out the smaller
section that was used in place of the GUI section before it
was fully functional.

(From yocto-docs rev: 187e5fe9236742167903cbc7ae11f184256e8712)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:41 +00:00
Scott Rifenbark
854c136d48 tools/mega-manual.sed: Changed 1.5 to 1.5.1
Updated the script to support building the 1.5.1 released
manuals.

(From yocto-docs rev: 875d1920e15e6e203f17e2051b7d7ca51a340251)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:41 +00:00
Scott Rifenbark
d72b558cd9 poky.ent, documentation: Updates for building 1.5.1 release.
Changed variables in poky.ent to support building 1.5.1.

Added Manual revision entry to all manuals with that table for
the 1.5.1 release.

(From yocto-docs rev: a1c3196199437c3a7cfdf86e4f78b806c66b6385)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:41 +00:00
Scott Rifenbark
c8b6934009 profile-manual: Added note about SystemTap ssh connectivity
Fixes [YOCTO #4409]

Added a note into the Setup section for SystemTap explaining
that an ssh connection to the target is assumed.  Also provided
a link to the wiki page that basically replicates the same
information that is in the section.

(From yocto-docs rev: b57429bb8ceccb51648cc797467d8bf8778f2da2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:40 +00:00
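The ssh assumption comes from how the scripts are run on the
target; a sketch using the crosstap helper from the poky scripts
directory, with an illustrative target address and script name:

    $ crosstap root@192.168.7.2 trace_open.stp
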
Scott Rifenbark
271b6cb714 dev-manual: Added note for BB Commander project location.
Fixes [YOCTO #5203]

Adding a BB Commander project location that is the same as your
Eclipse workspace causes an error.  I have added a note warning
the user not to do this.

(From yocto-docs rev: eff9eeba71cc7f464302b824a07bdb2591149f84)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:40 +00:00
Scott Rifenbark
b7f25ec9cd dev-manual: Updates to section on using script to create general layer
I made some minor edits to this section for better wording.

(From yocto-docs rev: 080a194f4911b97bedcef259ed8a5ab9dc0f9b0a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:40 +00:00
Scott Rifenbark
3c576c8025 dev-manual: Updated "Using .bbappend Files" section.
The actual file listings for the formfactor_0.0.bb and
formfactor_0.0.bbappend files had changed.  I updated the listings
to match the actual files with the release.

(From yocto-docs rev: 8b2e285d83a9bfd97ed123085d046eaeec2be6f8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:39 +00:00
Scott Rifenbark
d84f259f23 dev-manual: Updates to "Best Practices to Follow When Creating Layers"
I applied some capital letters to a bullet item for consistency.

(From yocto-docs rev: c6bc04dc236ed2afaea888046091015165ab3a85)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:39 +00:00
Robert P. J. Day
891786ee7d bsp-guide: Fixed some grammar and some filenames.
* A couple of grammatical fixes.
* A couple of filename corrections.

(From yocto-docs rev: 1bad0049c57d47aaa785b8825a9a702c16e4cb69)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2013-11-01 17:46:39 +00:00
6892 changed files with 333941 additions and 438741 deletions

.gitignore

@@ -1,27 +1,23 @@
*.pyc
*.pyo
/*.patch
/build*/
build*/
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*/
!meta-skeleton
!meta-selftest
!meta-hob
hob-image-*.bb
*.swp
*.orig
*.rej
*~
!meta-poky
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported
documentation/user-manual/user-manual.html
documentation/user-manual/user-manual.pdf
documentation/user-manual/user-manual.tgz
bitbake/doc/manual/html/
bitbake/doc/manual/pdf/
bitbake/doc/manual/txt/
bitbake/doc/manual/xhtml/
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*


@@ -1,2 +0,0 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}

LICENSE

@@ -1,14 +1,14 @@
Different components of OpenEmbedded are under different licenses (a mix
of MIT and GPLv2). Please see:
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:
meta/COPYING.GPLv2 (GPLv2)
bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-selftest/COPYING.MIT (MIT)
meta-skeleton/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)
All metadata is MIT licensed unless otherwise stated. Source code
included in tree for individual recipes is under the LICENSE stated in
the associated recipe (.bb file) unless otherwise stated.
which cover the components in those subdirectories. This means all
metadata is MIT licensed unless otherwise stated. Source code included
in tree for individual recipes is under the LICENSE stated in the .bb
file for those software projects unless otherwise stated.
License information for any other files is either explicitly stated
or defaults to GPL version 2.

README

@@ -22,7 +22,7 @@ reference manual which can be found at:
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.
DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
@@ -30,29 +30,20 @@ For information about OpenEmbedded, see the OpenEmbedded website:
Where to Send Patches
=====================
As Poky is an integration repository (built using a tool called combo-layer),
patches against the various components should be sent to their respective
upstreams:
As Poky is an integration repository, patches against the various components
should be sent to their respective upstreams.
bitbake:
Git repository: http://git.openembedded.org/bitbake/
Mailing list: bitbake-devel@lists.openembedded.org
bitbake-devel@lists.openembedded.org
documentation:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
Mailing list: yocto@yoctoproject.org
meta-yocto:
poky@yoctoproject.org
meta-poky, meta-yocto-bsp:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
Mailing list: poky@yoctoproject.org
Everything else should be sent to the OpenEmbedded Core mailing list. If in
doubt, check the oe-core git repository for the content you intend to modify.
Most everything else should be sent to the OpenEmbedded Core mailing list. If
in doubt, check the oe-core git repository for the content you intend to modify.
Before sending, be sure the patches apply cleanly to the current oe-core git
repository.
openembedded-core@lists.openembedded.org
Git repository: http://git.openembedded.org/openembedded-core/
Mailing list: openembedded-core@lists.openembedded.org
Note: The scripts directory should be treated with extra care as it is a mix of
oe-core and poky-specific files.
Note: The scripts directory should be treated with extra care as it is a mix
of oe-core and poky-specific files.


@@ -46,19 +46,13 @@ Hardware Reference Boards
The following boards are supported by the meta-yocto-bsp layer:
* Texas Instruments Beaglebone (beaglebone)
* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
For more information see the board's section below. The appropriate MACHINE
variable value corresponding to the board is given in brackets.
Reference Board Maintenance
===========================
Send pull requests, patches, comments or questions about meta-yocto-bsps to poky@yoctoproject.org
Maintainers: Kevin Hao <kexin.hao@windriver.com>
Bruce Ashfield <bruce.ashfield@windriver.com>
Consumer Devices
================
@@ -66,7 +60,6 @@ Consumer Devices
The following consumer devices are supported by the meta-yocto-bsp layer:
* Intel x86 based PCs and devices (genericx86)
* Ubiquiti Networks EdgeRouter Lite (edgerouter)
For more information see the device's section below. The appropriate MACHINE
variable value corresponding to the device is given in brackets.
@@ -77,26 +70,37 @@ variable value corresponding to the device is given in brackets.
===============================
Intel x86 based PCs and devices (genericx86*)
=============================================
Intel x86 based PCs and devices (genericx86)
==========================================
The genericx86 and genericx86-64 MACHINE are tested on the following platforms:
The genericx86 MACHINE is tested on the following platforms:
Intel Xeon/Core i-Series:
+ Intel NUC5 Series - ix-52xx Series SOC (Broadwell)
+ Intel NUC6 Series - ix-62xx Series SOC (Skylake)
+ Intel Shumway Xeon Server
+ Intel Romley Server: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Canoe Pass CRB)
+ Intel Romley Server: Ivy Bridge Xeon processor, C600 PCH (Patsburg), (Intel SDP S2R3)
+ Intel Crystal Forest Server: Sandy Bridge Xeon processor, DH89xx PCH (Cave Creek), (Stargo CRB)
+ Intel Chief River Mobile: Ivy Bridge Mobile processor, QM77 PCH (Panther Point-M), (Emerald Lake II CRB, Sabino Canyon CRB)
+ Intel Huron River Mobile: Sandy Bridge processor, QM67 PCH (Cougar Point), (Emerald Lake CRB, EVOC EC7-1817LNAR board)
+ Intel Calpella Platform: Core i7 processor, QM57 PCH (Ibex Peak-M), (Red Fort CRB, Emerson MATXM CORE-411-B)
+ Intel Nehalem/Westmere-EP Server: Xeon 56xx/55xx processors, 5520 chipset, ICH10R IOH (82801), (Hanlan Creek CRB)
+ Intel Nehalem Workstation: Xeon 56xx/55xx processors, System SC5650SCWS (Greencity CRB)
+ Intel Picket Post Server: Xeon 56xx/55xx processors (Jasper Forest), 3420 chipset (Ibex Peak), (Osage CRB)
+ Intel Storage Platform: Sandy Bridge Xeon processor, C600 PCH (Patsburg), (Oak Creek Canyon CRB)
+ Intel Shark Bay Client Platform: Haswell processor, LynxPoint PCH, (Walnut Canyon CRB, Lava Canyon CRB, Basking Ridge CRB, Flathead Creek CRB)
+ Intel Shark Bay Ultrabook Platform: Haswell ULT processor, Lynx Point-LP PCH, (WhiteTip Mountain 1 CRB)
Intel Atom platforms:
+ MinnowBoard MAX - E3825 SOC (Bay Trail)
+ MinnowBoard MAX - Turbot (ADI Engineering) - E3826 SOC (Bay Trail)
- These boards can be either 32bit or 64bit modes depending on firmware
- See minnowboard.org for details
+ Intel Braswell SOC
+ Intel embedded Menlow: Intel Atom Z510/530 CPU, System Controller Hub US15W (Portwell NANO-8044)
+ Intel Luna Pier: Intel Atom N4xx/D5xx series CPU (aka: Pineview-D & -M), 82801HM I/O Hub (ICH8M), (Advantech AIMB-212, Moon Creek CRB)
+ Intel Queens Bay platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Emerson NITX-315, Crown Bay CRB, Minnow Board)
+ Intel Fish River Island platform: Intel Atom E6xx CPU (aka: Tunnel Creek), Topcliff EG20T I/O Hub (Kontron KM2M806)
+ Intel Cedar Trail platform: Intel Atom N2000 & D2000 series CPU (aka: Cedarview), NM10 Express Chipset (Norco kit BIS-6630, Cedar Rock CRB)
and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
addition to common PC input devices, busses, and so on.
addition to common PC input devices, busses, and so on. Note that it does not
include the binary-only graphics drivers used on some Atom platforms; for
accelerated graphics on these machines, please refer to meta-intel.
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
@@ -127,50 +131,144 @@ USB Device:
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
2. Use a ".wic" image with an EFI partition
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode. To write an image to a USB device that will be bootable in
USB-ZIP mode, carry out the following actions:
a) With a default grub-efi bootloader:
# dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
a. Determine the geometry of your USB device using fdisk:
b) Use systemd-boot instead
- Build an image with EFI_PROVIDER="systemd-boot" then use the above
dd command to write the image to a USB stick.
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
...
Command (m for help): q
b. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 1019 124 62
Where 1019, 124 and 62 are the cylinder, head and sectors/track counts
as reported by fdisk (substitute the values reported for your device).
When the operation has finished and the access LED (if any) on the
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.
c. Copy the contents of the image to the USB-ZIP mode device:
# mkdir /tmp/image
# mkdir /tmp/usbkey
# mount -o loop core-image-minimal-genericx86.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
d. Install the syslinux boot loader:
# syslinux /dev/sdb4
e. Unmount everything:
# umount /tmp/image
# umount /tmp/usbkey
Install the boot device in the target board and configure the BIOS to boot
from it.
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Texas Instruments Beaglebone (beaglebone)
=========================================
Texas Instruments Beagleboard (beagleboard)
===========================================
The Beaglebone is an ARM Cortex-A8 development board with USB, Ethernet, 2D/3D
accelerated graphics, audio, serial, JTAG, and SD/MMC. The Black adds a faster
CPU, more RAM, eMMC flash and a micro HDMI port. The beaglebone MACHINE is
tested on the following platforms:
The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
2D/3D accelerated graphics, audio, serial, JTAG, and SD/MMC. The xM adds a
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:
o Beaglebone Black A6
o Beaglebone A6 (the original "White" model)
o Beagleboard C4
o Beagleboard xM rev A & B
The Beaglebone Black has eMMC, while the White does not. Pressing the USER/BOOT
button when powering on will temporarily change the boot order. But for the sake
of simplicity, these instructions assume you have erased the eMMC on the Black,
so its boot behavior matches that of the White and boots off of SD card. To do
this, issue the following commands from the u-boot prompt:
The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
# mmc dev 1
# mmc erase 0 512
# nand unlock
# nand erase
# nand erase.chip
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
documentation at http://www.beagleboard.org.
From a Linux system with access to the image files perform the following steps:
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
1. Build an image. For example:
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
$ bitbake core-image-minimal
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
2. Use the "dd" utility to write the image to the SD card. For example:
The following assumes the SD card partition 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
# dd if=core-image-minimal-beaglebone.wic of=/dev/sdb
2. Install the boot loaders
# cp MLO-beagleboard /media/boot/MLO
# cp u-boot-beagleboard.bin /media/boot/u-boot.bin
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
4. Install the kernel uImage
# cp uImage-beagleboard.bin /media/boot/uImage
5. Prepare a u-boot script to simplify the boot process
The Beagleboard can be made to boot at this point from the u-boot command
shell. To automate this process, generate a user.scr script as follows.
Install uboot-mkimage (from uboot-mkimage on Ubuntu or uboot-tools on Fedora).
Prepare a script config:
# (cat << EOF
setenv bootcmd 'mmc init; fatload mmc 0:1 0x80300000 uImage; bootm 0x80300000'
setenv bootargs 'console=tty0 console=ttyO2,115200n8 root=/dev/mmcblk0p2 rootwait rootfstype=ext3 ro'
boot
EOF
) > serial-boot.cmd
# mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
# cp boot.scr /media/boot
6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
boot the Beagleboard
Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to setup the getty on the serial line:
SERIAL_CONSOLE_beagleboard = "115200 ttyS2"
3. Insert the SD card into the Beaglebone and boot the board.
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
@@ -203,70 +301,6 @@ linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
Note: To boot from USB disk you need u-boot that supports 'ext2load usb'
command. You need to setup TFTP server, load u-boot from there and
flash it to NOR flash.
Beware! Flashing bootloader is potentially dangerous operation that can
brick your device if done incorrectly. Please, make sure you understand
what below commands mean before executing them.
Load the new u-boot.bin from TFTP server to memory address 200000
=> tftp 200000 u-boot.bin
Disable flash protection
=> protect off all
Erase the old u-boot from fe000000 to fe06ffff in NOR flash.
The size is 0x70000 (458752 bytes)
=> erase fe000000 fe06ffff
Copy the new u-boot from address 200000 to fe000000
the size is 0x70000. It has to be greater or equal to u-boot.bin size
=> cp.b 200000 fe000000 70000
Enable flash protection again
=> protect on all
Reset the board
=> reset
--- Booting from USB disk ---
1. Flash partitioned image to the USB disk
# dd if=core-image-minimal-mpc8315e-rdb.wic of=/dev/sdb
2. Plug USB disk into the MPC8315 board
3. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
4. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
5. Optional. Load the u-boot.bin from the USB disk:
=> usb start
=> ext2load usb 0:1 200000 u-boot.bin
and flash it to NOR flash as described above.
6. Load the kernel and dtb from the first partition of the USB disk:
=> usb start
=> ext2load usb 0:1 1000000 uImage
=> ext2load usb 0:1 2000000 dtb
7. Set bootargs and boot up the device
=> setenv bootargs root=/dev/sdb2 rw rootwait console=ttyS0,115200
=> bootm 1000000 - 2000000
--- Booting from NFS root ---
Load the kernel and dtb (device tree blob), and boot the system as follows:
@@ -296,129 +330,165 @@ Load the kernel and dtb (device tree blob), and boot the system as follows:
=> tftp 2000000 uImage-mpc8315e-rdb.dtb
=> bootm 1000000 - 2000000
--- Booting from JFFS2 root ---
1. First boot the board with NFS root.
Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================
2. Erase the MTD partition which will be used as root:
$ flash_eraseall /dev/mtd3
3. Copy the JFFS2 image to the MTD partition:
$ flashcp core-image-minimal-mpc8315e-rdb.jffs2 /dev/mtd3
4. Then reboot the board and set up the environment in U-Boot:
=> setenv bootargs root=/dev/mtdblock3 rootfstype=jffs2 console=ttyS0,115200
Ubiquiti Networks EdgeRouter Lite (edgerouter)
==============================================
The EdgeRouter Lite is part of the EdgeMax series. It is a MIPS64 router
(based on the Cavium Octeon processor) with 512MB of RAM, which uses an
internal USB pendrive for storage.
The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.
Setup instructions
------------------
You will need the following:
* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
port on the device
* Ethernet connected to the first ethernet port on the board
* A serial cable - female to female (or female to male + gender changer)
NOTE: cable must be straight through, *not* a null modem cable.
* USB flash drive or hard disk that is able to be powered from the
board's USB port.
* tftp server installed on your workstation
If using NFS as part of the setup process, you will also need:
* NFS root setup on your workstation
* TFTP server installed on your workstation (if fetching the kernel from
TFTP, see below).
NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.
--- Preparation ---
Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
The following instructions are based on core-image-minimal; other
targets should be similar.
1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
MACHINE
--- Booting from NFS root / kernel via TFTP ---
2) Partition the USB drive so that primary partition 1 is type Linux (83).
Minimum size depends on your root image size - core-image-minimal probably
only needs 8-16MB, other images will need more.
Load the kernel, and boot the system as follows:
# fdisk /dev/sdb
Command (m for help): p
1. Get the kernel (vmlinux) file from the tmp/deploy/images/edgerouter
directory, and make them available on your TFTP server.
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009e87d
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
Device Boot Start End Blocks Id System
/dev/sdb1 62 1952751 976345 83 Linux
$ picocom /dev/ttyS0 -b 115200
3) Format partition 1 on the USB as ext3
3. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
# mke2fs -j /dev/sdb1
4. Set up the environment in U-Boot:
4) Mount partition 1 and then extract the contents of
tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).
=> setenv ipaddr <board ip>
=> setenv serverip <tftp server ip>
# mount /dev/sdb1 /media/sdb1
# cd /media/sdb1
# tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2
5. Download the kernel and boot:
5) Unmount the USB drive and then plug it into the board's USB port
=> tftp tftp $loadaddr vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
6) Connect the board's serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
--- Booting from USB disk ---
$ picocom /dev/ttyUSB0 -b 115200
To boot from the USB disk, you either need to remove it from the edgerouter
box and populate it from another computer, or use a previously booted NFS
image and populate from the edgerouter itself.
7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
you are using power-over-ethernet then the board will power up at this point.
Type 1: Use partitioned image
-----------------------------
8) Start up the board, watch the serial console. Hit Ctrl+C to abort the
autostart if the board is configured that way (it is by default). The
bootloader's fconfig command can be used to disable autostart and configure
the IP settings if you need to change them (default IP is 192.168.1.20).
Steps:
9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
on the tftp server.
1. Remove the USB disk from the edgerouter and insert it into a computer
that has access to your build artifacts.
10) If you are going to write the kernel to flash (optional - see "Booting a
kernel directly" below for the alternative), remove the current kernel and
rootfs flash partitions. You can list the partitions using the following
bootloader command:
2. Flash the image.
RedBoot> fis list
# dd if=core-image-minimal-edgerouter.wic of=/dev/sdb
You can delete the existing kernel and rootfs with these commands:
3. Insert USB disk into the edgerouter and boot it.
RedBoot> fis delete kernel
RedBoot> fis delete rootfs
Type 2: NFS
-----------
--- Booting a kernel directly ---
Note: If you place the kernel on the ext3 partition, you must re-create the
ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
cannot read the partition otherwise.
1) Load the kernel using the following bootloader command:
These boot instructions assume that you have recreated the ext3 filesystem with
128 byte inodes, you have an updated u-boot, or you are running an image capable
of making the filesystem on the board itself.
RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin
You should see a message on it being successfully loaded.
1. Boot from NFS root
2) Execute the kernel:
2. Mount the USB disk partition 2 and then extract the contents of
tmp/deploy/core-image-XXXX.tar.bz2 into it.
RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
Before starting, copy core-image-minimal-xxx.tar.bz2 and vmlinux into
rootfs path on your workstation.
Note that specifying the command line with -c is important as linux-yocto does
not provide a default command line.
and then,
# mount /dev/sda2 /media/sda2
# tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /media/sda2
# cp vmlinux /media/sda2/boot/vmlinux
# umount /media/sda2
# reboot
--- Writing a kernel to flash ---
3. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
command line:
1) Go to your tftp server and gzip the kernel you want in flash. It should
halve the size.
# reboot
2) Load the kernel using the following bootloader command:
4. Load the kernel and boot:
RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz
This should output something similar to the following:
Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000
Calculate the length by subtracting the first number from the second number
and then rounding the result up to the nearest 0x1000.
3) Using the length calculated above, create a flash partition for the kernel:
RedBoot> fis create -b 0x80600000 -l 0x240000 kernel
(change 0x240000 to your rounded length -- change "kernel" to whatever
you want to name your kernel)
--- Booting a kernel from flash ---
To boot the flashed kernel perform the following steps.
1) At the bootloader prompt, load the kernel:
RedBoot> fis load -d -e kernel
(Change the name "kernel" above if you chose something different earlier)
(-e means 'elf', -d 'decompress')
2) Execute the kernel using the exec command as above.
--- Automating the boot process ---
After writing the kernel to flash and testing the load and exec commands
manually, you can automate the boot process with a boot script.
1) RedBoot> fconfig
(Answer the questions not specified here as they pertain to your environment)
2) Run script at boot: true
Boot script:
.. fis load -d -e kernel
.. exec
Enter script, terminate with empty line
>> fis load -d -e kernel
>> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
>>
3) Answer the remaining questions and write the changes to flash:
Update RedBoot non-volatile configuration - continue (y/n)? y
... Erase from 0xbfff0000-0xc0000000: .
... Program from 0x87ff0000-0x88000000 at 0xbfff0000: .
4) Power cycle the board.
=> ext2load usb 0:2 $loadaddr boot/vmlinux
=> bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)


@@ -1,19 +0,0 @@
BitBake is licensed under the GNU General Public License version 2.0. See COPYING for further details.
The following external components are distributed with this software:
* The Toaster Simple UI application is based upon the Django project template, the files of which are covered by the BSD license and are copyright (c) Django Software
Foundation and individual contributors.
* Twitter Bootstrap (including Glyphicons), redistributed under the MIT license
* jQuery is redistributed under the MIT license.
* Twitter typeahead.js redistributed under the MIT license. Note that the JS source has one small modification, so the full unminified file is currently included to make it obvious where this is.
* jsrender is redistributed under the MIT license.
* QUnit is redistributed under the MIT license.
* Font Awesome fonts redistributed under the SIL Open Font License 1.1
* simplediff is distributed under the zlib license.


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -23,34 +23,312 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
import optparse
import warnings
from traceback import format_exception
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
__version__ = "1.20.0"
logger = logging.getLogger("BitBake")
__version__ = "1.34.0"
# Python multiprocessing requires /dev/shm
if not os.access('/dev/shm', os.W_OK | os.X_OK):
sys.exit("FATAL: /dev/shm does not exist or is not writable")
# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
def get_ui(config):
if not config.ui:
# modify 'ui' attribute because it is also read by cooker
config.ui = os.environ.get('BITBAKE_UI', 'knotty')
interface = config.ui
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface)
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default]." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self):
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [recipename/target ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", help = "Execute tasks from a specific .bb recipe directly. WARNING: Does not handle any dependencies from other recipes.",
action = "store", dest = "buildfile", default = None)
parser.add_option("-k", "--continue", help = "Continue as much as possible after an error. While the target that failed and anything depending on it cannot be built, as much as possible will be built before stopping.",
action = "store_false", dest = "abort", default = True)
parser.add_option("-a", "--tryaltconfigs", help = "Continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option("-f", "--force", help = "Force the specified targets/task to run (invalidating any existing stamp file).",
action = "store_true", dest = "force", default = False)
parser.add_option("-c", "--cmd", help = "Specify the task to execute. The exact options available depend on the metadata. Some examples might be 'compile' or 'populate_sysroot' or 'listtasks' may give a list of the tasks available.",
action = "store", dest = "cmd")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified task such as 'compile' and then run the default task for the specified target(s).",
action = "store", dest = "invalidate_stamp")
parser.add_option("-r", "--read", help = "Read the specified file before bitbake.conf.",
action = "append", dest = "prefile", default = [])
parser.add_option("-R", "--postread", help = "Read the specified file after bitbake.conf.",
action = "append", dest = "postfile", default = [])
parser.add_option("-v", "--verbose", help = "Output more log message data to the terminal.",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", help = "Don't execute, just go through the motions.",
action = "store_true", dest = "dry_run", default = False)
parser.add_option("-S", "--dump-signatures", help = "Don't execute, just dump out the signature construction information.",
action = "store_true", dest = "dump_signatures", default = False)
parser.add_option("-p", "--parse-only", help = "Quit after parsing the BB recipes.",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", help = "Show current and preferred versions of all recipes.",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", help = "Show the global or per-package environment complete with information about where variables were set/changed.",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", help = "Save dependency tree information for the specified targets in the dot syntax.",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option("-P", "--profile", help = "Profile the command and save reports.",
action = "store_true", dest = "profile", default = False)
parser.add_option("-u", "--ui", help = "The user interface to use (e.g. knotty, hob, depexp).",
action = "store", dest = "ui")
parser.add_option("-t", "--servertype", help = "Choose which server to use, process or xmlrpc.",
action = "store", dest = "servertype")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not.",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", help = "Run bitbake without a UI, only starting a server (cooker) process.",
action = "store_true", dest = "server_only", default = False)
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to.",
action = "store", dest = "bind", default = False)
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks. sstate will be ignored and everything needed, built.",
action = "store_true", dest = "nosetscene", default = False)
parser.add_option("", "--remote-server", help = "Connect to the specified server.",
action = "store", dest = "remote_server", default = False)
parser.add_option("-m", "--kill-server", help = "Terminate the remote server.",
action = "store_true", dest = "kill_server", default = False)
parser.add_option("", "--observe-only", help = "Connect to a server as an observing-only client.",
action = "store_true", dest = "observe_only", default = False)
options, targets = parser.parse_args(sys.argv)
# some environmental variables set also configuration options
if "BBSERVER" in os.environ:
options.servertype = "xmlrpc"
options.remote_server = os.environ["BBSERVER"]
return options, targets[1:]
def start_server(servermodule, configParams, configuration):
server = servermodule.BitBakeServer()
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)))
else:
server.initServer()
try:
configuration.setServerRegIdleCallback(server.getServerIdleCB())
cooker = bb.cooker.BBCooker(configuration)
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
exc_info = sys.exc_info()
while True:
try:
import queue
except ImportError:
import Queue as queue
try:
event = server.event_queue.get(block=False)
except (queue.Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise exc_info[1], None, exc_info[2]
server.detach()
return server
def main():
configParams = BitBakeConfigParameters()
configuration = cookerdata.CookerConfiguration()
configuration.setConfigParameters(configParams)
ui_module = get_ui(configParams)
# Server type can be xmlrpc or process currently, if nothing is specified,
# the default server is process
if configParams.servertype:
server_type = configParams.servertype
else:
server_type = 'process'
try:
module = __import__("bb.server", fromlist = [server_type])
servermodule = getattr(module, server_type)
except AttributeError:
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default]." % servertype)
if configParams.server_only:
if configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
if not configParams.bind:
sys.exit("FATAL: The '--server-only' option requires a name/address to bind to with the -B option.\n")
if configParams.remote_server:
sys.exit("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ else "the '--remote-server' option" ))
if configParams.bind and configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
if configParams.remote_server and configParams.servertype != "xmlrpc":
sys.exit("FATAL: If '--remote-server' is defined, we must set the servertype as 'xmlrpc'.\n")
if configParams.observe_only and (not configParams.remote_server or configParams.bind):
sys.exit("FATAL: '--observe-only' can only be used by UI clients connecting to a server.\n")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
if not configParams.remote_server:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration)
bb.event.ui_queue = []
else:
# we start a stub server that is actually a XMLRPClient that connects to a real server
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only)
server.saveConnectionDetails(configParams.remote_server)
if not configParams.server_only:
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
# Setup a connection to the server (cooker)
server_connection = server.establishConnection(featureset)
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
try:
return ui_module.main(server_connection.connection, server_connection.events, configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("server address: %s, server port: %s" % (server.serverImpl.host, server.serverImpl.port))
return 1
if __name__ == "__main__":
if __version__ != bb.__version__:
sys.exit("Bitbake core version and program version mismatch!")
try:
sys.exit(bitbake_main(BitBakeConfigParameters(sys.argv),
cookerdata.CookerConfiguration()))
except BBMainException as err:
sys.exit(err)
ret = main()
except bb.BBHandledException:
sys.exit(1)
ret = 1
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(1)
sys.exit(ret)
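
The option handling above maps onto invocations like the following;
the flags and environment variables are the ones defined in this
file, while the target names are illustrative:

    $ bitbake -u knotty core-image-minimal                  # select a UI explicitly
    $ BITBAKE_UI=ncurses bitbake core-image-minimal         # or select it via the environment
    $ bitbake --server-only -t xmlrpc -B localhost:12345    # start a standalone xmlrpc server
    $ BBSERVER=localhost:12345 bitbake core-image-minimal   # run as a client of that server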


@@ -1,9 +1,9 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-diffsigs
# BitBake task signature data comparison utility
#
# Copyright (C) 2012-2013, 2017 Intel Corporation
# Copyright (C) 2012-2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -22,19 +22,28 @@ import os
import sys
import warnings
import fnmatch
import argparse
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-diffsigs')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False):
logger = logger_create('bitbake-diffsigs')
def find_compare_task(bbhandler, pn, taskname):
""" Find the most recent signature files for the specified PN/task and compare them """
if not hasattr(bb.siggen, 'find_siginfo'):
@@ -44,119 +53,70 @@ def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
if sig1 and sig2:
sigfiles = bb.siggen.find_siginfo(pn, taskname, [sig1, sig2], bbhandler.config_data)
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
else:
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
elif len(latestfiles) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (pn, taskname))
sys.exit(1)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
recout = []
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
return recout
return recout
# Recurse into signature comparison
logger.debug("Signature file (previous): %s" % latestfiles[-2])
logger.debug("Signature file (latest): %s" % latestfiles[-1])
output = bb.siggen.compare_sigfiles(latestfiles[-2], latestfiles[-1], recursecb, color=color)
if output:
print('\n'.join(output))
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print '\n'.join(output)
sys.exit(0)
parser = argparse.ArgumentParser(
description="Compares siginfo/sigdata files written out by BitBake")
parser = optparse.OptionParser(
description = "Compares siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile1 sigdatafile2
%prog sigdatafile1""")
parser.add_argument('-d', '--debug',
help='Enable debug output',
action='store_true')
parser.add_option("-t", "--task",
help = "find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
parser.add_argument('--color',
help='Colorize output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
parser.add_argument("-t", "--task",
help="find the signature data files for last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
options = parser.parse_args()
if options.debug:
logger.setLevel(logging.DEBUG)
color = (options.color == 'always' or (options.color == 'auto' and sys.stdout.isatty()))
options, args = parser.parse_args(sys.argv)
if options.taskargs:
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if options.sigargs:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1], color=color)
else:
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1], color=color)
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1])
else:
if options.sigargs:
logger.error('-s/--signature can only be used together with -t/--task')
sys.exit(1)
try:
if options.sigdatafile1 and options.sigdatafile2:
output = bb.siggen.compare_sigfiles(options.sigdatafile1, options.sigdatafile2, color=color)
elif options.sigdatafile1:
output = bb.siggen.dump_sigfile(options.sigdatafile1)
else:
logger.error('Must specify signature file(s) or -t/--task')
parser.print_help()
if len(args) == 1:
parser.print_help()
else:
import cPickle
try:
if len(args) == 2:
output = bb.siggen.dump_sigfile(sys.argv[1])
else:
output = bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
if output:
print('\n'.join(output))
print '\n'.join(output)
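Putting the argument handling above together, typical invocations look like the following; the recipe name, task and file paths are placeholders mirroring the usage strings, not values taken from this change:

    bitbake-diffsigs -t <recipename> do_compile
    bitbake-diffsigs -t <recipename> do_compile -s <fromsig> <tosig>
    bitbake-diffsigs <sigdatafile1> <sigdatafile2>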


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# bitbake-dumpsig
# BitBake task signature dump utility
@@ -23,72 +23,43 @@ import sys
import warnings
import optparse
import logging
import pickle
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.tinfoil
import bb.siggen
import bb.msg
logger = bb.msg.logger_create('bitbake-dumpsig')
def logger_create(name, output=sys.stderr):
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)
return logger
def find_siginfo_task(bbhandler, pn, taskname):
""" Find the most recent signature file for the specified PN/task """
if not hasattr(bb.siggen, 'find_siginfo'):
logger.error('Metadata does not support finding signature data files')
sys.exit(1)
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-1:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
return latestfiles[0]
logger = logger_create('bitbake-dumpsig')
parser = optparse.OptionParser(
description = "Dumps siginfo/sigdata files written out by BitBake",
usage = """
%prog -t recipename taskname
%prog sigdatafile""")
parser.add_option("-D", "--debug",
help = "enable debug",
action = "store_true", dest="debug", default = False)
parser.add_option("-t", "--task",
help = "find the signature data file for the specified task",
action="store", dest="taskargs", nargs=2, metavar='recipename taskname')
options, args = parser.parse_args(sys.argv)
if options.debug:
logger.setLevel(logging.DEBUG)
if options.taskargs:
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
file = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
logger.debug("Signature file: %s" % file)
elif len(args) == 1:
if len(args) == 1:
parser.print_help()
sys.exit(0)
else:
file = args[1]
import cPickle
try:
output = bb.siggen.dump_sigfile(args[1])
except IOError as e:
logger.error(str(e))
sys.exit(1)
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
try:
output = bb.siggen.dump_sigfile(file)
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
if output:
print('\n'.join(output))
if output:
print '\n'.join(output)
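The option handling above supports two forms of invocation, matching the usage string (placeholders, not values from this change):

    bitbake-dumpsig -t <recipename> <taskname>
    bitbake-dumpsig <sigdatafile>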


@@ -1,11 +1,11 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# See the help output for details on available commands.
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2011-2015 Intel Corporation
# Copyright (C) 2012 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -20,91 +20,701 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import cmd
import logging
import os
import sys
import argparse
import signal
import fnmatch
from collections import defaultdict
import re
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
import bb.tinfoil
import bb.msg
logger = bb.msg.logger_create('bitbake-layers', sys.stdout)
def main():
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
parser = argparse.ArgumentParser(
description="BitBake layers utility",
epilog="Use %(prog)s <subcommand> --help to get help on a specific command",
add_help=False)
parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
parser.add_argument('-F', '--force', help='Force add without recipe parse verification', action='store_true')
parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
global_args, unparsed_args = parser.parse_known_args()
# Help is added here rather than via add_help=True, as we don't want it to
# be handled by parse_known_args()
parser.add_argument('-h', '--help', action='help', default=argparse.SUPPRESS,
help='show this help message and exit')
subparsers = parser.add_subparsers(title='subcommands', metavar='<subcommand>')
subparsers.required = True
if global_args.debug:
logger.setLevel(logging.DEBUG)
elif global_args.quiet:
logger.setLevel(logging.ERROR)
# Need to re-run logger_create with color argument
# (will be the same logger since it has the same name)
bb.msg.logger_create('bitbake-layers', output=sys.stdout, color=global_args.color)
plugins = []
tinfoil = bb.tinfoil.Tinfoil(tracking=True)
tinfoil.logger.setLevel(logger.getEffectiveLevel())
try:
tinfoil.prepare(True)
for path in ([topdir] +
tinfoil.config_data.getVar('BBPATH').split(':')):
pluginpath = os.path.join(path, 'lib', 'bblayers')
bb.utils.load_plugins(logger, plugins, pluginpath)
registered = False
for plugin in plugins:
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")
sys.exit(1)
args = parser.parse_args(unparsed_args, namespace=global_args)
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
tinfoil.parseRecipes()
tinfoil.config_data.enableTracking()
return args.func(args)
finally:
tinfoil.shutdown()
if __name__ == "__main__":
try:
ret = main()
except bb.BBHandledException:
ret = 1
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)
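The rewritten main() above discovers its subcommands from plugin modules found in lib/bblayers directories on BBPATH, calling register_commands(subparsers) and, if present, tinfoil_init(tinfoil) on each, then dispatching via args.func(args). A minimal sketch of what such a plugin module might look like follows; the module name, subcommand name and its behaviour are hypothetical and only illustrate the hooks main() looks for:

    # lib/bblayers/hello.py -- hypothetical plugin; names are illustrative only.
    import logging

    logger = logging.getLogger('bitbake-layers')

    tinfoil = None

    def tinfoil_init(instance):
        # Called by main() above with a prepared Tinfoil instance.
        global tinfoil
        tinfoil = instance

    def do_hello_layers(args):
        """Example subcommand: print the configured layer directories."""
        for layerdir in (tinfoil.config_data.getVar('BBLAYERS') or '').split():
            logger.info(layerdir)

    def register_commands(subparsers):
        # main() dispatches via args.func(args), so register the handler as the default.
        parser = subparsers.add_parser('hello-layers',
                                       help='print configured layers (hypothetical example)')
        parser.set_defaults(func=do_hello_layers)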
logger = logging.getLogger('BitBake')
def main(args):
cmds = Commands()
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
cmds.onecmd(' '.join(args))
else:
cmds.do_help('')
return cmds.returncode
class Commands(cmd.Cmd):
def __init__(self):
cmd.Cmd.__init__(self)
self.bbhandler = bb.tinfoil.Tinfoil()
self.returncode = 0
self.bblayers = (self.bbhandler.config_data.getVar('BBLAYERS', True) or "").split()
def default(self, line):
"""Handle unrecognised commands"""
sys.stderr.write("Unrecognised command or option\n")
self.do_help('')
def do_help(self, topic):
"""display general help or help on a specified command"""
if topic:
sys.stdout.write('%s: ' % topic)
cmd.Cmd.do_help(self, topic.replace('-', '_'))
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = list(set(self.get_names()))
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
doc = getattr(self, procname).__doc__
if doc:
sys.stdout.write(" %s\n" % doc.splitlines()[0])
def do_show_layers(self, args):
"""show current configured layers"""
self.bbhandler.prepare(config_only = True)
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))
def version_str(self, pe, pv, pr = None):
verstr = "%s" % pv
if pr:
verstr = "%s-%s" % (verstr, pr)
if pe:
verstr = "%s:%s" % (pe, verstr)
return verstr
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer)
usage: show-overlayed [-f] [-s]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Note that skipped recipes that
are overlayed will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.bbhandler.prepare()
show_filenames = False
show_same_ver_only = False
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-s':
show_same_ver_only = True
else:
sys.stderr.write("show-overlayed: invalid option %s\n" % arg)
self.do_help('')
return
items_listed = self.list_recipes('Overlayed recipes', None, True, show_same_ver_only, show_filenames, True)
# Check for overlayed .bbclass files
classes = defaultdict(list)
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if os.path.exists(classdir):
for classfile in os.listdir(classdir):
if os.path.splitext(classfile)[1] == '.bbclass':
classes[classfile].append(classdir)
# Locating classes and other files is a bit more complicated than recipes -
# layer priority is not a factor; instead BitBake uses the first matching
# file in BBPATH, which is manipulated directly by each layer's
# conf/layer.conf in turn, thus the order of layers in bblayers.conf is a
# factor - however, each layer.conf is free to either prepend or append to
# BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
# not be exactly the order present in bblayers.conf either.
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
if not overlayed_class_found:
logger.plain('=== Overlayed classes ===')
overlayed_class_found = True
mainfile = bb.utils.which(bbpath, os.path.join('classes', classfile))
if show_filenames:
logger.plain('%s' % mainfile)
else:
# We effectively have to guess the layer here
logger.plain('%s:' % classfile)
mainlayername = '?'
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if mainfile.startswith(classdir):
mainlayername = self.get_layer_name(layerdir)
logger.plain(' %s' % mainlayername)
for classdir in classdirs:
fullpath = os.path.join(classdir, classfile)
if fullpath != mainfile:
if show_filenames:
print(' %s' % fullpath)
else:
print(' %s' % self.get_layer_name(os.path.dirname(classdir)))
if overlayed_class_found:
items_listed = True
if not items_listed:
logger.plain('No overlayed files found.')
def do_show_recipes(self, args):
"""list available recipes, showing the layer they are provided by
usage: show-recipes [-f] [-m] [pnspec]
Lists the names of available recipes and the available versions in each
layer, with the preferred version first. Optionally you may specify
pnspec to match a specified recipe name (supports wildcards). Note that
skipped recipes will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with other available recipes indented underneath
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.bbhandler.prepare()
show_filenames = False
show_multi_provider_only = False
pnspec = None
title = 'Available recipes:'
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-m':
show_multi_provider_only = True
elif not arg.startswith('-'):
pnspec = arg
title = 'Available recipes matching %s:' % pnspec
else:
sys.stderr.write("show-recipes: invalid option %s\n" % arg)
self.do_help('')
return
self.list_recipes(title, pnspec, False, False, show_filenames, show_multi_provider_only)
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.bbhandler.cooker.recipecache.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.bbhandler.config_data, self.bbhandler.cooker.recipecache, pkg_pn)
allproviders = bb.providers.allProviders(self.bbhandler.cooker.recipecache)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.bbhandler.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.bbhandler.cooker.collection.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
p = recipe_parts[0]
if len(recipe_parts) > 1:
ver = (None, recipe_parts[1], None)
else:
ver = (None, 'unknown', None)
allproviders[p].append((ver, fn))
if not p in pkg_pn:
pkg_pn[p] = 'dummy'
preferred_versions[p] = (ver, fn)
def print_item(f, pn, ver, layer, ispref):
if f in skiplist:
skipped = ' (skipped)'
else:
skipped = ''
if show_filenames:
if ispref:
logger.plain("%s%s", f, skipped)
else:
logger.plain(" %s%s", f, skipped)
else:
if ispref:
logger.plain("%s:", pn)
logger.plain(" %s %s%s", layer.ljust(20), ver, skipped)
preffiles = []
items_listed = False
for p in sorted(pkg_pn):
if pnspec:
if not fnmatch.fnmatch(p, pnspec):
continue
if len(allproviders[p]) > 1 or not show_multi_provider_only:
pref = preferred_versions[p]
preffile = bb.cache.Cache.virtualfn2realfn(pref[1])[0]
if preffile not in preffiles:
preflayer = self.get_file_layer(preffile)
multilayer = False
same_ver = True
provs = []
for prov in allproviders[p]:
provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
provlayer = self.get_file_layer(provfile)
provs.append((provfile, provlayer, prov[0]))
if provlayer != preflayer:
multilayer = True
if prov[0] != pref[0]:
same_ver = False
if (multilayer or not show_overlayed_only) and (same_ver or not show_same_ver_only):
if not items_listed:
logger.plain('=== %s ===' % title)
items_listed = True
print_item(preffile, p, self.version_str(pref[0][0], pref[0][1]), preflayer, True)
for (provfile, provlayer, provver) in provs:
if provfile != preffile:
print_item(provfile, p, self.version_str(provver[0], provver[1]), provlayer, False)
# Ensure we don't show two entries for BBCLASSEXTENDed recipes
preffiles.append(preffile)
return items_listed
def do_flatten(self, args):
"""flattens layer configuration into a separate output directory.
usage: flatten [layer1 layer2 [layer3]...] <outputdir>
Takes the specified layers (or all layers in the current layer
configuration if none are specified) and builds a "flattened" directory
containing the contents of all layers, with any overlayed recipes removed
and bbappends appended to the corresponding recipes. Note that some manual
cleanup may still be necessary afterwards, in particular:
* where non-recipe files (such as patches) are overwritten (the flatten
command will show a warning for these)
* where anything beyond the normal layer setup has been added to
layer.conf (only the lowest priority number layer's layer.conf is used)
* overridden/appended items from bbappends will need to be tidied up
* when the flattened layers do not have the same directory structure (the
flatten command should show a warning when this will cause a problem)
Warning: if you flatten several layers where another layer is intended to
be used "inbetween" them (in layer priority order) such that recipes /
bbappends in the layers interact, and then attempt to use the new output
layer together with that other layer, you may no longer get the same
build results (as the layer priority order has effectively changed).
"""
arglist = args.split()
if len(arglist) < 1:
logger.error('Please specify an output directory')
self.do_help('flatten')
return
if len(arglist) == 2:
logger.error('If you specify layers to flatten you must specify at least two')
self.do_help('flatten')
return
outputdir = arglist[-1]
if os.path.exists(outputdir) and os.listdir(outputdir):
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.bbhandler.prepare()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
found_layernames = []
found_layerdirs = []
for layerdir in layers:
layername = self.get_layer_name(layerdir)
if layername in layernames:
found_layerdirs.append(layerdir)
found_layernames.append(layername)
for layername in layernames:
if not layername in found_layernames:
logger.error('Unable to find layer %s in current configuration, please run "%s show-layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
return
layers = found_layerdirs
else:
layernames = []
# Ensure a specified path matches our list of layers
def layer_path_match(path):
for layerdir in layers:
if path.startswith(os.path.join(layerdir, '')):
return layerdir
return None
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.bbhandler.cooker.collection.overlayed.iterkeys():
for of in self.bbhandler.cooker.collection.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
logger.plain('Copying files from %s...' % layer )
for root, dirs, files in os.walk(layer):
for f1 in files:
f1full = os.sep.join([root, f1])
if f1full in overlayed:
logger.plain(' Skipping overlayed file %s' % f1full )
else:
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
logger.plain(' Skipping layer config file %s' % f1full )
continue
else:
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.bbhandler.cooker.collection.appendlist:
appends = self.bbhandler.cooker.collection.appendlist[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
if layer_path_match(appendname):
self.apply_append(appendname, fdest)
appended_recipes.append(f1)
# Handle the case where some layers are excluded yet bbappends for their recipes have been included
for recipename in self.bbhandler.cooker.collection.appendlist.iterkeys():
if recipename not in appended_recipes:
appends = self.bbhandler.cooker.collection.appendlist[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
if layer:
if first_append:
self.apply_append(appendname, first_append)
else:
fdest = appendname[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
bb.utils.copyfile(appendname, fdest)
first_append = fdest
# Get the regex for the first layer in our list (which is where the conf/layer.conf file will
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.bbhandler.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
newpath = os.path.join(outputdir, item[len(layerdir)+1:])
bbfiles_layer.append(newpath)
if bbfiles_layer:
# Check that all important layer files match BBFILES
for root, dirs, files in os.walk(outputdir):
for f1 in files:
ext = os.path.splitext(f1)[1]
if ext in ['.bb', '.bbappend']:
f1full = os.sep.join([root, f1])
entry_found = False
for item in bbfiles_layer:
if fnmatch.fnmatch(f1full, item):
entry_found = True
break
if not entry_found:
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return self.get_layer_name(layerdir)
return "?"
def get_file_layerdir(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return layerdir
return "?"
def remove_layer_prefix(self, f):
"""Remove the layer_dir prefix, e.g., f = /path/to/layer_dir/foo/blah, the
return value will be: layer_dir/foo/blah"""
f_layerdir = self.get_file_layerdir(f)
prefix = os.path.join(os.path.dirname(f_layerdir), '')
return f[len(prefix):] if f.startswith(prefix) else f
def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
def apply_append(self, appendname, recipename):
appendfile = open(appendname, 'r')
recipefile = open(recipename, 'a')
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
recipefile.writelines(appendfile.readlines())
recipefile.close()
appendfile.close()
def do_show_appends(self, args):
"""list bbappend files and recipe files they apply to
usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.bbhandler.prepare()
if not self.bbhandler.cooker.collection.appendlist:
logger.plain('No append files found')
return
logger.plain('=== Appended recipes ===')
pnlist = list(self.bbhandler.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.bbhandler.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.bbhandler.config_data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.bbhandler.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.plain('%s%s:', basename, name_suffix)
for append in appends:
logger.plain(' %s', append)
if best_filename:
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = [], []
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
continue
basename = os.path.basename(filename)
appends = self.bbhandler.cooker.collection.appendlist.get(basename)
if appends:
appended.append((basename, list(appends)))
else:
notappended.append(basename)
return appended, notappended
def do_show_cross_depends(self, args):
"""figure out the dependency between recipes that crosses a layer boundary.
usage: show-cross-depends [-f]
Figure out the dependency between recipes that crosses a layer boundary.
Options:
-f show full file path
NOTE:
The .bbappend files can impact the dependencies.
"""
self.bbhandler.prepare()
show_filenames = False
for arg in args.split():
if arg == '-f':
show_filenames = True
else:
sys.stderr.write("show-cross-depends: invalid option %s\n" % arg)
self.do_help('')
return
pkg_fn = self.bbhandler.cooker_data.pkg_fn
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
self.require_re = re.compile(r"require\s+(.+)")
self.include_re = re.compile(r"include\s+(.+)")
self.inherit_re = re.compile(r"inherit\s+(.+)")
# The bb's DEPENDS and RDEPENDS
for f in pkg_fn:
f = bb.cache.Cache.virtualfn2realfn(f)[0]
# Get the layername that the file is in
layername = self.get_file_layer(f)
# The DEPENDS
deps = self.bbhandler.cooker_data.deps[f]
for pn in deps:
if pn in self.bbhandler.cooker_data.pkg_pn:
best = bb.providers.findBestProvider(pn,
self.bbhandler.config_data,
self.bbhandler.cooker_data,
self.bbhandler.cooker_data.pkg_pn)
self.check_cross_depends("DEPENDS", layername, f, best[3], show_filenames)
# The RDEPENDS
all_rdeps = self.bbhandler.cooker_data.rundeps[f].values()
# Remove duplicated or empty entries.
sorted_rdeps = {}
# all_rdeps is a list of lists, so we need two for loops
for k1 in all_rdeps:
for k2 in k1:
sorted_rdeps[k2] = 1
all_rdeps = sorted_rdeps.keys()
for rdep in all_rdeps:
all_p = bb.providers.getRuntimeProviders(self.bbhandler.cooker_data, rdep)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rdep,
self.bbhandler.config_data,
self.bbhandler.cooker_data)[0][0]
self.check_cross_depends("RDEPENDS", layername, f, best, show_filenames)
# The inherit class
cls_re = re.compile('classes/')
if f in self.bbhandler.cooker_data.inherits:
inherits = self.bbhandler.cooker_data.inherits[f]
for cls in inherits:
# The inherits format is [classes/cls, /path/to/classes/cls];
# ignore the classes/cls entries.
if not cls_re.match(cls):
inherit_layername = self.get_file_layer(cls)
if inherit_layername != layername:
if not show_filenames:
f_short = self.remove_layer_prefix(f)
cls = self.remove_layer_prefix(cls)
else:
f_short = f
logger.plain("%s inherits %s" % (f_short, cls))
# The 'require/include xxx' in the bb file
pv_re = re.compile(r"\${PV}")
fnfile = open(f, 'r')
line = fnfile.readline()
while line:
m, keyword = self.match_require_include(line)
# Found the 'require/include xxxx'
if m:
needed_file = m.group(1)
# Replace the ${PV} with the real PV
if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
needed_file = re.sub(r"\${PV}", pv, needed_file)
self.print_cross_files(bbpath, keyword, layername, f, needed_file, show_filenames)
line = fnfile.readline()
fnfile.close()
# The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
conf_re = re.compile(".*/conf/machine/[^\/]*\.conf$")
inc_re = re.compile(".*\.inc$")
# The "inherit xxx" in .bbclass
bbclass_re = re.compile(".*\.bbclass$")
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
for dirpath, dirnames, filenames in os.walk(layerdir):
for name in filenames:
f = os.path.join(dirpath, name)
s = conf_re.match(f) or inc_re.match(f) or bbclass_re.match(f)
if s:
ffile = open(f, 'r')
line = ffile.readline()
while line:
m, keyword = self.match_require_include(line)
# Only .bbclass files have the "inherit xxx" here.
bbclass=""
if not m and f.endswith(".bbclass"):
m, keyword = self.match_inherit(line)
bbclass=".bbclass"
# Find a 'require/include xxxx'
if m:
self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, show_filenames)
line = ffile.readline()
ffile.close()
def print_cross_files(self, bbpath, keyword, layername, f, needed_filename, show_filenames):
"""Print the depends that crosses a layer boundary"""
needed_file = bb.utils.which(bbpath, needed_filename)
if needed_file:
# Which layer is this file from
needed_layername = self.get_file_layer(needed_file)
if needed_layername != layername:
if not show_filenames:
f = self.remove_layer_prefix(f)
needed_file = self.remove_layer_prefix(needed_file)
logger.plain("%s %s %s" %(f, keyword, needed_file))
def match_inherit(self, line):
"""Match the inherit xxx line"""
return (self.inherit_re.match(line), "inherits")
def match_require_include(self, line):
"""Match the require/include xxx line"""
m = self.require_re.match(line)
keyword = "requires"
if not m:
m = self.include_re.match(line)
keyword = "includes"
return (m, keyword)
def check_cross_depends(self, keyword, layername, f, needed_file, show_filenames):
"""Print the DEPENDS/RDEPENDS file that crosses a layer boundary"""
best_realfn = bb.cache.Cache.virtualfn2realfn(needed_file)[0]
needed_layername = self.get_file_layer(best_realfn)
if needed_layername != layername:
if not show_filenames:
f = self.remove_layer_prefix(f)
best_realfn = self.remove_layer_prefix(best_realfn)
logger.plain("%s %s %s" % (f, keyword, best_realfn))
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys,logging
import optparse
@@ -50,6 +50,6 @@ if __name__ == "__main__":
except Exception:
ret = 1
import traceback
traceback.print_exc()
traceback.print_exc(5)
sys.exit(ret)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
#
# Copyright (C) 2012 Richard Purdie
#
@@ -25,48 +25,14 @@ try:
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
for t in tests:
t = '.'.join(t.split('.')[:3])
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests)
# Set-up logging
class StdoutStreamHandler(logging.StreamHandler):
"""Special handler so that unittest is able to capture stdout"""
def __init__(self):
# Override __init__() because we don't want to set self.stream here
logging.Handler.__init__(self)
@property
def stream(self):
# We want to dynamically write wherever sys.stdout is pointing to
return sys.stdout
handler = StdoutStreamHandler()
bb.logger.addHandler(handler)
bb.logger.setLevel(logging.DEBUG)
ENV_HELP = """\
Environment variables:
BB_SKIP_NETTESTS set to 'yes' in order to skip tests using network
connection
BB_TMPDIR_NOCLEAN set to 'yes' to preserve test tmp directories
"""
class main(unittest.main):
def _print_help(self, *args, **kwargs):
super(main, self)._print_help(*args, **kwargs)
print(ENV_HELP)
if __name__ == '__main__':
main(defaultTest=tests, buffer=True)


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys
@@ -10,48 +10,23 @@ import bb
import select
import errno
import signal
import pickle
import traceback
import queue
from multiprocessing import Lock
from threading import Thread
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
if len(sys.argv) != 2 or sys.argv[1] != "decafbad":
print("bitbake-worker is meant for internal execution by bitbake itself, please don't use it standalone.")
sys.exit(1)
profiling = False
if sys.argv[1].startswith("decafbadbad"):
profiling = True
try:
import cProfile as profile
except:
import profile
# Unbuffer stdout to avoid log truncation in the event
# of an unclean exit as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
import fcntl
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
#sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
pass
logger = logging.getLogger("BitBake")
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
worker_pipe = sys.stdout.fileno()
bb.utils.nonblockingfd(worker_pipe)
# Need to guard against multiprocessing being used in child processes
# and multiple processes trying to write to the parent at the same time
worker_pipe_lock = None
handler = bb.event.LogHandler()
logger.addHandler(handler)
@@ -66,61 +41,36 @@ if 0:
consolelog.setFormatter(conlogformat)
logger.addHandler(consolelog)
worker_queue = queue.Queue()
worker_queue = ""
def worker_fire(event, d):
data = b"<event>" + pickle.dumps(event) + b"</event>"
data = "<event>" + pickle.dumps(event) + "</event>"
worker_fire_prepickled(data)
def worker_fire_prepickled(event):
global worker_queue
worker_queue.put(event)
worker_queue = worker_queue + event
worker_flush()
#
# We can end up with write contention with the cooker, it can be trying to send commands
# and we can be trying to send event data back. Therefore use a separate thread for writing
# back data to cooker.
#
worker_thread_exit = False
def worker_flush():
global worker_queue, worker_pipe
def worker_flush(worker_queue):
worker_queue_int = b""
global worker_pipe, worker_thread_exit
if not worker_queue:
return
while True:
try:
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
raise
if worker_thread_exit and worker_queue.empty() and not worker_queue_int:
return
worker_thread = Thread(target=worker_flush, args=(worker_queue,))
worker_thread.start()
try:
written = os.write(worker_pipe, worker_queue)
worker_queue = worker_queue[written:]
except (IOError, OSError) as e:
if e.errno != errno.EAGAIN:
raise
def worker_child_fire(event, d):
global worker_pipe
global worker_pipe_lock
data = b"<event>" + pickle.dumps(event) + b"</event>"
try:
worker_pipe_lock.acquire()
worker_pipe.write(data)
worker_pipe_lock.release()
except IOError:
sigterm_handler(None, None)
raise
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
bb.event.worker_fire = worker_fire
@@ -131,12 +81,7 @@ def workerlog_write(msg):
lf.write(msg)
lf.flush()
def sigterm_handler(signum, frame):
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, appends, taskdepdata, extraconfigdata, quieterrors=False, dry_run_exec=False):
def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -152,10 +97,8 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
except TypeError:
umask = taskdep['umask'][taskname]
dry_run = cfg.dry_run or dry_run_exec
# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not cfg.dry_run:
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -183,32 +126,20 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
pipeout = os.fdopen(pipeout, 'wb', 0)
pid = os.fork()
except OSError as e:
logger.critical("fork failed: %d (%s)" % (e.errno, e.strerror))
sys.exit(1)
bb.msg.fatal("RunQueue", "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
def child():
global worker_pipe
global worker_pipe_lock
pipein.close()
signal.signal(signal.SIGTERM, sigterm_handler)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
bb.utils.signal_on_parent_exit("SIGTERM")
# Save out the PID so that events can include it
bb.event.worker_pid = os.getpid()
bb.event.worker_fire = worker_child_fire
worker_pipe = pipeout
worker_pipe_lock = Lock()
# Make the child the process group leader and ensure no
# child process will be controlled by the current terminal
# This ensures signals sent to the controlling terminal like Ctrl+C
# don't stop the child processes.
os.setsid()
# Make the child the process group leader
os.setpgid(0, 0)
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
@@ -216,75 +147,46 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, append
if umask:
os.umask(umask)
data.setVar("BB_WORKERCONTEXT", "1")
data.setVar("BUILDNAME", workerdata["buildname"])
data.setVar("DATE", workerdata["date"])
data.setVar("TIME", workerdata["time"])
bb.parse.siggen.set_taskdata(workerdata["hashes"], workerdata["hash_deps"], workerdata["sigchecksums"])
ret = 0
try:
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
the_data.setVar("DATE", workerdata["date"])
the_data.setVar("TIME", workerdata["time"])
for varname, value in extraconfigdata.items():
the_data.setVar(varname, value)
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
the_data = bb_cache.loadDataFull(fn, appends)
the_data = bb.cache.Cache.loadDataFull(fn, appends, data)
the_data.setVar('BB_TASKHASH', workerdata["runq_hash"][task])
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
for h in workerdata["hashes"]:
the_data.setVar("BBHASH_%s" % h, workerdata["hashes"][h])
for h in workerdata["hash_deps"]:
the_data.setVar("BBHASHDEPS_%s" % h, workerdata["hash_deps"][h])
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
bb.utils.empty_environment()
for e, v in exports:
os.environ[e] = v
for e in fakeenv:
os.environ[e] = fakeenv[e]
the_data.setVar(e, fakeenv[e])
the_data.setVarFlag(e, 'export', "1")
task_exports = the_data.getVarFlag(taskname, 'exports')
if task_exports:
for e in task_exports.split():
the_data.setVarFlag(e, 'export', '1')
v = the_data.getVar(e)
if v is not None:
os.environ[e] = v
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
except Exception:
except Exception as exc:
if not quieterrors:
logger.critical(traceback.format_exc())
logger.critical(str(exc))
os._exit(1)
try:
if dry_run:
return 0
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
if not cfg.dry_run:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
os._exit(ret)
except:
os._exit(1)
if not profiling:
os._exit(child())
else:
profname = "profile-%s.log" % (fn.replace("/", "-") + "-" + taskname)
prof = profile.Profile()
try:
ret = profile.Profile.runcall(prof, child)
finally:
prof.dump_stats(profname)
bb.utils.process_profilelog(profname)
os._exit(ret)
else:
for key, value in iter(envbackup.items()):
for key, value in envbackup.iteritems():
if value is None:
del os.environ[key]
else:
@@ -301,22 +203,22 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
def read(self):
start = len(self.queue)
try:
self.queue = self.queue + (self.input.read(102400) or b"")
self.queue = self.queue + self.input.read(102400)
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
end = len(self.queue)
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
while index != -1:
worker_fire_prepickled(self.queue[:index+8])
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
index = self.queue.find("</event>")
return (end > start)
def close(self):
@@ -332,67 +234,44 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = b""
self.queue = ""
self.cookercfg = None
self.databuilder = None
self.data = None
self.extraconfigdata = None
self.build_pids = {}
self.build_pipes = {}
signal.signal(signal.SIGTERM, self.sigterm_exception)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, self.sigterm_exception)
if "beef" in sys.argv[1]:
bb.utils.set_process_name("Worker (Fakeroot)")
else:
bb.utils.set_process_name("Worker")
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Worker received SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Worker received SIGHUP, shutting down...")
self.handle_finishnow(None)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.kill(os.getpid(), signal.SIGTERM)
def serve(self):
while True:
(ready, _, _) = select.select([self.input] + [i.input for i in self.build_pipes.values()], [] , [], 1)
if self.input in ready:
if self.input in ready or len(self.queue):
start = len(self.queue)
try:
r = self.input.read()
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue = self.queue + r
self.queue = self.queue + self.input.read()
except (OSError, IOError):
pass
if len(self.queue):
self.handle_item(b"cookerconfig", self.handle_cookercfg)
self.handle_item(b"extraconfigdata", self.handle_extraconfigdata)
self.handle_item(b"workerdata", self.handle_workerdata)
self.handle_item(b"runtask", self.handle_runtask)
self.handle_item(b"finishnow", self.handle_finishnow)
self.handle_item(b"ping", self.handle_ping)
self.handle_item(b"quit", self.handle_quit)
end = len(self.queue)
self.handle_item("cookerconfig", self.handle_cookercfg)
self.handle_item("workerdata", self.handle_workerdata)
self.handle_item("runtask", self.handle_runtask)
self.handle_item("finishnow", self.handle_finishnow)
self.handle_item("ping", self.handle_ping)
self.handle_item("quit", self.handle_quit)
for pipe in self.build_pipes:
if self.build_pipes[pipe].input in ready:
self.build_pipes[pipe].read()
self.build_pipes[pipe].read()
if len(self.build_pids):
while self.process_waitpid():
continue
self.process_waitpid()
worker_flush()
def handle_item(self, item, func):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
if self.queue.startswith("<" + item + ">"):
index = self.queue.find("</" + item + ">")
while index != -1:
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
index = self.queue.find("</" + item + ">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
@@ -400,22 +279,18 @@ class BitbakeWorker(object):
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
self.extraconfigdata = pickle.loads(data)
def handle_workerdata(self, data):
self.workerdata = pickle.loads(data)
bb.msg.loggerDefaultDebugLevel = self.workerdata["logdefaultdebug"]
bb.msg.loggerDefaultVerbose = self.workerdata["logdefaultverbose"]
bb.msg.loggerVerboseLogs = self.workerdata["logdefaultverboselogs"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
self.data.setVar("PRSERV_HOST", self.workerdata["prhost"])
def handle_ping(self, _):
workerlog_write("Handling ping\n")
logger.warning("Pong from bitbake-worker!")
logger.warn("Pong from bitbake-worker!")
def handle_quit(self, data):
workerlog_write("Handling quit\n")
@@ -425,10 +300,10 @@ class BitbakeWorker(object):
sys.exit(0)
def handle_runtask(self, data):
fn, task, taskname, quieterrors, appends, taskdepdata, dry_run_exec = pickle.loads(data)
fn, task, taskname, quieterrors, appends = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, appends, taskdepdata, self.extraconfigdata, quieterrors, dry_run_exec)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.workerdata, fn, task, taskname, appends, quieterrors)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)
@@ -441,9 +316,9 @@ class BitbakeWorker(object):
try:
pid, status = os.waitpid(-1, os.WNOHANG)
if pid == 0 or os.WIFSTOPPED(status):
return False
return None
except OSError:
return False
return None
workerlog_write("Exit code of %s for pid %s\n" % (status, pid))
@@ -460,14 +335,12 @@ class BitbakeWorker(object):
self.build_pipes[pid].close()
del self.build_pipes[pid]
worker_fire_prepickled(b"<exitcode>" + pickle.dumps((task, status)) + b"</exitcode>")
return True
worker_fire_prepickled("<exitcode>" + pickle.dumps((task, status)) + "</exitcode>")
def handle_finishnow(self, _):
if self.build_pids:
logger.info("Sending SIGTERM to remaining %s tasks", len(self.build_pids))
for k, v in iter(self.build_pids.items()):
for k, v in self.build_pids.iteritems():
try:
os.kill(-k, signal.SIGTERM)
os.waitpid(-1, 0)
@@ -477,26 +350,15 @@ class BitbakeWorker(object):
self.build_pipes[pipe].read()
try:
worker = BitbakeWorker(os.fdopen(sys.stdin.fileno(), 'rb'))
if not profiling:
worker.serve()
else:
profname = "profile-worker.log"
prof = profile.Profile()
try:
profile.Profile.runcall(prof, worker.serve)
finally:
prof.dump_stats(profname)
bb.utils.process_profilelog(profname)
worker = BitbakeWorker(sys.stdin)
worker.serve()
except BaseException as e:
if not normalexit:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
worker_thread_exit = True
worker_thread.join()
while len(worker_queue):
worker_flush()
workerlog_write("exitting")
sys.exit(0)
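The worker and the cooker exchange pickled objects framed as <item>...</item> byte sequences, which handle_item() above strips apart. A minimal sketch of that framing scheme follows; the payload and function names are hypothetical and only illustrate the layout, not BitBake's own helpers:

    # Sketch of the <tag>pickle</tag> framing parsed by handle_item() above.
    import pickle

    def frame(item, obj):
        """Wrap a pickled object in <item>...</item> markers (bytes)."""
        return b"<" + item + b">" + pickle.dumps(obj) + b"</" + item + b">"

    def unframe(queue, item):
        """Extract and unpickle one framed message from a byte queue, if present."""
        start = b"<" + item + b">"
        end = b"</" + item + b">"
        if queue.startswith(start):
            index = queue.find(end)
            if index != -1:
                payload = pickle.loads(queue[len(start):index])
                return payload, queue[index + len(end):]
        return None, queue

    # Example round trip with a made-up payload:
    queue = frame(b"workerdata", {"prhost": None})
    payload, queue = unframe(queue, b"workerdata")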


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -462,7 +462,7 @@ def main():
state_group = 2
for key in bb.data.keys(documentation):
data = documentation.getVarFlag(key, "doc", False)
data = documentation.getVarFlag(key, "doc")
if not data:
continue


@@ -1,165 +0,0 @@
#!/usr/bin/env python3
"""git-make-shallow: make the current git repository shallow
Remove the history of the specified revisions, then optionally filter the
available refs to those specified.
"""
import argparse
import collections
import errno
import itertools
import os
import subprocess
import sys
version = 1.0
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
args = process_args()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(['git', 'fsck', '--unreachable'])
def process_args():
# TODO: add argument to automatically keep local-only refs, since they
# can't be easily restored with a git fetch.
parser = argparse.ArgumentParser(description='Remove the history of the specified revisions, then optionally filter the available refs to those specified.')
parser.add_argument('--ref', '-r', metavar='REF', action='append', dest='refs', help='remove all but the specified refs (cumulative)')
parser.add_argument('--shrink', '-s', action='store_true', help='shrink the git repository by repacking and pruning')
parser.add_argument('revisions', metavar='REVISION', nargs='+', help='a git revision/commit')
if len(sys.argv) < 2:
parser.print_help()
sys.exit(2)
args = parser.parse_args()
if args.refs:
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
def check_output(cmd, input=None):
return subprocess.check_output(cmd, universal_newlines=True, input=input)
def make_shallow(shallow_file, revisions, refs):
"""Remove the history of the specified revisions."""
for rev in follow_history_intersections(revisions, refs):
print("Processing %s" % rev)
with open(shallow_file, 'a') as f:
f.write(rev + '\n')
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
refs = [r[0] for r in ref_split]
return refs
def iter_extend(iterable, length, obj=None):
"""Ensure that iterable is the specified length by extending with obj."""
return itertools.islice(itertools.chain(iterable, itertools.repeat(obj)), length)
def filter_refs(refs):
"""Remove all but the specified refs from the git repository."""
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
def follow_history_intersections(revisions, refs):
"""Determine all the points where the history of the specified revisions intersects the specified refs."""
queue = collections.deque(revisions)
seen = set()
for rev in iter_except(queue.popleft, IndexError):
if rev in seen:
continue
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
if not parents:
continue
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
queue.append(merge_base)
def iter_except(func, exception, start=None):
"""Yield a function repeatedly until it raises an exception."""
try:
if start is not None:
yield start()
while True:
yield func()
except exception:
pass
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':
main()
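As a hedged example of how the options above combine (the revision and ref are placeholders):

    git-make-shallow --shrink --ref refs/heads/master HEAD~20

This would drop history older than HEAD~20, remove every ref except refs/heads/master, then repack and prune the repository.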

bitbake/bin/image-writer Executable file

@@ -0,0 +1,122 @@
#!/usr/bin/env python
# Copyright (c) 2012 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname( \
os.path.abspath(__file__))), 'lib'))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
import gtk
import optparse
import pygtk
from bb.ui.crumbs.hobwidget import HobAltButton, HobButton
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog
from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog
# All the filesystem types BitBake supports for deployment. Needs more testing.
DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "btrfs", "squashfs", "ubi", "vmdk"]
Title = "USB Image Writer"
class DeployWindow(gtk.Window):
def __init__(self, image_path=''):
super(DeployWindow, self).__init__()
if len(image_path) > 0:
valid = True
if not os.path.exists(image_path):
valid = False
lbl = "<b>Invalid image file path: %s.</b>\nPress <b>Select Image</b> to select an image." % image_path
else:
image_path = os.path.abspath(image_path)
extend_name = os.path.splitext(image_path)[1][1:]
if extend_name not in DEPLOYABLE_IMAGE_TYPES:
valid = False
lbl = "<b>Undeployable imge type: %s</b>\nPress <b>Select Image</b> to select an image." % extend_name
if not valid:
image_path = ''
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
self.deploy_dialog = DeployImageDialog(Title, image_path, self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR, None, standalone=True)
close_button = self.deploy_dialog.add_button("Close", gtk.RESPONSE_NO)
HobAltButton.style_button(close_button)
close_button.connect('clicked', gtk.main_quit)
write_button = self.deploy_dialog.add_button("Write USB image", gtk.RESPONSE_YES)
HobAltButton.style_button(write_button)
self.deploy_dialog.connect('select_image_clicked', self.select_image_clicked_cb)
self.deploy_dialog.connect('destroy', gtk.main_quit)
response = self.deploy_dialog.show()
def select_image_clicked_cb(self, dialog):
cwd = os.getcwd()
dialog = ImageSelectionDialog(cwd, DEPLOYABLE_IMAGE_TYPES, Title, self, gtk.FILE_CHOOSER_ACTION_SAVE )
button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
HobAltButton.style_button(button)
button = dialog.add_button("Open", gtk.RESPONSE_YES)
HobAltButton.style_button(button)
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nClicked the radio button to select a image."
crumbs_dialog = CrumbsMessageDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
button = crumbs_dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
# get the full path of image
image_path = os.path.join(dialog.image_folder, dialog.image_names[0])
self.deploy_dialog.set_image_text_buffer(image_path)
self.deploy_dialog.set_image_path(image_path)
dialog.destroy()
def main():
parser = optparse.OptionParser(
usage = """%prog [-h] [image_file]
%prog writes bootable images to USB devices. You can
provide the image file on the command line or select it using the GUI.""")
options, args = parser.parse_args(sys.argv)
image_file = args[1] if len(args) > 1 else ''
dw = DeployWindow(image_file)
if __name__ == '__main__':
try:
main()
gtk.main()
except Exception:
import traceback
traceback.print_exc(3)
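
The DeployWindow constructor above accepts an image only when the file exists and its extension appears in DEPLOYABLE_IMAGE_TYPES. A small standalone sketch of that validation rule (the helper name and sample path are illustrative only):

import os

DEPLOYABLE_IMAGE_TYPES = ["jffs2", "cramfs", "ext2", "ext3", "btrfs", "squashfs", "ubi", "vmdk"]

def is_deployable(image_path):
    # Mirror the check in DeployWindow.__init__: the file must exist and its
    # extension (without the leading dot) must be a supported image type.
    if not os.path.exists(image_path):
        return False
    extension = os.path.splitext(image_path)[1][1:]   # "/x/y.ext3" -> "ext3"
    return extension in DEPLOYABLE_IMAGE_TYPES

print(is_deployable("/tmp/core-image-minimal.ext3"))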

View File

@@ -1,261 +0,0 @@
#!/bin/echo ERROR: This script needs to be sourced. Please run as .
# toaster - shell script to start Toaster
# Copyright (C) 2013-2015 Intel Corp.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
HELP="
Usage: source toaster start|stop [webport=<address:port>] [noweb]
Optional arguments:
[noweb] Set up the environment for building with Toaster but do not start the development web server
[webport] Set the address and port for the development web server (default: localhost:8000)
"
webserverKillAll()
{
local pidfile
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM -$pid 2>/dev/null
sleep 1
done
rm ${pidfile}
fi
done
}
webserverStartAll()
{
# do not start if toastermain points to a valid process
if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
fi
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
export TEMPLATECONF
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; aborting\n"
return $retval
fi
echo "Starting webserver..."
$MANAGE runserver "$ADDR_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
retval=1
rm "${BUILDDIR}/.toastermain.pid"
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
fi
return $retval
}
INSTOPSYSTEM=0
# define the stop command
stop_system()
{
# prevent reentry
if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
INSTOPSYSTEM=1
webserverKillAll
# unset exported variables
unset TOASTER_DIR
unset BITBAKE_UI
unset BBBASEDIR
trap - SIGHUP
#trap - SIGCHLD
INSTOPSYSTEM=0
}
verify_prereq() {
# Verify Django version
reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
exp=$exp'import sys,django;version=django.get_version().split(".");'
exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
if ! sed -n "$exp" $reqfile | python3 - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
echo "Please install with pip3 install -r $reqfile"
return 2
fi
return 0
}
# read command line parameters
if [ -n "$BASH_SOURCE" ] ; then
TOASTER=${BASH_SOURCE}
elif [ -n "$ZSH_NAME" ] ; then
TOASTER=${(%):-%x}
else
TOASTER=$0
fi
export BBBASEDIR=`dirname $TOASTER`/..
MANAGE="python3 $BBBASEDIR/lib/toaster/manage.py"
OE_ROOT=`dirname $TOASTER`/../..
# this is the configuration file we are using for toaster
# we are using the same logic that oe-setup-builddir uses
# (based on TEMPLATECONF and .templateconf) to determine
# which toasterconf.json to use.
# note: There are a number of relative path assumptions
# in the local layers that currently make using an arbitrary
# toasterconf.json difficult.
. $OE_ROOT/.templateconf
if [ -n "$TEMPLATECONF" ]; then
if [ ! -d "$TEMPLATECONF" ]; then
# Allow TEMPLATECONF=meta-xyz/conf as a shortcut
if [ -d "$OE_ROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OE_ROOT/$TEMPLATECONF"
fi
fi
fi
unset OE_ROOT
WEBSERVER=1
ADDR_PORT="localhost:8000"
unset CMD
for param in $*; do
case $param in
noweb )
WEBSERVER=0
;;
start )
CMD=$param
;;
stop )
CMD=$param
;;
webport=*)
ADDR_PORT="${param#*=}"
# Split the addr:port string
ADDR=`echo $ADDR_PORT | cut -f 1 -d ':'`
PORT=`echo $ADDR_PORT | cut -f 2 -d ':'`
# If only a port has been specified then set the address to localhost.
if [ $ADDR = $PORT ] ; then
ADDR_PORT="localhost:$PORT"
fi
;;
--help)
echo "$HELP"
return 0
;;
*)
echo "$HELP"
return 1
;;
esac
done
if [ `basename \"$0\"` = `basename \"${TOASTER}\"` ]; then
echo "Error: This script needs to be sourced. Please run as . $TOASTER"
return 1
fi
verify_prereq || return 1
# We make sure we're running in the current shell and in a good environment
if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
echo "Error: Build environment is not setup or bitbake is not in path." 1>&2
return 2
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=`dirname $BUILDDIR`
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
if [ -n "$BBSERVER" ]; then
echo " Toaster is already running. Exiting..."
return 1
fi
elif [ "$CMD" = "" ]; then
echo "No command specified"
echo "$HELP"
return 1
fi
echo "The system will $CMD."
# Execute the commands
case $CMD in
start )
# check if addr:port is not in use
if [ "$CMD" == 'start' ]; then
if [ $WEBSERVER -gt 0 ]; then
$MANAGE checksocket "$ADDR_PORT" || return 1
fi
fi
# Create configuration file
conf=${BUILDDIR}/conf/local.conf
line='INHERIT+="toaster buildhistory"'
grep -q "$line" $conf || echo $line >> $conf
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
$MANAGE runbuilds \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
return 0
;;
stop )
stop_system
echo "Successful ${CMD}."
;;
esac
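
The verify_prereq() function in the script above builds a sed expression that turns the Django line of toaster-requirements.txt into a Python version comparison. A rough pure-Python equivalent of that check is sketched below; the function name and the example requirement string are illustrative, and it compares dotted versions as string lists, exactly as the sed-generated code does:

import re

import django    # assumes Django is installed, which is what verify_prereq() checks

def django_satisfies(requirement):
    # Mirror the sed-generated check: parse "Django<op>A,<op>B" and compare
    # the installed version, split on dots, against both bounds.
    match = re.match(r'Django([><=]+)([^,]+),([><=]+)(.+)', requirement)
    if not match:
        return False
    op1, low, op2, high = match.groups()
    compare = {'>': lambda a, b: a > b, '>=': lambda a, b: a >= b,
               '<': lambda a, b: a < b, '<=': lambda a, b: a <= b,
               '==': lambda a, b: a == b}
    version = django.get_version().split('.')
    return (compare[op1](version, low.split('.')) and
            compare[op2](version, high.split('.')))

# Example requirement string; the real bound lives in toaster-requirements.txt.
print(django_satisfies("Django>1.8,<1.12"))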

View File

@@ -1,126 +0,0 @@
#!/usr/bin/env python3
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2014 Alex Damian
#
# This file re-uses code spread throughout other Bitbake source files.
# As such, all other copyrights belong to their own right holders.
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This command takes a filename as a single parameter. The filename is read
as a build eventlog, and the ToasterUI is used to process events in the file
and log data in the database
"""
import os
import sys
import json
import pickle
import codecs

from collections import namedtuple

# mangle syspath to allow easy import of modules
from os.path import join, dirname, abspath
sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))

import bb.cooker
from bb.ui import toasterui

class EventPlayer:
    """Emulate a connection to a bitbake server."""

    def __init__(self, eventfile, variables):
        self.eventfile = eventfile
        self.variables = variables
        self.eventmask = []

    def waitEvent(self, _timeout):
        """Read event from the file."""
        line = self.eventfile.readline().strip()
        if not line:
            return
        try:
            event_str = json.loads(line)['vars'].encode('utf-8')
            event = pickle.loads(codecs.decode(event_str, 'base64'))
            event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
            if event_name not in self.eventmask:
                return
            return event
        except ValueError as err:
            print("Failed loading ", line)
            raise err

    def runCommand(self, command_line):
        """Emulate running a command on the server."""
        name = command_line[0]
        if name == "getVariable":
            var_name = command_line[1]
            variable = self.variables.get(var_name)
            if variable:
                return variable['v'], None
            return None, "Missing variable %s" % var_name

        elif name == "getAllKeysWithFlags":
            dump = {}
            flaglist = command_line[1]
            for key, val in self.variables.items():
                try:
                    if not key.startswith("__"):
                        dump[key] = {
                            'v': val['v'],
                            'history' : val['history'],
                        }
                        for flag in flaglist:
                            dump[key][flag] = val[flag]
                except Exception as err:
                    print(err)
            return (dump, None)

        elif name == 'setEventMask':
            self.eventmask = command_line[-1]
            return True, None

        else:
            raise Exception("Command %s not implemented" % command_line[0])

    def getEventHandle(self):
        """
        This method is called by toasterui.
        The return value is passed to self.runCommand but not used there.
        """
        pass

def main(argv):
    with open(argv[-1]) as eventfile:
        # load variables from the first line
        variables = json.loads(eventfile.readline().strip())['allvariables']

        params = namedtuple('ConfigParams', ['observe_only'])(True)
        player = EventPlayer(eventfile, variables)

        return toasterui.main(player, player, params)

# run toaster ui on our mock bitbake class
if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: %s <event file>" % os.path.basename(sys.argv[0]))
        sys.exit(1)

    sys.exit(main(sys.argv))
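
EventPlayer.waitEvent() above expects every line after the first to be a JSON object whose 'vars' field holds a base64-encoded pickle of a BitBake event. A minimal sketch of producing such a line (DummyEvent is a stand-in; real event logs carry bb.event classes):

import codecs
import json
import pickle

class DummyEvent:
    # Stand-in for a bb.event object; used only to show the encoding round trip.
    pass

event_str = codecs.encode(pickle.dumps(DummyEvent()), 'base64').decode('utf-8')
line = json.dumps({'vars': event_str})
# waitEvent() reverses this: json.loads(line)['vars'] -> base64-decode -> pickle.loads()
print(line)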

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
@@ -29,14 +29,14 @@ import warnings
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
import pickle as pickle
import cPickle as pickle
def main(argv=None):
"""
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print("Error, need one argument!", file=sys.stderr)
print >>sys.stderr, "Error, need one argument!"
return 2
cachefile = argv[0]
@@ -56,7 +56,7 @@ def main(argv=None):
continue
# 1.0 is the default version for a no PV recipe.
if "pv" in val.__dict__:
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"

View File

@@ -17,7 +17,7 @@ endif
fun! <SID>GetUserName()
let l:user_name = system("git config --get user.name")
if v:shell_error
return "Unknown User"
return "Unknow User"
else
return substitute(l:user_name, "\n", "", "")
endfun
@@ -53,6 +53,7 @@ fun! NewBBTemplate()
put ='LICENSE = \"\"'
put ='SECTION = \"\"'
put ='DEPENDS = \"\"'
put ='PR = \"r0\"'
put =''
put ='SRC_URI = \"\"'

View File

@@ -1,91 +0,0 @@
# This is a single Makefile to handle all generated BitBake documents.
# The Makefile needs to live in the documentation directory and all figures used
# in any manuals must be .PNG files and live in the individual book's figures
# directory.
#
# The Makefile has these targets:
#
# pdf: generates a PDF version of a manual.
# html: generates an HTML version of a manual.
# tarball: creates a tarball for the doc files.
# validate: validates the manual's XML source.
# clean: removes generated files.
#
# The Makefile generates an HTML version of every document. The
# variable DOC indicates the folder name for a given manual.
#
# To build a manual, you must invoke 'make' with the DOC argument.
#
# Examples:
#
# make DOC=bitbake-user-manual
# make pdf DOC=bitbake-user-manual
#
# The first example generates the HTML version of the User Manual.
# The second example generates the PDF version of the User Manual.
#
ifeq ($(DOC),bitbake-user-manual)
XSLTOPTS = --stringparam html.stylesheet bitbake-user-manual-style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html tarball
TARFILES = bitbake-user-manual-style.css bitbake-user-manual.html figures/bitbake-title.png
MANUALS = $(DOC)/$(DOC).html
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif
##
# These URIs should be rewritten by your distribution's XML catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
all: $(ALLPREQ)
pdf:
ifeq ($(DOC),bitbake-user-manual)
@echo " "
@echo "********** Building."$(DOC)
@echo " "
cd $(DOC); ../tools/docbook-to-pdf $(DOC).xml ../template; cd ..
endif
html:
ifeq ($(DOC),bitbake-user-manual)
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
@echo " "
@echo "******** Building "$(DOC)
@echo " "
cd $(DOC); xsltproc $(XSLTOPTS) -o $(DOC).html $(DOC)-customization.xsl $(DOC).xml; cd ..
endif
tarball: html
@echo " "
@echo "******** Creating Tarball of document files"
@echo " "
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..
validate:
cd $(DOC); xmllint --postvalid --xinclude --noout $(DOC).xml; cd ..
publish:
@if test -f $(DOC)/$(DOC).html; \
then \
echo " "; \
echo "******** Publishing "$(DOC)".html"; \
echo " "; \
scp -r $(MANUALS) $(STYLESHEET) docs.yp:/var/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
cd $(DOC); scp -r $(FIGURES) docs.yp:/var/www/www.yoctoproject.org-docs/$(VER)/$(DOC); \
else \
echo " "; \
echo $(DOC)".html missing. Generate the file first then try again."; \
echo " "; \
fi
clean:
rm -rf $(MANUALS); rm $(DOC)/$(DOC).tgz;

View File

@@ -1,39 +0,0 @@
Documentation
=============
This is the directory that contains the BitBake documentation.
Manual Organization
===================
Folders exist for individual manuals as follows:
* bitbake-user-manual - The BitBake User Manual
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
go to http://www.openembedded.org/wiki/Documentation.
Makefile
========
The Makefile processes manual directories to create HTML, PDF,
tarballs, etc. Details on how the Makefile works are documented
inside the Makefile itself. See that file for more information.
To build a manual, you run the make command and pass it the name
of the folder containing the manual's contents.
For example, the following command, run from the documentation directory,
creates an HTML version of the BitBake User Manual and a tarball of the
document files. The DOC variable specifies the manual you are making:
$ make DOC=bitbake-user-manual
template
========
Contains various templates, fonts, and some old PNG files.
tools
=====
Contains a tool to convert the DocBook files to PDF format.

View File

@@ -1,29 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<!--
<xsl:import href="../template/1.76.1/docbook-xsl-1.76.1/xhtml/docbook.xsl" />
<xsl:import href="http://docbook.sourceforge.net/release/xsl/1.76.1/xhtml/docbook.xsl" />
-->
<xsl:include href="../template/permalinks.xsl"/>
<xsl:include href="../template/section.title.xsl"/>
<xsl:include href="../template/component.title.xsl"/>
<xsl:include href="../template/division.title.xsl"/>
<xsl:include href="../template/formal.object.heading.xsl"/>
<xsl:include href="../template/gloss-permalinks.xsl"/>
<xsl:param name="html.stylesheet" select="'user-manual-style.css'" />
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
<xsl:param name="appendix.autolabel">A</xsl:param>
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
</xsl:stylesheet>

View File

@@ -1,932 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="bitbake-user-manual-execution">
<title>Execution</title>
<para>
The primary purpose for running BitBake is to produce some kind
of output such as a single installable package, a kernel, a software
development kit, or even a full, board-specific bootable Linux image,
complete with bootloader, kernel, and root filesystem.
Of course, you can execute the <filename>bitbake</filename>
command with options that cause it to execute single tasks,
compile single recipe files, capture or clear data, or simply
return information about the execution environment.
</para>
<para>
This chapter describes BitBake's execution process from start
to finish when you use it to create an image.
The execution process is launched using the following command
form:
<literallayout class='monospaced'>
$ bitbake <replaceable>target</replaceable>
</literallayout>
For information on the BitBake command and its options,
see
"<link linkend='bitbake-user-manual-command'>The BitBake Command</link>"
section.
<note>
<para>
Prior to executing BitBake, you should take advantage of available
parallel thread execution on your build host by setting the
<link linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>
variable in your project's <filename>local.conf</filename>
configuration file.
</para>
<para>
A common method to determine this value for your build host is to run
the following:
<literallayout class='monospaced'>
$ grep processor /proc/cpuinfo
</literallayout>
This command returns the number of processors, which takes into
account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely
shows eight processors, which is the value you would then assign to
<filename>BB_NUMBER_THREADS</filename>.
</para>
<para>
A possibly simpler solution is that some Linux distributions
(e.g. Debian and Ubuntu) provide the <filename>ncpus</filename> command.
</para>
</note>
</para>
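<para>
If Python 3 is available on the build host, you can also read the same processor count programmatically; this is simply another way to arrive at a value for <filename>BB_NUMBER_THREADS</filename>:
<literallayout class='monospaced'>
     $ python3 -c "import os; print(os.cpu_count())"
</literallayout>
</para>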
<section id='parsing-the-base-configuration-metadata'>
<title>Parsing the Base Configuration Metadata</title>
<para>
The first thing BitBake does is parse base configuration
metadata.
Base configuration metadata consists of your project's
<filename>bblayers.conf</filename> file to determine what
layers BitBake needs to recognize, all necessary
<filename>layer.conf</filename> files (one from each layer),
and <filename>bitbake.conf</filename>.
The data itself is of various types:
<itemizedlist>
<listitem><para><emphasis>Recipes:</emphasis>
Details about particular pieces of software.
</para></listitem>
<listitem><para><emphasis>Class Data:</emphasis>
An abstraction of common build information
(e.g. how to build a Linux kernel).
</para></listitem>
<listitem><para><emphasis>Configuration Data:</emphasis>
Machine-specific settings, policy decisions,
and so forth.
Configuration data acts as the glue to bind everything
together.</para></listitem>
</itemizedlist>
</para>
<para>
The <filename>layer.conf</filename> files are used to
construct key variables such as
<link linkend='var-BBPATH'><filename>BBPATH</filename></link>
and
<link linkend='var-BBFILES'><filename>BBFILES</filename></link>.
<filename>BBPATH</filename> is used to search for
configuration and class files under the
<filename>conf</filename> and <filename>classes</filename>
directories, respectively.
<filename>BBFILES</filename> is used to locate both recipe
and recipe append files
(<filename>.bb</filename> and <filename>.bbappend</filename>).
If there is no <filename>bblayers.conf</filename> file,
it is assumed the user has set the <filename>BBPATH</filename>
and <filename>BBFILES</filename> directly in the environment.
</para>
<para>
Next, the <filename>bitbake.conf</filename> file is located
using the <filename>BBPATH</filename> variable that was
just constructed.
The <filename>bitbake.conf</filename> file may also include other
configuration files using the
<filename>include</filename> or
<filename>require</filename> directives.
</para>
<para>
Prior to parsing configuration files, BitBake looks
at certain variables, including:
<itemizedlist>
<listitem><para>
<link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ORIGENV'><filename>BB_ORIGENV</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BITBAKE_UI'><filename>BITBAKE_UI</filename></link>
</para></listitem>
</itemizedlist>
The first four variables in this list relate to how BitBake treats shell
environment variables during task execution.
By default, BitBake cleans the environment variables and provides tight
control over the shell execution environment.
However, through the use of these first four variables, you can
apply your control regarding the
environment variables allowed to be used by BitBake in the shell
during execution of tasks.
See the
"<link linkend='passing-information-into-the-build-task-environment'>Passing Information Into the Build Task Environment</link>"
section and the information about these variables in the
variable glossary for more information on how they work and
on how to use them.
</para>
<para>
The base configuration metadata is global
and therefore affects all recipes and tasks that are executed.
</para>
<para>
BitBake first searches the current working directory for an
optional <filename>conf/bblayers.conf</filename> configuration file.
This file is expected to contain a
<link linkend='var-BBLAYERS'><filename>BBLAYERS</filename></link>
variable that is a space-delimited list of 'layer' directories.
Recall that if BitBake cannot find a <filename>bblayers.conf</filename>
file, then it is assumed the user has set the <filename>BBPATH</filename>
and <filename>BBFILES</filename> variables directly in the environment.
</para>
<para>
For each directory (layer) in this list, a <filename>conf/layer.conf</filename>
file is located and parsed with the
<link linkend='var-LAYERDIR'><filename>LAYERDIR</filename></link>
variable being set to the directory where the layer was found.
The idea is these files automatically set up
<link linkend='var-BBPATH'><filename>BBPATH</filename></link>
and other variables correctly for a given build directory.
</para>
<para>
BitBake then expects to find the <filename>conf/bitbake.conf</filename>
file somewhere in the user-specified <filename>BBPATH</filename>.
That configuration file generally has include directives to pull
in any other metadata such as files specific to the architecture,
the machine, the local environment, and so forth.
</para>
<para>
Only variable definitions and include directives are allowed
in BitBake <filename>.conf</filename> files.
Some variables directly influence BitBake's behavior.
These variables might have been set from the environment
depending on the environment variables previously
mentioned or set in the configuration files.
The
"<link linkend='ref-variables-glos'>Variables Glossary</link>"
chapter presents a full list of variables.
</para>
<para>
After parsing configuration files, BitBake uses its rudimentary
inheritance mechanism, which is through class files, to inherit
some standard classes.
BitBake parses a class when the inherit directive responsible
for getting that class is encountered.
</para>
<para>
The <filename>base.bbclass</filename> file is always included.
Other classes that are specified in the configuration using the
<link linkend='var-INHERIT'><filename>INHERIT</filename></link>
variable are also included.
BitBake searches for class files in a
<filename>classes</filename> subdirectory under
the paths in <filename>BBPATH</filename> in the same way as
configuration files.
</para>
<para>
A good way to get an idea of the configuration files and
the class files used in your execution environment is to
run the following BitBake command:
<literallayout class='monospaced'>
$ bitbake -e > mybb.log
</literallayout>
Examining the top of the <filename>mybb.log</filename>
shows you the many configuration files and class files
used in your execution environment.
</para>
<note>
<para>
You need to be aware of how BitBake parses curly braces.
If a recipe uses a closing curly brace within the function and
the character has no leading spaces, BitBake produces a parsing
error.
If you use a pair of curly braces in a shell function, the
closing curly brace must not be located at the start of the line
without leading spaces.
</para>
<para>
Here is an example that causes BitBake to produce a parsing
error:
<literallayout class='monospaced'>
fakeroot create_shar() {
cat &lt;&lt; "EOF" &gt; ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
###### The following "}" at the start of the line causes a parsing error ######
}
EOF
}
</literallayout>
Writing the recipe this way avoids the error:
<literallayout class='monospaced'>
fakeroot create_shar() {
cat &lt;&lt; "EOF" &gt; ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
usage()
{
echo "test"
######The following "}" with a leading space at the start of the line avoids the error ######
}
EOF
}
</literallayout>
</para>
</note>
</section>
<section id='locating-and-parsing-recipes'>
<title>Locating and Parsing Recipes</title>
<para>
During the configuration phase, BitBake will have set
<link linkend='var-BBFILES'><filename>BBFILES</filename></link>.
BitBake now uses it to construct a list of recipes to parse,
along with any append files (<filename>.bbappend</filename>)
to apply.
<filename>BBFILES</filename> is a space-separated list of
available files and supports wildcards.
An example would be:
<literallayout class='monospaced'>
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
</literallayout>
BitBake parses each recipe and append file located
with <filename>BBFILES</filename> and stores the values of
various variables into the datastore.
<note>
Append files are applied in the order they are encountered in
<filename>BBFILES</filename>.
</note>
For each file, a fresh copy of the base configuration is
made, then the recipe is parsed line by line.
Any inherit statements cause BitBake to find and
then parse class files (<filename>.bbclass</filename>)
using
<link linkend='var-BBPATH'><filename>BBPATH</filename></link>
as the search path.
Finally, BitBake parses in order any append files found in
<filename>BBFILES</filename>.
</para>
<para>
One common convention is to use the recipe filename to define
pieces of metadata.
For example, in <filename>bitbake.conf</filename> the recipe
name and version are used to set the variables
<link linkend='var-PN'><filename>PN</filename></link> and
<link linkend='var-PV'><filename>PV</filename></link>:
<literallayout class='monospaced'>
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
</literallayout>
In this example, a recipe called "something_1.2.3.bb" would set
<filename>PN</filename> to "something" and
<filename>PV</filename> to "1.2.3".
</para>
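<para>
The effect of this naming convention can be sketched in a few lines of plain Python; this is only an illustration of the rule described above, not the actual <filename>bb.parse</filename> implementation:
<literallayout class='monospaced'>
     import os

     def pn_pv_from_filename(recipe_file):
         # "something_1.2.3.bb" becomes ("something", "1.2.3");
         # a missing version part falls back to the default "1.0".
         stem = os.path.basename(recipe_file)[:-len('.bb')]
         name, _, version = stem.partition('_')
         return name, version or '1.0'

     print(pn_pv_from_filename('something_1.2.3.bb'))
</literallayout>
</para>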
<para>
By the time parsing is complete for a recipe, BitBake
has a list of tasks that the recipe defines and a set of
data consisting of keys and values as well as
dependency information about the tasks.
</para>
<para>
BitBake does not need all of this information.
It only needs a small subset of the information to make
decisions about the recipe.
Consequently, BitBake caches the values in which it is
interested and does not store the rest of the information.
Experience has shown it is faster to re-parse the metadata than to
try and write it out to the disk and then reload it.
</para>
<para>
Where possible, subsequent BitBake commands reuse this cache of
recipe information.
The validity of this cache is determined by first computing a
checksum of the base configuration data (see
<link linkend='var-BB_HASHCONFIG_WHITELIST'><filename>BB_HASHCONFIG_WHITELIST</filename></link>)
and then checking if the checksum matches.
If that checksum matches what is in the cache and the recipe
and class files have not changed, BitBake is able to use
the cache.
BitBake then reloads the cached information about the recipe
instead of reparsing it from scratch.
</para>
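<para>
Conceptually, that validity test amounts to hashing the retained configuration values in a stable order and comparing the result against the hash stored with the cache. The following simplified sketch illustrates the idea only and is not BitBake's actual cache code:
<literallayout class='monospaced'>
     import hashlib

     def config_checksum(config_values):
         # Hash a dictionary of configuration variables in a stable (sorted) order.
         data = "".join("%s=%s\n" % (k, config_values[k]) for k in sorted(config_values))
         return hashlib.md5(data.encode('utf-8')).hexdigest()

     cached = config_checksum({'MACHINE': 'qemux86', 'DISTRO': 'poky'})
     current = config_checksum({'MACHINE': 'qemuarm', 'DISTRO': 'poky'})
     cache_is_valid = (cached == current)   # False here, so a re-parse is needed
</literallayout>
</para>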
<para>
Recipe file collections exist to allow the user to
have multiple repositories of
<filename>.bb</filename> files that contain the same
exact package.
For example, one could easily use them to make one's
own local copy of an upstream repository, but with
custom modifications that one does not want upstream.
Here is an example:
<literallayout class='monospaced'>
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"
</literallayout>
<note>
The layers mechanism is now the preferred method of collecting
code.
While the collections code remains, its main use is to set layer
priorities and to deal with overlap (conflicts) between layers.
</note>
</para>
</section>
<section id='bb-bitbake-providers'>
<title>Providers</title>
<para>
Assuming BitBake has been instructed to execute a target
and that all the recipe files have been parsed, BitBake
starts to figure out how to build the target.
BitBake looks through the <filename>PROVIDES</filename> list
for each of the recipes.
A <filename>PROVIDES</filename> list is the list of names by which
the recipe can be known.
Each recipe's <filename>PROVIDES</filename> list is created
implicitly through the recipe's
<link linkend='var-PN'><filename>PN</filename></link> variable
and explicitly through the recipe's
<link linkend='var-PROVIDES'><filename>PROVIDES</filename></link>
variable, which is optional.
</para>
<para>
When a recipe uses <filename>PROVIDES</filename>, that recipe's
functionality can be found under an alternative name or names other
than the implicit <filename>PN</filename> name.
As an example, suppose a recipe named <filename>keyboard_1.0.bb</filename>
contained the following:
<literallayout class='monospaced'>
PROVIDES += "fullkeyboard"
</literallayout>
The <filename>PROVIDES</filename> list for this recipe becomes
"keyboard", which is implicit, and "fullkeyboard", which is explicit.
Consequently, the functionality found in
<filename>keyboard_1.0.bb</filename> can be found under two
different names.
</para>
</section>
<section id='bb-bitbake-preferences'>
<title>Preferences</title>
<para>
The <filename>PROVIDES</filename> list is only part of the solution
for figuring out a target's recipes.
Because targets might have multiple providers, BitBake needs
to prioritize providers by determining provider preferences.
</para>
<para>
A common example in which a target has multiple providers
is "virtual/kernel", which is on the
<filename>PROVIDES</filename> list for each kernel recipe.
Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file:
<literallayout class='monospaced'>
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
</literallayout>
The default
<link linkend='var-PREFERRED_PROVIDER'><filename>PREFERRED_PROVIDER</filename></link>
is the provider with the same name as the target.
BitBake iterates through each target it needs to build and
resolves them and their dependencies using this process.
</para>
<para>
Understanding how providers are chosen is made complicated by the fact
that multiple versions might exist for a given provider.
BitBake defaults to the highest version of a provider.
Version comparisons are made using the same method as Debian.
You can use the
<link linkend='var-PREFERRED_VERSION'><filename>PREFERRED_VERSION</filename></link>
variable to specify a particular version.
You can influence the order by using the
<link linkend='var-DEFAULT_PREFERENCE'><filename>DEFAULT_PREFERENCE</filename></link>
variable.
</para>
<para>
By default, files have a preference of "0".
Setting <filename>DEFAULT_PREFERENCE</filename> to "-1" makes the
recipe unlikely to be used unless it is explicitly referenced.
Setting <filename>DEFAULT_PREFERENCE</filename> to "1" makes it
likely the recipe is used.
<filename>PREFERRED_VERSION</filename> overrides any
<filename>DEFAULT_PREFERENCE</filename> setting.
<filename>DEFAULT_PREFERENCE</filename> is often used to mark newer
and more experimental recipe versions until they have undergone
sufficient testing to be considered stable.
</para>
<para>
When there are multiple “versions” of a given recipe,
BitBake defaults to selecting the most recent
version, unless otherwise specified.
If the recipe in question has a
<link linkend='var-DEFAULT_PREFERENCE'><filename>DEFAULT_PREFERENCE</filename></link>
set lower than the other recipes (default is 0), then
it will not be selected.
This allows the person or persons maintaining
the repository of recipe files to specify
their preference for the default selected version.
Additionally, the user can specify their preferred version.
</para>
<para>
If the first recipe is named <filename>a_1.1.bb</filename>, then the
<link linkend='var-PN'><filename>PN</filename></link> variable
will be set to “a”, and the
<link linkend='var-PV'><filename>PV</filename></link>
variable will be set to 1.1.
</para>
<para>
Thus, if a recipe named <filename>a_1.2.bb</filename> exists, BitBake
will choose 1.2 by default.
However, if you define the following variable in a
<filename>.conf</filename> file that BitBake parses, you
can change that preference:
<literallayout class='monospaced'>
PREFERRED_VERSION_a = "1.1"
</literallayout>
</para>
<note>
<para>
It is common for a recipe to provide two versions -- a stable,
numbered (and preferred) version, and a version that is
automatically checked out from a source code repository that
is considered more "bleeding edge" but can be selected only
explicitly.
</para>
<para>
For example, in the OpenEmbedded codebase, there is a standard,
versioned recipe file for BusyBox,
<filename>busybox_1.22.1.bb</filename>,
but there is also a Git-based version,
<filename>busybox_git.bb</filename>, which explicitly contains the line
<literallayout class='monospaced'>
DEFAULT_PREFERENCE = "-1"
</literallayout>
to ensure that the numbered, stable version is always preferred
unless the developer selects otherwise.
</para>
</note>
</section>
<section id='bb-bitbake-dependencies'>
<title>Dependencies</title>
<para>
Each target BitBake builds consists of multiple tasks such as
<filename>fetch</filename>, <filename>unpack</filename>,
<filename>patch</filename>, <filename>configure</filename>,
and <filename>compile</filename>.
For best performance on multi-core systems, BitBake considers each
task as an independent
entity with its own set of dependencies.
</para>
<para>
Dependencies are defined through several variables.
You can find information about variables BitBake uses in
the <link linkend='ref-variables-glos'>Variables Glossary</link>
near the end of this manual.
At a basic level, it is sufficient to know that BitBake uses the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link> and
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link> variables when
calculating dependencies.
</para>
<para>
For more information on how BitBake handles dependencies, see the
"<link linkend='dependencies'>Dependencies</link>" section.
</para>
</section>
<section id='ref-bitbake-tasklist'>
<title>The Task List</title>
<para>
Based on the generated list of providers and the dependency information,
BitBake can now calculate exactly what tasks it needs to run and in what
order it needs to run them.
The
"<link linkend='executing-tasks'>Executing Tasks</link>" section has more
information on how BitBake chooses which task to execute next.
</para>
<para>
The build now starts with BitBake forking off threads up to the limit set in the
<link linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>
variable.
BitBake continues to fork threads as long as there are tasks ready to run,
those tasks have all their dependencies met, and the thread threshold has not been
exceeded.
</para>
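<para>
The scheduling behavior just described can be pictured as a loop that keeps a bounded pool of workers busy with tasks whose dependencies are already complete. The following is a conceptual sketch only, not BitBake's runqueue implementation:
<literallayout class='monospaced'>
     from concurrent.futures import ThreadPoolExecutor

     BB_NUMBER_THREADS = 4
     tasks = {'do_fetch': [], 'do_unpack': ['do_fetch'], 'do_compile': ['do_unpack']}
     done = set()

     def run_task(name):
         return name        # stand-in for executing the task's run script

     with ThreadPoolExecutor(max_workers=BB_NUMBER_THREADS) as pool:
         while len(done) != len(tasks):
             runnable = [t for t, deps in tasks.items()
                         if t not in done and all(d in done for d in deps)]
             done.update(pool.map(run_task, runnable))
</literallayout>
</para>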
<para>
It is worth noting that you can greatly speed up the build time by properly setting
the <filename>BB_NUMBER_THREADS</filename> variable.
</para>
<para>
As each task completes, a timestamp is written to the directory specified by the
<link linkend='var-STAMP'><filename>STAMP</filename></link> variable.
On subsequent runs, BitBake looks in the build directory within
<filename>tmp/stamps</filename> and does not rerun
tasks that are already completed unless a timestamp is found to be invalid.
Currently, invalid timestamps are only considered on a per
recipe file basis.
So, for example, if the configure stamp has a timestamp greater than the
compile timestamp for a given target, then the compile task would rerun.
Running the compile task again, however, has no effect on other providers
that depend on that target.
</para>
<para>
The exact format of the stamps is partly configurable.
In modern versions of BitBake, a hash is appended to the
stamp so that if the configuration changes, the stamp becomes
invalid and the task is automatically rerun.
This hash, or signature used, is governed by the signature policy
that is configured (see the
"<link linkend='checksums'>Checksums (Signatures)</link>"
section for information).
It is also possible to append extra metadata to the stamp using
the <filename>[stamp-extra-info]</filename> task flag.
For example, OpenEmbedded uses this flag to make some tasks machine-specific.
</para>
<note>
Some tasks are marked as "nostamp" tasks.
No timestamp file is created when these tasks are run.
Consequently, "nostamp" tasks are always rerun.
</note>
<para>
For more information on tasks, see the
"<link linkend='tasks'>Tasks</link>" section.
</para>
</section>
<section id='executing-tasks'>
<title>Executing Tasks</title>
<para>
Tasks can be either a shell task or a Python task.
For shell tasks, BitBake writes a shell script to
<filename>${</filename><link linkend='var-T'><filename>T</filename></link><filename>}/run.do_taskname.pid</filename>
and then executes the script.
The generated shell script contains all the exported variables,
and the shell functions with all variables expanded.
Output from the shell script goes to the file
<filename>${T}/log.do_taskname.pid</filename>.
Looking at the expanded shell functions in the run file and
the output in the log files is a useful debugging technique.
</para>
<para>
For Python tasks, BitBake executes the task internally and logs
information to the controlling terminal.
Future versions of BitBake will write the functions to files
similar to the way shell tasks are handled.
Logging will be handled in a way similar to shell tasks as well.
</para>
<para>
The order in which BitBake runs the tasks is controlled by its
task scheduler.
It is possible to configure the scheduler and define custom
implementations for specific use cases.
For more information, see these variables that control the
behavior:
<itemizedlist>
<listitem><para>
<link linkend='var-BB_SCHEDULER'><filename>BB_SCHEDULER</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_SCHEDULERS'><filename>BB_SCHEDULERS</filename></link>
</para></listitem>
</itemizedlist>
It is possible to have functions run before and after a task's main
function.
This is done using the <filename>[prefuncs]</filename>
and <filename>[postfuncs]</filename> flags of the task
that lists the functions to run.
</para>
</section>
<section id='checksums'>
<title>Checksums (Signatures)</title>
<para>
A checksum is a unique signature of a task's inputs.
The signature of a task can be used to determine if a task
needs to be run.
Because it is a change in a task's inputs that triggers running
the task, BitBake needs to detect all the inputs to a given task.
For shell tasks, this turns out to be fairly easy because
BitBake generates a "run" shell script for each task and
it is possible to create a checksum that gives you a good idea of when
the task's data changes.
</para>
<para>
To complicate the problem, some things should not be included in
the checksum.
First, there is the actual specific build path of a given task -
the working directory.
It does not matter if the working directory changes because it should not
affect the output for target packages.
The simplistic approach for excluding the working directory is to set
it to some fixed value and create the checksum for the "run" script.
BitBake goes one step better and uses the
<link linkend='var-BB_HASHBASE_WHITELIST'><filename>BB_HASHBASE_WHITELIST</filename></link>
variable to define a list of variables that should never be included
when generating the signatures.
</para>
<para>
Another problem results from the "run" scripts containing functions that
might or might not get called.
The incremental build solution contains code that figures out dependencies
between shell functions.
This code is used to prune the "run" scripts down to the minimum set,
thereby alleviating this problem and making the "run" scripts much more
readable as a bonus.
</para>
<para>
So far we have solutions for shell scripts.
What about Python tasks?
The same approach applies even though these tasks are more difficult.
The process needs to figure out what variables a Python function accesses
and what functions it calls.
Again, the incremental build solution contains code that first figures out
the variable and function dependencies, and then creates a checksum for the data
used as the input to the task.
</para>
<para>
Like the working directory case, situations exist where dependencies
should be ignored.
For these cases, you can instruct the build process to ignore a dependency
by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
</literallayout>
This example ensures that the <filename>PACKAGE_ARCHS</filename> variable does not
depend on the value of <filename>MACHINE</filename>, even if it does reference it.
</para>
<para>
Equally, there are cases where we need to add dependencies BitBake
is not able to find.
You can accomplish this by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardeps] = "MACHINE"
</literallayout>
This example explicitly adds the <filename>MACHINE</filename> variable as a
dependency for <filename>PACKAGE_ARCHS</filename>.
</para>
<para>
Consider a case with in-line Python, for example, where BitBake is not
able to figure out dependencies.
When running in debug mode (i.e. using <filename>-DDD</filename>), BitBake
produces output when it discovers something for which it cannot figure out
dependencies.
</para>
<para>
Thus far, this section has limited discussion to the direct inputs into a task.
Information based on direct inputs is referred to as the "basehash" in the
code.
However, there is still the question of a task's indirect inputs - the
things that were already built and present in the build directory.
The checksum (or signature) for a particular task needs to add the hashes
of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision.
However, the effect is to generate a master checksum that combines the basehash
and the hashes of the task's dependencies.
</para>
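<para>
In outline, generating that master checksum amounts to hashing the basehash together with the sorted hashes of the tasks it depends on. The following is a simplified sketch of the idea, not the signature generator code itself:
<literallayout class='monospaced'>
     import hashlib

     def master_checksum(basehash, dep_hashes):
         # Combine a task's basehash with the hashes of the tasks it depends on.
         data = basehash + "".join(sorted(dep_hashes))
         return hashlib.md5(data.encode('utf-8')).hexdigest()

     taskhash = master_checksum("basehash-of-this-task",
                                ["taskhash-of-dep-one", "taskhash-of-dep-two"])
</literallayout>
</para>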
<para>
At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced.
Within the BitBake configuration file, we can give BitBake some extra information
to help it construct the basehash.
The following statement effectively results in a list of global variable
dependency excludes - variables never included in any checksum.
This example uses variables from OpenEmbedded to help illustrate
the concept:
<literallayout class='monospaced'>
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX"
</literallayout>
The previous example excludes the work directory, which is part of
<filename>TMPDIR</filename>.
</para>
<para>
The rules for deciding which hashes of dependent tasks to include through
dependency chains are more complex and are generally accomplished with a
Python function.
The code in <filename>meta/lib/oe/sstatesig.py</filename> shows two examples
of this and also illustrates how you can insert your own policy into the system
if so desired.
This file defines the two basic signature generators OpenEmbedded Core
uses: "OEBasic" and "OEBasicHash".
By default, there is a dummy "noop" signature handler enabled in BitBake.
This means that behavior is unchanged from previous versions.
<filename>OE-Core</filename> uses the "OEBasicHash" signature handler by default
through this setting in the <filename>bitbake.conf</filename> file:
<literallayout class='monospaced'>
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
</literallayout>
The "OEBasicHash" <filename>BB_SIGNATURE_HANDLER</filename> is the same as the
"OEBasic" version but adds the task hash to the stamp files.
This results in any metadata change that changes the task hash, automatically
causing the task to be run again.
This removes the need to bump
<link linkend='var-PR'><filename>PR</filename></link>
values, and changes to metadata automatically ripple across the build.
</para>
<para>
It is also worth noting that the end result of these signature generators is to
make some dependency and hash information available to the build.
This information includes:
<itemizedlist>
<listitem><para><filename>BB_BASEHASH_task-</filename><replaceable>taskname</replaceable>:
The base hashes for each task in the recipe.
</para></listitem>
<listitem><para><filename>BB_BASEHASH_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
The base hashes for each dependent task.
</para></listitem>
<listitem><para><filename>BBHASHDEPS_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
The task dependencies for each task.
</para></listitem>
<listitem><para><filename>BB_TASKHASH</filename>:
The hash of the currently running task.
</para></listitem>
</itemizedlist>
</para>
<para>
It is worth noting that BitBake's "-S" option lets you
debug BitBake's processing of signatures.
The options passed to -S allow different debugging modes
to be used, either using BitBake's own debug functions
or possibly those defined in the metadata/signature handler
itself.
The simplest parameter to pass is "none", which causes a
set of signature information to be written out into
<filename>STAMPS_DIR</filename>
corresponding to the targets specified.
The other currently available parameter is "printdiff",
which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then
run <filename>bitbake-diffsigs</filename> over the matches
to determine the stamps and delta where these two
stamp trees diverge.
<note>
It is likely that future versions of BitBake will
provide other signature handlers triggered through
additional "-S" parameters.
</note>
</para>
<para>
You can find more information on checksum metadata in the
"<link linkend='task-checksums-and-setscene'>Task Checksums and Setscene</link>"
section.
</para>
</section>
<section id='setscene'>
<title>Setscene</title>
<para>
The setscene process enables BitBake to handle "pre-built" artifacts.
The ability to handle and reuse these artifacts allows BitBake
the luxury of not having to build something from scratch every time.
Instead, BitBake can use, when possible, existing build artifacts.
</para>
<para>
BitBake needs to have reliable data indicating whether or not an
artifact is compatible.
Signatures, described in the previous section, provide an ideal
way of representing whether an artifact is compatible.
If a signature is the same, an object can be reused.
</para>
<para>
If an object can be reused, the problem then becomes how to
replace a given task or set of tasks with the pre-built artifact.
BitBake solves the problem with the "setscene" process.
</para>
<para>
When BitBake is asked to build a given target, before building anything,
it first asks whether cached information is available for any of the
targets it's building, or any of the intermediate targets.
If cached information is available, BitBake uses this information instead of
running the main tasks.
</para>
<para>
BitBake first calls the function defined by the
<link linkend='var-BB_HASHCHECK_FUNCTION'><filename>BB_HASHCHECK_FUNCTION</filename></link>
variable with a list of tasks and corresponding
hashes it wants to build.
This function is designed to be fast and returns a list
of the tasks for which it believes it can obtain artifacts.
</para>
<para>
Next, for each of the tasks that were returned as possibilities,
BitBake executes a setscene version of the task that the possible
artifact covers.
Setscene versions of a task have the string "_setscene" appended to the
task name.
So, for example, the task with the name <filename>xxx</filename> has
a setscene task named <filename>xxx_setscene</filename>.
The setscene version of the task executes and provides the necessary
artifacts returning either success or failure.
</para>
<para>
As previously mentioned, an artifact can cover more than one task.
For example, it is pointless to obtain a compiler if you
already have the compiled binary.
To handle this, BitBake calls the
<link linkend='var-BB_SETSCENE_DEPVALID'><filename>BB_SETSCENE_DEPVALID</filename></link>
function for each successful setscene task to know whether or not it needs
to obtain the dependencies of that task.
</para>
<para>
Finally, after all the setscene tasks have executed, BitBake calls the
function listed in
<link linkend='var-BB_SETSCENE_VERIFY_FUNCTION2'><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename></link>
with the list of tasks that BitBake thinks have been "covered".
The metadata can then ensure that this list is correct and can
inform BitBake that it wants specific tasks to be run regardless
of the setscene result.
</para>
<para>
You can find more information on setscene metadata in the
"<link linkend='task-checksums-and-setscene'>Task Checksums and Setscene</link>"
section.
</para>
</section>
</chapter>

View File

@@ -1,821 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter>
<title>File Download Support</title>
<para>
BitBake's fetch module is a standalone piece of library code
that deals with the intricacies of downloading source code
and files from remote systems.
Fetching source code is one of the cornerstones of building software.
As such, this module forms an important part of BitBake.
</para>
<para>
The current fetch module is called "fetch2" and refers to the
fact that it is the second major version of the API.
The original version is obsolete and has been removed from the codebase.
Thus, in all cases, "fetch" refers to "fetch2" in this
manual.
</para>
<section id='the-download-fetch'>
<title>The Download (Fetch)</title>
<para>
BitBake takes several steps when fetching source code or files.
The fetcher codebase deals with two distinct processes in order:
obtaining the files from somewhere (cached or otherwise)
and then unpacking those files into a specific location and
perhaps in a specific way.
Getting and unpacking the files is often optionally followed
by patching.
Patching, however, is not covered by this module.
</para>
<para>
The code to execute the first part of this process, a fetch,
looks something like the following:
<literallayout class='monospaced'>
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.download()
</literallayout>
This code sets up an instance of the fetch class.
The instance uses a space-separated list of URLs from the
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
variable and then calls the <filename>download</filename>
method to download the files.
</para>
<para>
The instantiation of the fetch class is usually followed by:
<literallayout class='monospaced'>
rootdir = d.getVar('WORKDIR')
fetcher.unpack(rootdir)
</literallayout>
This code unpacks the downloaded files into the directory
specified by <filename>WORKDIR</filename>.
<note>
For convenience, the naming in these examples matches
the variables used by OpenEmbedded.
If you want to see the above code in action, examine
the OpenEmbedded class file <filename>base.bbclass</filename>.
</note>
The <filename>SRC_URI</filename> and <filename>WORKDIR</filename>
variables are not hardcoded into the fetcher, since those fetcher
methods can be (and are) called with different variable names.
In OpenEmbedded for example, the shared state (sstate) code uses
the fetch module to fetch the sstate files.
</para>
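<para>
Putting the two fragments together, a fetch-and-unpack step written against explicit variable names might look like the following. The variable names follow the OpenEmbedded convention used above, and the error handling shown is illustrative:
<literallayout class='monospaced'>
     src_uri = (d.getVar('SRC_URI') or "").split()
     try:
         fetcher = bb.fetch2.Fetch(src_uri, d)
         fetcher.download()
         fetcher.unpack(d.getVar('WORKDIR'))
     except bb.fetch2.BBFetchException as exc:
         bb.fatal(str(exc))
</literallayout>
</para>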
<para>
When the <filename>download()</filename> method is called,
BitBake tries to resolve the URLs by looking for source files
in a specific search order:
<itemizedlist>
<listitem><para><emphasis>Pre-mirror Sites:</emphasis>
BitBake first uses pre-mirrors to try and find source files.
These locations are defined using the
<link linkend='var-PREMIRRORS'><filename>PREMIRRORS</filename></link>
variable.
</para></listitem>
<listitem><para><emphasis>Source URI:</emphasis>
If pre-mirrors fail, BitBake uses the original URL (e.g from
<filename>SRC_URI</filename>).
</para></listitem>
<listitem><para><emphasis>Mirror Sites:</emphasis>
If fetch failures occur, BitBake next uses mirror locations as
defined by the
<link linkend='var-MIRRORS'><filename>MIRRORS</filename></link>
variable.
</para></listitem>
</itemizedlist>
</para>
<para>
For each URL passed to the fetcher, the fetcher
calls the submodule that handles that particular URL type.
This behavior can be the source of some confusion when you
are providing URLs for the <filename>SRC_URI</filename>
variable.
Consider the following two URLs:
<literallayout class='monospaced'>
http://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
</literallayout>
In the former case, the URL is passed to the
<filename>wget</filename> fetcher, which does not
understand "git".
Therefore, the latter case is the correct form since the
Git fetcher does know how to use HTTP as a transport.
</para>
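<para>
One way to see which submodule will claim a URL is to decode it with the fetcher's own <filename>decodeurl</filename> helper. The example below is illustrative; the returned tuple lists the scheme first, and it is the scheme, not the transport parameter, that selects the fetcher:
<literallayout class='monospaced'>
     import bb.fetch2

     scheme, host, path, user, passwd, params = bb.fetch2.decodeurl(
         "git://git.yoctoproject.org/git/poky;protocol=http")
     # scheme is "git", so the Git fetcher handles this URL and uses HTTP as its transport.
</literallayout>
</para>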
<para>
Here are some examples that show commonly used mirror
definitions:
<literallayout class='monospaced'>
PREMIRRORS ?= "\
bzr://.*/.* http://somemirror.org/sources/ \n \
cvs://.*/.* http://somemirror.org/sources/ \n \
git://.*/.* http://somemirror.org/sources/ \n \
hg://.*/.* http://somemirror.org/sources/ \n \
osc://.*/.* http://somemirror.org/sources/ \n \
p4://.*/.* http://somemirror.org/sources/ \n \
svn://.*/.* http://somemirror.org/sources/ \n"
MIRRORS =+ "\
ftp://.*/.* http://somemirror.org/sources/ \n \
http://.*/.* http://somemirror.org/sources/ \n \
https://.*/.* http://somemirror.org/sources/ \n"
</literallayout>
It is useful to note that BitBake supports
cross-URLs.
It is possible to mirror a Git repository on an HTTP
server as a tarball.
This is what the <filename>git://</filename> mapping in
the previous example does.
</para>
<para>
Since network accesses are slow, BitBake maintains a
cache of files downloaded from the network.
Any source files that are not local (i.e.
downloaded from the Internet) are placed into the download
directory, which is specified by the
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
variable.
</para>
<para>
File integrity is of key importance for reproducing builds.
For non-local archive downloads, the fetcher code can verify
SHA-256 and MD5 checksums to ensure the archives have been
downloaded correctly.
You can specify these checksums by using the
<filename>SRC_URI</filename> variable with the appropriate
varflags as follows:
<literallayout class='monospaced'>
SRC_URI[md5sum] = "<replaceable>value</replaceable>"
SRC_URI[sha256sum] = "<replaceable>value</replaceable>"
</literallayout>
You can also specify the checksums as parameters on the
<filename>SRC_URI</filename> as shown below:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
</literallayout>
If multiple URIs exist, you can specify the checksums either
directly as in the previous example, or you can name the URLs.
The following syntax shows how you name the URIs:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = "4a8e0f237e961fd7785d19d07fdb994d"
</literallayout>
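As an illustration, a recipe with two archives could name each
URI and then supply a checksum per name.
The URLs and checksum placeholders below are purely
illustrative:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo \
           http://example.com/baz.tar.bz2;name=baz"
SRC_URI[foo.sha256sum] = "<replaceable>value</replaceable>"
SRC_URI[baz.sha256sum] = "<replaceable>value</replaceable>"
</literallayout>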
After a file has been downloaded and has had its checksum checked,
a ".done" stamp is placed in <filename>DL_DIR</filename>.
BitBake uses this stamp during subsequent builds to avoid
downloading or comparing a checksum for the file again.
<note>
It is assumed that local storage is safe from data corruption.
If this were not the case, there would be bigger issues to worry about.
</note>
</para>
<para>
If
<link linkend='var-BB_STRICT_CHECKSUM'><filename>BB_STRICT_CHECKSUM</filename></link>
is set, any download without a checksum triggers an
error message.
The
<link linkend='var-BB_NO_NETWORK'><filename>BB_NO_NETWORK</filename></link>
variable can be used to make any attempted network access a fatal
error, which is useful for checking that mirrors are complete
as well as other things.
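As a minimal sketch, both behaviors can be enabled in a
configuration file by setting the variables to a non-empty
value such as "1":
<literallayout class='monospaced'>
BB_STRICT_CHECKSUM = "1"
BB_NO_NETWORK = "1"
</literallayout>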
</para>
</section>
<section id='bb-the-unpack'>
<title>The Unpack</title>
<para>
The unpack process usually immediately follows the download.
For all URLs except Git URLs, BitBake uses the common
<filename>unpack</filename> method.
</para>
<para>
A number of parameters exist that you can specify within the
URL to govern the behavior of the unpack stage:
<itemizedlist>
<listitem><para><emphasis>unpack:</emphasis>
Controls whether the URL components are unpacked.
If set to "1", which is the default, the components
are unpacked.
If set to "0", the unpack stage leaves the file alone.
This parameter is useful when you want an archive to be
copied in and not be unpacked.
</para></listitem>
<listitem><para><emphasis>dos:</emphasis>
Applies to <filename>.zip</filename> and
<filename>.jar</filename> files and specifies whether to
use DOS line ending conversion on text files.
</para></listitem>
<listitem><para><emphasis>basepath:</emphasis>
Instructs the unpack stage to strip the specified
directories from the source path when unpacking.
</para></listitem>
<listitem><para><emphasis>subdir:</emphasis>
Unpacks the specific URL to the specified subdirectory
within the root directory.
</para></listitem>
</itemizedlist>
The unpack call automatically decompresses and extracts files
with ".Z", ".z", ".gz", ".xz", ".zip", ".jar", ".ipk", ".rpm".
".srpm", ".deb" and ".bz2" extensions as well as various combinations
of tarball extensions.
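As an illustration, the first hypothetical URL below copies an
archive into the work directory without unpacking it, while the
second unpacks its contents into a subdirectory (the host,
file, and directory names are placeholders):
<literallayout class='monospaced'>
SRC_URI = "http://example.com/prebuilt-blob.tar.gz;unpack=0"
SRC_URI = "http://example.com/sources.tar.gz;subdir=sources"
</literallayout>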
</para>
<para>
As mentioned, the Git fetcher has its own unpack method that
is optimized to work with Git trees.
Basically, this method works by cloning the tree into the final
directory.
The process is completed using references so that there is
only one central copy of the Git metadata needed.
</para>
</section>
<section id='bb-fetchers'>
<title>Fetchers</title>
<para>
As mentioned earlier, the URL prefix determines which
fetcher submodule BitBake uses.
Each submodule can support different URL parameters,
which are described in the following sections.
</para>
<section id='local-file-fetcher'>
<title>Local file fetcher (<filename>file://</filename>)</title>
<para>
This submodule handles URLs that begin with
<filename>file://</filename>.
The filename you specify within the URL can be
either an absolute or relative path to a file.
If the filename is relative, the contents of the
<link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>
variable are used in the same way
<filename>PATH</filename> is used to find executables.
If the file cannot be found, it is assumed that it is available in
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
by the time the <filename>download()</filename> method is called.
</para>
<para>
If you specify a directory, the entire directory is
unpacked.
</para>
<para>
Here are a couple of example URLs, the first relative and
the second absolute:
<literallayout class='monospaced'>
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
</literallayout>
</para>
</section>
<section id='http-ftp-fetcher'>
<title>HTTP/FTP wget fetcher (<filename>http://</filename>, <filename>ftp://</filename>, <filename>https://</filename>)</title>
<para>
This fetcher obtains files from web and FTP servers.
Internally, the fetcher uses the wget utility.
</para>
<para>
The executable and parameters used are specified by the
<filename>FETCHCMD_wget</filename> variable, which defaults
to sensible values.
The fetcher supports a parameter "downloadfilename" that
allows the name of the downloaded file to be specified.
Specifying the name of the downloaded file is useful
for avoiding collisions in
<link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
when dealing with multiple files that have the same name.
</para>
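<para>
For example, two recipes that each download a generically named
"v1.0.tar.gz" could use "downloadfilename" to store the files
under distinct names.
The URL below is purely illustrative:
<literallayout class='monospaced'>
SRC_URI = "http://example.com/projectA/v1.0.tar.gz;downloadfilename=projectA-1.0.tar.gz"
</literallayout>
</para>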
<para>
Some example URLs are as follows:
<literallayout class='monospaced'>
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
</literallayout>
</para>
<note>
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
</literallayout>
Such URLs should be modified by replacing semi-colons with '&amp;' characters:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47"
</literallayout>
In most cases this should work. Treating semi-colons and '&amp;' in queries
identically is recommended by the World Wide Web Consortium (W3C).
Note that due to the nature of the URL, you may have to specify the name
of the downloaded file as well:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&amp;a=snapshot&amp;h=a5dd47;downloadfilename=myfile.bz2"
</literallayout>
</note>
</section>
<section id='cvs-fetcher'>
<title>CVS fetcher (<filename>cvs://</filename>)</title>
<para>
This submodule handles checking out files from the
CVS version control system.
You can configure it using a number of different variables:
<itemizedlist>
<listitem><para><emphasis><filename>FETCHCMD_cvs</filename>:</emphasis>
The name of the executable to use when running
the <filename>cvs</filename> command.
This name is usually "cvs".
</para></listitem>
<listitem><para><emphasis><filename>SRCDATE</filename>:</emphasis>
The date to use when fetching the CVS source code.
A special value of "now" causes the checkout to
be updated on every build.
</para></listitem>
<listitem><para><emphasis><link linkend='var-CVSDIR'><filename>CVSDIR</filename></link>:</emphasis>
Specifies where a temporary checkout is saved.
The location is often <filename>DL_DIR/cvs</filename>.
</para></listitem>
<listitem><para><emphasis><filename>CVS_PROXY_HOST</filename>:</emphasis>
The name to use as a "proxy=" parameter to the
<filename>cvs</filename> command.
</para></listitem>
<listitem><para><emphasis><filename>CVS_PROXY_PORT</filename>:</emphasis>
The port number to use as a "proxyport=" parameter to
the <filename>cvs</filename> command.
</para></listitem>
</itemizedlist>
As well as the standard username and password URL syntax,
you can also configure the fetcher with various URL parameters:
</para>
<para>
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"method":</emphasis>
The protocol over which to communicate with the CVS
server.
By default, this protocol is "pserver".
If "method" is set to "ext", BitBake examines the
"rsh" parameter and sets <filename>CVS_RSH</filename>.
You can use "dir" for local directories.
</para></listitem>
<listitem><para><emphasis>"module":</emphasis>
Specifies the module to check out.
You must supply this parameter.
</para></listitem>
<listitem><para><emphasis>"tag":</emphasis>
Describes which CVS TAG should be used for
the checkout.
By default, the TAG is empty.
</para></listitem>
<listitem><para><emphasis>"date":</emphasis>
Specifies a date.
If no "date" is specified, the
<link linkend='var-SRCDATE'><filename>SRCDATE</filename></link>
of the configuration is used to checkout a specific date.
The special value of "now" causes the checkout to be
updated on every build.
</para></listitem>
<listitem><para><emphasis>"localdir":</emphasis>
Used to rename the module.
Effectively, you are renaming the output directory
to which the module is unpacked.
You are forcing the module into a special
directory relative to
<link linkend='var-CVSDIR'><filename>CVSDIR</filename></link>.
</para></listitem>
<listitem><para><emphasis>"rsh"</emphasis>
Used in conjunction with the "method" parameter.
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the CVS metadata to be maintained in the tarball
the fetcher creates when set to "keep".
The tarball is expanded into the work directory.
By default, the CVS metadata is removed.
</para></listitem>
<listitem><para><emphasis>"fullpath":</emphasis>
Controls whether the resulting checkout is at the
module level, which is the default, or is at deeper
paths.
</para></listitem>
<listitem><para><emphasis>"norecurse":</emphasis>
Causes the fetcher to only check out the specified
directory and not recurse into any subdirectories.
</para></listitem>
<listitem><para><emphasis>"port":</emphasis>
The port to which the CVS server connects.
</para></listitem>
</itemizedlist>
Some example URLs are as follows:
<literallayout class='monospaced'>
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</literallayout>
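The configuration variables listed at the beginning of this
section are normally set in a configuration file rather than
in the URL itself.
The following sketch is purely illustrative; the proxy host
and port are hypothetical values:
<literallayout class='monospaced'>
CVSDIR = "${DL_DIR}/cvs"
SRCDATE = "now"
CVS_PROXY_HOST = "proxy.example.com"
CVS_PROXY_PORT = "3128"
</literallayout>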
</para>
</section>
<section id='svn-fetcher'>
<title>Subversion (SVN) Fetcher (<filename>svn://</filename>)</title>
<para>
This fetcher submodule fetches code from the
Subversion source control system.
The executable used is specified by
<filename>FETCHCMD_svn</filename>, which defaults
to "svn".
The fetcher's temporary working directory is set by
<link linkend='var-SVNDIR'><filename>SVNDIR</filename></link>,
which is usually <filename>DL_DIR/svn</filename>.
</para>
<para>
The supported parameters are as follows:
<itemizedlist>
<listitem><para><emphasis>"module":</emphasis>
The name of the svn module to checkout.
You must provide this parameter.
You can think of this parameter as the top-level
directory of the repository data you want.
</para></listitem>
<listitem><para><emphasis>"path_spec":</emphasis>
A specific directory in which to checkout the
specified svn module.
</para></listitem>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol to use, which defaults to "svn".
If "protocol" is set to "svn+ssh", the "ssh"
parameter is also used.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision of the source code to checkout.
</para></listitem>
<listitem><para><emphasis>"scmdata":</emphasis>
Causes the ".svn" directories to be available during
compile-time when set to "keep".
By default, these directories are removed.
</para></listitem>
<listitem><para><emphasis>"ssh":</emphasis>
An optional parameter used when "protocol" is set
to "svn+ssh".
You can use this parameter to specify the ssh
program used by svn.
</para></listitem>
<listitem><para><emphasis>"transportuser":</emphasis>
When required, sets the username for the transport.
By default, this parameter is empty.
The transport username is different from the username
used in the main URL, which is passed to the subversion
command.
</para></listitem>
</itemizedlist>
Following are three examples using svn:
<literallayout class='monospaced'>
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
SRC_URI = "svn://myrepos/proj1;module=trunk;protocol=http;path_spec=${MY_DIR}/proj1"
</literallayout>
</para>
</section>
<section id='git-fetcher'>
<title>Git Fetcher (<filename>git://</filename>)</title>
<para>
This fetcher submodule fetches code from the Git
source control system.
The fetcher works by creating a bare clone of the
remote into
<link linkend='var-GITDIR'><filename>GITDIR</filename></link>,
which is usually <filename>DL_DIR/git2</filename>.
This bare clone is then cloned into the work directory during the
unpack stage when a specific tree is checked out.
This is done using alternates and by reference to
minimize the amount of duplicate data on the disk and
make the unpack process fast.
The executable used can be set with
<filename>FETCHCMD_git</filename>.
</para>
<para>
This fetcher supports the following parameters:
<itemizedlist>
<listitem><para><emphasis>"protocol":</emphasis>
The protocol used to fetch the files.
The default is "git" when a hostname is set.
If a hostname is not set, the Git protocol is "file".
You can also use "http", "https", "ssh" and "rsync".
</para></listitem>
<listitem><para><emphasis>"nocheckout":</emphasis>
Tells the fetcher to not checkout source code when
unpacking when set to "1".
Set this option for the URL where there is a custom
routine to checkout code.
The default is "0".
</para></listitem>
<listitem><para><emphasis>"rebaseable":</emphasis>
Indicates that the upstream Git repository can be rebased.
You should set this parameter to "1" if
revisions can become detached from branches.
In this case, a source mirror tarball is created per
revision, which results in a loss of efficiency.
Rebasing the upstream Git repository could cause the
current revision to disappear from the upstream repository.
This option reminds the fetcher to preserve the local cache
carefully for future use.
The default value for this parameter is "0".
</para></listitem>
<listitem><para><emphasis>"nobranch":</emphasis>
Tells the fetcher to not check the SHA validation
for the branch when set to "1".
The default is "0".
Set this option for recipes that refer to a commit
that is valid for a tag rather than for a branch.
</para></listitem>
<listitem><para><emphasis>"bareclone":</emphasis>
Tells the fetcher to create a bare clone in the
destination directory without checking out a working tree.
Only the raw Git metadata is provided.
This parameter implies the "nocheckout" parameter as well.
</para></listitem>
<listitem><para><emphasis>"branch":</emphasis>
The branch(es) of the Git tree to clone.
If unset, this is assumed to be "master".
The number of branch parameters must match the number of
name parameters.
</para></listitem>
<listitem><para><emphasis>"rev":</emphasis>
The revision to use for the checkout.
The default is "master".
</para></listitem>
<listitem><para><emphasis>"tag":</emphasis>
Specifies a tag to use for the checkout.
To correctly resolve tags, BitBake must access the
network.
For that reason, tags are often not used.
As far as Git is concerned, the "tag" parameter behaves
effectively the same as the "rev" parameter.
</para></listitem>
<listitem><para><emphasis>"subpath":</emphasis>
Limits the checkout to a specific subpath of the tree.
By default, the whole tree is checked out.
</para></listitem>
<listitem><para><emphasis>"destsuffix":</emphasis>
The name of the path in which to place the checkout.
By default, the path is <filename>git/</filename>.
</para></listitem>
</itemizedlist>
Here are some example URLs:
<literallayout class='monospaced'>
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
</literallayout>
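As a further illustration, a recipe that fetches over HTTPS
from a specific branch might look like the following (the
repository URL is a placeholder):
<literallayout class='monospaced'>
SRC_URI = "git://git.example.org/myproject.git;protocol=https;branch=stable"
</literallayout>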
</para>
</section>
<section id='gitsm-fetcher'>
<title>Git Submodule Fetcher (<filename>gitsm://</filename>)</title>
<para>
This fetcher submodule inherits from the
<link linkend='git-fetcher'>Git fetcher</link> and extends
that fetcher's behavior by fetching a repository's submodules.
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
is passed to the Git fetcher as described in the
"<link linkend='git-fetcher'>Git Fetcher (<filename>git://</filename>)</link>"
section.
<note>
<title>Notes and Warnings</title>
<para>
You must clean a recipe when switching between
'<filename>git://</filename>' and
'<filename>gitsm://</filename>' URLs.
</para>
<para>
The Git Submodules fetcher is not a complete fetcher
implementation.
The fetcher has known issues where it does not use the
normal source mirroring infrastructure properly.
</para>
</note>
</para>
</section>
<section id='clearcase-fetcher'>
<title>ClearCase Fetcher (<filename>ccrc://</filename>)</title>
<para>
This fetcher submodule fetches code from a
<ulink url='http://en.wikipedia.org/wiki/Rational_ClearCase'>ClearCase</ulink>
repository.
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-PV'><filename>PV</filename></link> settings.
Here is an example:
<literallayout class='monospaced'>
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
</literallayout>
The fetcher uses the <filename>rcleartool</filename> or
<filename>cleartool</filename> remote client, depending on
which one is available.
</para>
<para>
Following are options for the <filename>SRC_URI</filename>
statement:
<itemizedlist>
<listitem><para><emphasis><filename>vob</filename></emphasis>:
The name, which must include the
prepending "/" character, of the ClearCase VOB.
This option is required.
</para></listitem>
<listitem><para><emphasis><filename>module</filename></emphasis>:
The module, which must include the
prepending "/" character, in the selected VOB.
<note>
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the <filename>load</filename> rule in
the view config spec.
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
<filename>SRC_URI</filename> statement at the start of this section.
Combining those values results in the following:
<literallayout class='monospaced'>
load /example_vob/example_module
</literallayout>
</note>
</para></listitem>
<listitem><para><emphasis><filename>proto</filename></emphasis>:
The protocol, which can be either <filename>http</filename> or
<filename>https</filename>.
</para></listitem>
</itemizedlist>
</para>
<para>
By default, the fetcher creates a configuration specification.
If you want this specification written to an area other than the default,
use the <filename>CCASE_CUSTOM_CONFIG_SPEC</filename> variable
in your recipe to define where the specification is written.
<note>
The <filename>SRCREV</filename> variable loses its functionality if you
specify this variable.
However, <filename>SRCREV</filename> is still used to label the
archive after a fetch even though it does not define what is
fetched.
</note>
</para>
<para>
Here are a couple of other behaviors worth mentioning:
<itemizedlist>
<listitem><para>
When using <filename>cleartool</filename>, the login of
<filename>cleartool</filename> is handled by the system.
The login requires no special steps.
</para></listitem>
<listitem><para>
In order to use <filename>rcleartool</filename> with authenticated
users, an "rcleartool login" is necessary before using the fetcher.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id='perforce-fetcher'>
<title>Perforce Fetcher (<filename>p4://</filename>)</title>
<para>
This fetcher submodule fetches code from the
<ulink url='https://www.perforce.com/'>Perforce</ulink>
source control system.
The executable used is specified by
<filename>FETCHCMD_p4</filename>, which defaults
to "p4".
The fetcher's temporary working directory is set by
<link linkend='var-P4DIR'><filename>P4DIR</filename></link>,
which defaults to "DL_DIR/p4".
</para>
<para>
To use this fetcher, make sure your recipe has proper
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>,
<link linkend='var-SRCREV'><filename>SRCREV</filename></link>, and
<link linkend='var-PV'><filename>PV</filename></link> values.
The p4 executable is able to use the config file defined by your
system's <filename>P4CONFIG</filename> environment variable in
order to define the Perforce server URL and port, username, and
password if you do not wish to keep those values in a recipe
itself.
If you choose not to use <filename>P4CONFIG</filename>,
or to explicitly set variables that <filename>P4CONFIG</filename>
can contain, you can specify the <filename>P4PORT</filename> value,
which is the server's URL and port number, and you can
specify a username and password directly in your recipe within
<filename>SRC_URI</filename>.
</para>
<para>
Here is an example that relies on <filename>P4CONFIG</filename>
to specify the server URL and port, username, and password, and
fetches the Head Revision:
<literallayout class='monospaced'>
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
</para>
<para>
Here is an example that specifies the server URL and port,
username, and password, and fetches a Revision based on a Label:
<literallayout class='monospaced'>
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
SRCREV = "release-1.0"
PV = "p4-${SRCPV}"
S = "${WORKDIR}/p4"
</literallayout>
<note>
You should always set <filename>S</filename>
to <filename>"${WORKDIR}/p4"</filename> in your recipe.
</note>
</para>
</section>
<section id='other-fetchers'>
<title>Other Fetchers</title>
<para>
Fetch submodules also exist for the following:
<itemizedlist>
<listitem><para>
Bazaar (<filename>bzr://</filename>)
</para></listitem>
<listitem><para>
Trees using Git Annex (<filename>gitannex://</filename>)
</para></listitem>
<listitem><para>
Secure FTP (<filename>sftp://</filename>)
</para></listitem>
<listitem><para>
Secure Shell (<filename>ssh://</filename>)
</para></listitem>
<listitem><para>
Repo (<filename>repo://</filename>)
</para></listitem>
<listitem><para>
OSC (<filename>osc://</filename>)
</para></listitem>
<listitem><para>
Mercurial (<filename>hg://</filename>)
</para></listitem>
</itemizedlist>
No documentation currently exists for these lesser-used
fetcher submodules.
However, you might find the code helpful and readable.
</para>
</section>
</section>
<section id='auto-revisions'>
<title>Auto Revisions</title>
<para>
We need to document <filename>AUTOREV</filename> and
<filename>SRCREV_FORMAT</filename> here.
</para>
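<para>
In the meantime, here is a brief, hedged sketch of how these
variables are commonly used.
Setting
<link linkend='var-SRCREV'><filename>SRCREV</filename></link>
to <filename>${AUTOREV}</filename> asks the fetcher to use the
latest available upstream revision, and
<filename>SRCREV_FORMAT</filename> describes how multiple named
source revisions are combined into a single version string.
The fragment below is illustrative only and assumes a recipe
with two named Git sources, "machine" and "meta":
<literallayout class='monospaced'>
SRCREV_machine = "${AUTOREV}"
SRCREV_meta = "${AUTOREV}"
SRCREV_FORMAT = "machine_meta"
</literallayout>
</para>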
</section>
</chapter>

<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='hello-world-example'>
<title>Hello World Example</title>
<section id='bitbake-hello-world'>
<title>BitBake Hello World</title>
<para>
The simplest example commonly used to demonstrate any new
programming language or tool is the
"<ulink url="http://en.wikipedia.org/wiki/Hello_world_program">Hello World</ulink>"
example.
This appendix demonstrates, in tutorial form, Hello
World within the context of BitBake.
The tutorial describes how to create a new project
and the applicable metadata files necessary to allow
BitBake to build it.
</para>
</section>
<section id='example-obtaining-bitbake'>
<title>Obtaining BitBake</title>
<para>
See the
"<link linkend='obtaining-bitbake'>Obtaining BitBake</link>"
section for information on how to obtain BitBake.
Once you have the source code on your machine, the BitBake directory
appears as follows:
<literallayout class='monospaced'>
$ ls -al
total 100
drwxrwxr-x. 9 wmat wmat 4096 Jan 31 13:44 .
drwxrwxr-x. 3 wmat wmat 4096 Feb 4 10:45 ..
-rw-rw-r--. 1 wmat wmat 365 Nov 26 04:55 AUTHORS
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 bin
drwxrwxr-x. 4 wmat wmat 4096 Jan 31 13:44 build
-rw-rw-r--. 1 wmat wmat 16501 Nov 26 04:55 ChangeLog
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 classes
drwxrwxr-x. 2 wmat wmat 4096 Nov 26 04:55 conf
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 contrib
-rw-rw-r--. 1 wmat wmat 17987 Nov 26 04:55 COPYING
drwxrwxr-x. 3 wmat wmat 4096 Nov 26 04:55 doc
-rw-rw-r--. 1 wmat wmat 69 Nov 26 04:55 .gitignore
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
</literallayout>
</para>
<para>
At this point, you should have BitBake cloned to
a directory that matches the previous listing except for
dates and user names.
</para>
</section>
<section id='setting-up-the-bitbake-environment'>
<title>Setting Up the BitBake Environment</title>
<para>
First, you need to be sure that you can run BitBake.
Set your working directory to where your local BitBake
files are and run the following command:
<literallayout class='monospaced'>
$ ./bin/bitbake --version
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
</literallayout>
The console output tells you what version you are running.
</para>
<para>
The recommended method to run BitBake is from a directory of your
choice.
To be able to run BitBake from any directory, you need to add the
directory containing the executable binary to your shell's environment
<filename>PATH</filename> variable.
First, look at your current <filename>PATH</filename> variable
by entering the following:
<literallayout class='monospaced'>
$ echo $PATH
</literallayout>
Next, add the directory location for the BitBake binary to the
<filename>PATH</filename>.
Here is an example that adds the
<filename>/home/scott-lenovo/bitbake/bin</filename> directory
to the front of the <filename>PATH</filename> variable:
<literallayout class='monospaced'>
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
</literallayout>
You should now be able to enter the <filename>bitbake</filename>
command from the command line while working from any directory.
</para>
</section>
<section id='the-hello-world-example'>
<title>The Hello World Example</title>
<para>
The overall goal of this exercise is to build a
complete "Hello World" example utilizing task and layer
concepts.
Because this is how modern projects such as OpenEmbedded and
the Yocto Project utilize BitBake, the example
provides an excellent starting point for understanding
BitBake.
</para>
<para>
To help you understand how to use BitBake to build targets,
the example starts with nothing but the <filename>bitbake</filename>
command, which causes BitBake to fail and report problems.
The example progresses by adding pieces to the build to
eventually conclude with a working, minimal "Hello World"
example.
</para>
<para>
While every attempt is made to explain what is happening during
the example, the descriptions cannot cover everything.
You can find further information throughout this manual.
Also, you can actively participate in the
<ulink url='http://lists.openembedded.org/mailman/listinfo/bitbake-devel'></ulink>
discussion mailing list about the BitBake build tool.
</para>
<note>
This example was inspired by and drew heavily from these sources:
<itemizedlist>
<listitem><para>
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
</para></listitem>
<listitem><para>
<ulink url="http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
</para></listitem>
</itemizedlist>
</note>
<para>
As stated earlier, the goal of this example
is to eventually compile "Hello World".
However, it is unknown what BitBake needs and what you have
to provide in order to achieve that goal.
Recall that BitBake utilizes three types of metadata files:
<link linkend='configuration-files'>Configuration Files</link>,
<link linkend='classes'>Classes</link>, and
<link linkend='recipes'>Recipes</link>.
But where do they go?
How does BitBake find them?
BitBake's error messaging helps you answer these types of questions
and helps you better understand exactly what is going on.
</para>
<para>
Following is the complete "Hello World" example.
</para>
<orderedlist>
<listitem><para><emphasis>Create a Project Directory:</emphasis>
First, set up a directory for the "Hello World" project.
Here is how you can do so in your home directory:
<literallayout class='monospaced'>
$ mkdir ~/hello
$ cd ~/hello
</literallayout>
This is the directory that BitBake will use to do all of
its work.
You can use this directory to keep all the metafiles needed
by BitBake.
Having a project directory is a good way to isolate your
project.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
At this point, you have nothing but a project directory.
Run the <filename>bitbake</filename> command and see what
it does:
<literallayout class='monospaced'>
$ bitbake
The BBPATH variable is not set and bitbake did not
find a conf/bblayers.conf file in the expected location.
Maybe you accidentally invoked bitbake from the wrong directory?
DEBUG: Removed the following variables from the environment:
GNOME_DESKTOP_SESSION_ID, XDG_CURRENT_DESKTOP,
GNOME_KEYRING_CONTROL, DISPLAY, SSH_AGENT_PID, LANG, no_proxy,
XDG_SESSION_PATH, XAUTHORITY, SESSION_MANAGER, SHLVL,
MANDATORY_PATH, COMPIZ_CONFIG_PROFILE, WINDOWID, EDITOR,
GPG_AGENT_INFO, SSH_AUTH_SOCK, GDMSESSION, GNOME_KEYRING_PID,
XDG_SEAT_PATH, XDG_CONFIG_DIRS, LESSOPEN, DBUS_SESSION_BUS_ADDRESS,
_, XDG_SESSION_COOKIE, DESKTOP_SESSION, LESSCLOSE, DEFAULTS_PATH,
UBUNTU_MENUPROXY, OLDPWD, XDG_DATA_DIRS, COLORTERM, LS_COLORS
</literallayout>
The majority of this output is specific to environment variables
that are not directly relevant to BitBake.
However, the very first message regarding the
<filename>BBPATH</filename> variable and the
<filename>conf/bblayers.conf</filename> file
is relevant.</para>
<para>
When you run BitBake, it begins looking for metadata files.
The
<link linkend='var-BBPATH'><filename>BBPATH</filename></link>
variable is what tells BitBake where to look for those files.
<filename>BBPATH</filename> is not set and you need to set it.
Without <filename>BBPATH</filename>, BitBake cannot
find any configuration files (<filename>.conf</filename>)
or recipe files (<filename>.bb</filename>) at all.
BitBake also cannot find the <filename>bitbake.conf</filename>
file.
</para></listitem>
<listitem><para><emphasis>Setting <filename>BBPATH</filename>:</emphasis>
For this example, you can set <filename>BBPATH</filename>
in the same manner that you set <filename>PATH</filename>
earlier in the appendix.
You should realize, though, that it is much more flexible to set the
<filename>BBPATH</filename> variable up in a configuration
file for each project.</para>
<para>From your shell, enter the following commands to set and
export the <filename>BBPATH</filename> variable:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>projectdirectory</replaceable>"
$ export BBPATH
</literallayout>
Use your actual project directory in the command.
BitBake uses that directory to find the metadata it needs for
your project.
<note>
When specifying your project directory, do not use the
tilde ("~") character as BitBake does not expand that character
as the shell would.
</note>
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
Now that you have <filename>BBPATH</filename> defined, run
the <filename>bitbake</filename> command again:
<literallayout class='monospaced'>
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 173, in parse_config_file
return bb.parse.handle(fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 99, in handle
return h['handle'](fn, data, include)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 120, in handle
abs_fn = resolve_file(fn, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/__init__.py", line 117, in resolve_file
raise IOError("file %s not found in %s" % (fn, bbpath))
IOError: file conf/bitbake.conf not found in /home/scott-lenovo/hello
ERROR: Unable to parse conf/bitbake.conf: file conf/bitbake.conf not found in /home/scott-lenovo/hello
</literallayout>
This sample output shows that BitBake could not find the
<filename>conf/bitbake.conf</filename> file in the project
directory.
This file is the first thing BitBake must find in order
to build a target.
And, since the project directory for this example is
empty, you need to provide a <filename>conf/bitbake.conf</filename>
file.
</para></listitem>
<listitem><para><emphasis>Creating <filename>conf/bitbake.conf</filename>:</emphasis>
The <filename>conf/bitbake.conf</filename> file includes a number of
configuration variables BitBake uses for metadata and recipe
files.
For this example, you need to create the file in your project directory
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename> file,
see
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>.
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
<literallayout class='monospaced'>
$ mkdir conf
</literallayout>
From within the <filename>conf</filename> directory, use
some editor to create the <filename>bitbake.conf</filename>
so that it contains the following:
<literallayout class='monospaced'>
TMPDIR = "${<link linkend='var-TOPDIR'>TOPDIR</link>}/tmp"
<link linkend='var-CACHE'>CACHE</link> = "${TMPDIR}/cache"
<link linkend='var-STAMP'>STAMP</link> = "${TMPDIR}/stamps"
<link linkend='var-T'>T</link> = "${TMPDIR}/work"
<link linkend='var-B'>B</link> = "${TMPDIR}"
</literallayout>
The <filename>TMPDIR</filename> variable establishes a directory
that BitBake uses for build output and intermediate files (other
than the cached information used by the
<link linkend='setscene'>Setscene</link> process).
Here, the <filename>TMPDIR</filename> directory is set to
<filename>hello/tmp</filename>.
<note><title>Tip</title>
You can always safely delete the <filename>tmp</filename>
directory in order to rebuild a BitBake target.
The build process creates the directory for you
when you run BitBake.
</note></para>
<para>For information about each of the other variables defined in this
example, click on the links to take you to the definitions in
the glossary.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
After making sure that the <filename>conf/bitbake.conf</filename>
file exists, you can run the <filename>bitbake</filename>
command again:
<literallayout class='monospaced'>
$ bitbake
ERROR: Traceback (most recent call last):
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 163, in wrapped
return func(fn, *args)
File "/home/scott-lenovo/bitbake/lib/bb/cookerdata.py", line 177, in _inherit
bb.parse.BBHandler.inherit(bbclass, "configuration INHERITs", 0, data)
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/BBHandler.py", line 92, in inherit
include(fn, file, lineno, d, "inherit")
File "/home/scott-lenovo/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 100, in include
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
ParseError: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inherit file classes/base.bbclass
</literallayout>
In the sample output, BitBake could not find the
<filename>classes/base.bbclass</filename> file.
You need to create that file next.
</para></listitem>
<listitem><para><emphasis>Creating <filename>classes/base.bbclass</filename>:</emphasis>
BitBake uses class files to provide common code and functionality.
The minimally required class for BitBake is the
<filename>classes/base.bbclass</filename> file.
The <filename>base</filename> class is implicitly inherited by
every recipe.
BitBake looks for the class in the <filename>classes</filename>
directory of the project (i.e. <filename>hello/classes</filename>
in this example).
</para>
<para>Create the <filename>classes</filename> directory as follows:
<literallayout class='monospaced'>
$ cd $HOME/hello
$ mkdir classes
</literallayout>
Move to the <filename>classes</filename> directory and then
create the <filename>base.bbclass</filename> file by inserting
this single line:
<literallayout class='monospaced'>
addtask build
</literallayout>
The minimal task that BitBake runs is the
<filename>do_build</filename> task.
This is all the example needs in order to build the project.
Of course, the <filename>base.bbclass</filename> can have much
more depending on which build environments BitBake is
supporting.
For more information on the <filename>base.bbclass</filename> file,
you can look at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
</para></listitem>
<listitem><para><emphasis>Run BitBake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
file exists, you can run the <filename>bitbake</filename>
command again:
<literallayout class='monospaced'>
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
</literallayout>
BitBake is finally reporting no errors.
However, you can see that it really does not have anything
to do.
You need to create a recipe that gives BitBake something to do.
</para></listitem>
<listitem><para><emphasis>Creating a Layer:</emphasis>
While it is not really necessary for such a small example,
it is good practice to create a layer in which to keep your
code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
<note>
You can find additional information on adding a layer at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
</note>
</para>
<para>Minimally, you need a recipe file and a layer configuration
file in your layer.
The configuration file needs to be in the <filename>conf</filename>
directory inside the layer.
Use these commands to set up the layer and the <filename>conf</filename>
directory:
<literallayout class='monospaced'>
$ cd $HOME
$ mkdir mylayer
$ cd mylayer
$ mkdir conf
</literallayout>
Move to the <filename>conf</filename> directory and create a
<filename>layer.conf</filename> file that has the following:
<literallayout class='monospaced'>
BBPATH .= ":${<link linkend='var-LAYERDIR'>LAYERDIR</link>}"
<link linkend='var-BBFILES'>BBFILES</link> += "${LAYERDIR}/*.bb"
<link linkend='var-BBFILE_COLLECTIONS'>BBFILE_COLLECTIONS</link> += "mylayer"
<link linkend='var-BBFILE_PATTERN'>BBFILE_PATTERN_mylayer</link> := "^${LAYERDIR_RE}/"
</literallayout>
For information on these variables, click the links
to go to the definitions in the glossary.</para>
<para>You need to create the recipe file next.
Inside your layer at the top-level, use an editor and create
a recipe file named <filename>printhello.bb</filename> that
has the following:
<literallayout class='monospaced'>
<link linkend='var-DESCRIPTION'>DESCRIPTION</link> = "Prints Hello World"
<link linkend='var-PN'>PN</link> = 'printhello'
<link linkend='var-PV'>PV</link> = '1'
python do_build() {
bb.plain("********************");
bb.plain("* *");
bb.plain("* Hello, World! *");
bb.plain("* *");
bb.plain("********************");
}
</literallayout>
The recipe file simply provides a description of the
recipe, the name, version, and the <filename>do_build</filename>
task, which prints out "Hello World" to the console.
For more information on these variables, follow the links
to the glossary.
</para></listitem>
<listitem><para><emphasis>Run BitBake With a Target:</emphasis>
Now that a BitBake target exists, run the command and provide
that target:
<literallayout class='monospaced'>
$ cd $HOME/hello
$ bitbake printhello
ERROR: no recipe files to build, check your BBPATH and BBFILES?
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
</literallayout>
We have created the layer with the recipe and the layer
configuration file but it still seems that BitBake cannot
find the recipe.
BitBake needs a <filename>conf/bblayers.conf</filename> that
lists the layers for the project.
Without this file, BitBake cannot find the recipe.
</para></listitem>
<listitem><para><emphasis>Creating <filename>conf/bblayers.conf</filename>:</emphasis>
BitBake uses the <filename>conf/bblayers.conf</filename> file
to locate layers needed for the project.
This file must reside in the <filename>conf</filename> directory
of the project (i.e. <filename>hello/conf</filename> for this
example).</para>
<para>Set your working directory to the <filename>hello/conf</filename>
directory and then create the <filename>bblayers.conf</filename>
file so that it contains the following:
<literallayout class='monospaced'>
BBLAYERS ?= " \
/home/&lt;you&gt;/mylayer \
"
</literallayout>
You need to provide your own information for
<filename>you</filename> in the file.
</para></listitem>
<listitem><para><emphasis>Run BitBake With a Target:</emphasis>
Now that you have supplied the <filename>bblayers.conf</filename>
file, run the <filename>bitbake</filename> command and provide
the target:
<literallayout class='monospaced'>
$ bitbake printhello
Parsing recipes: 100% |##################################################################################|
Time: 00:00:00
Parsing of 1 .bb files complete (0 cached, 1 parsed). 1 targets, 0 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
NOTE: Preparing RunQueue
NOTE: Executing RunQueue Tasks
********************
* *
* Hello, World! *
* *
********************
NOTE: Tasks Summary: Attempted 1 tasks of which 0 didn't need to be rerun and all succeeded.
</literallayout>
BitBake finds the <filename>printhello</filename> recipe and
successfully runs the task.
<note>
After the first execution, re-running
<filename>bitbake printhello</filename> again will not
result in a BitBake run that prints the same console
output.
The reason for this is that the first time the
<filename>printhello.bb</filename> recipe's
<filename>do_build</filename> task executes
successfully, BitBake writes a stamp file for the task.
Thus, the next time you attempt to run the task
using that same <filename>bitbake</filename> command,
BitBake notices the stamp and therefore determines
that the task does not need to be re-run.
If you delete the <filename>tmp</filename> directory
or run <filename>bitbake -c clean printhello</filename>
and then re-run the build, the "Hello, World!" message will
be printed again.
</note>
</para></listitem>
</orderedlist>
</section>
</appendix>

<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="bitbake-user-manual-intro">
<title>Overview</title>
<para>
Welcome to the BitBake User Manual.
This manual provides information on the BitBake tool.
The information attempts to be as independent as possible regarding
systems that use BitBake, such as OpenEmbedded and the
Yocto Project.
In some cases, scenarios or examples within the context of
a build system are used in the manual to help with understanding.
For these cases, the manual clearly states the context.
</para>
<section id="intro">
<title>Introduction</title>
<para>
Fundamentally, BitBake is a generic task execution
engine that allows shell and Python tasks to be run
efficiently and in parallel while working within
complex inter-task dependency constraints.
One of BitBake's main users, OpenEmbedded, takes this core
and builds embedded Linux software stacks using
a task-oriented approach.
</para>
<para>
Conceptually, BitBake is similar to GNU Make in
some regards but has significant differences:
<itemizedlist>
<listitem><para>
BitBake executes tasks according to provided
metadata that builds up the tasks.
Metadata is stored in recipe (<filename>.bb</filename>)
and related recipe "append" (<filename>.bbappend</filename>)
files, configuration (<filename>.conf</filename>) and
underlying include (<filename>.inc</filename>) files, and
in class (<filename>.bbclass</filename>) files.
The metadata provides
BitBake with instructions on what tasks to run and
the dependencies between those tasks.
</para></listitem>
<listitem><para>
BitBake includes a fetcher library for obtaining source
code from various places such as local files, source control
systems, or websites.
</para></listitem>
<listitem><para>
The instructions for each unit to be built (e.g. a piece
of software) are known as "recipe" files and
contain all the information about the unit
(dependencies, source file locations, checksums, description
and so on).
</para></listitem>
<listitem><para>
BitBake includes a client/server abstraction and can
be used from a command line or used as a service over
XML-RPC and has several different user interfaces.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id="history-and-goals">
<title>History and Goals</title>
<para>
BitBake was originally a part of the OpenEmbedded project.
It was inspired by the Portage package management system
used by the Gentoo Linux distribution.
On December 7, 2004, OpenEmbedded project team member
Chris Larson split the project into two distinct pieces:
<itemizedlist>
<listitem><para>BitBake, a generic task executor</para></listitem>
<listitem><para>OpenEmbedded, a metadata set utilized by
BitBake</para></listitem>
</itemizedlist>
Today, BitBake is the primary basis of the
<ulink url="http://www.openembedded.org/">OpenEmbedded</ulink>
project, which is being used to build and maintain Linux
distributions such as the
<ulink url='http://www.angstrom-distribution.org/'>Angstrom Distribution</ulink>,
and which is also being used as the build tool for Linux projects
such as the
<ulink url='http://www.yoctoproject.org'>Yocto Project</ulink>.
</para>
<para>
Prior to BitBake, no other build tool adequately met the needs of
an aspiring embedded Linux distribution.
All of the build systems used by traditional desktop Linux
distributions lacked important functionality, and none of the
ad hoc Buildroot-based systems, prevalent in the
embedded space, were scalable or maintainable.
</para>
<para>
Some important original goals for BitBake were:
<itemizedlist>
<listitem><para>
Handle cross-compilation.
</para></listitem>
<listitem><para>
Handle inter-package dependencies (build time on
target architecture, build time on native
architecture, and runtime).
</para></listitem>
<listitem><para>
Support running any number of tasks within a given
package, including, but not limited to, fetching
upstream sources, unpacking them, patching them,
configuring them, and so forth.
</para></listitem>
<listitem><para>
Be Linux distribution agnostic for both build and
target systems.
</para></listitem>
<listitem><para>
Be architecture agnostic.
</para></listitem>
<listitem><para>
Support multiple build and target operating systems
(e.g. Cygwin, the BSDs, and so forth).
</para></listitem>
<listitem><para>
Be self contained, rather than tightly
integrated into the build machine's root
filesystem.
</para></listitem>
<listitem><para>
Handle conditional metadata on the target architecture,
operating system, distribution, and machine.
</para></listitem>
<listitem><para>
Make it easy to use the tools to supply local metadata and packages
against which to operate.
</para></listitem>
<listitem><para>
Make it easy to use BitBake to collaborate between multiple
projects for their builds.
</para></listitem>
<listitem><para>
Provide an inheritance mechanism to share
common metadata between many packages.
</para></listitem>
</itemizedlist>
Over time it became apparent that some further requirements
were necessary:
<itemizedlist>
<listitem><para>
Handle variants of a base recipe (e.g. native, sdk,
and multilib).
</para></listitem>
<listitem><para>
Split metadata into layers and allow layers
to enhance or override other layers.
</para></listitem>
<listitem><para>
Allow representation of a given set of input variables
to a task as a checksum.
Based on that checksum, allow acceleration of builds
with prebuilt components.
</para></listitem>
</itemizedlist>
BitBake satisfies all the original requirements and many more
with extensions being made to the basic functionality to
reflect the additional requirements.
Flexibility and power have always been the priorities.
BitBake is highly extensible and supports embedded Python code and
execution of any arbitrary tasks.
</para>
</section>
<section id="Concepts">
<title>Concepts</title>
<para>
BitBake is a program written in the Python language.
At the highest level, BitBake interprets metadata, decides
what tasks are required to run, and executes those tasks.
Similar to GNU Make, BitBake controls how software is
built.
GNU Make achieves its control through "makefiles", while
BitBake uses "recipes".
</para>
<para>
BitBake extends the capabilities of a simple
tool like GNU Make by allowing for the definition of much more
complex tasks, such as assembling entire embedded Linux
distributions.
</para>
<para>
The remainder of this section introduces several concepts
that should be understood in order to better leverage
the power of BitBake.
</para>
<section id='recipes'>
<title>Recipes</title>
<para>
BitBake Recipes, which are denoted by the file extension
<filename>.bb</filename>, are the most basic metadata files.
These recipe files provide BitBake with the following:
<itemizedlist>
<listitem><para>Descriptive information about the
package (author, homepage, license, and so on)</para></listitem>
<listitem><para>The version of the recipe</para></listitem>
<listitem><para>Existing dependencies (both build
and runtime dependencies)</para></listitem>
<listitem><para>Where the source code resides and
how to fetch it</para></listitem>
<listitem><para>Whether the source code requires
any patches, where to find them, and how to apply
them</para></listitem>
<listitem><para>How to configure and compile the
source code</para></listitem>
<listitem><para>Where on the target machine to install the
package or packages created</para></listitem>
</itemizedlist>
</para>
<para>
Within the context of BitBake, or any project utilizing BitBake
as its build system, files with the <filename>.bb</filename>
extension are referred to as recipes.
<note>
The term "package" is also commonly used to describe recipes.
However, since the same word is used to describe packaged
output from a project, it is best to maintain a single
descriptive term - "recipes".
Put another way, a single "recipe" file is quite capable
of generating a number of related but separately installable
"packages".
In fact, that ability is fairly common.
</note>
</para>
</section>
<section id='configuration-files'>
<title>Configuration Files</title>
<para>
Configuration files, which are denoted by the
<filename>.conf</filename> extension, define
various configuration variables that govern the project's build
process.
These files fall into several areas that define
machine configuration options, distribution configuration
options, compiler tuning options, general common
configuration options, and user configuration options.
The main configuration file is the sample
<filename>bitbake.conf</filename> file, which is
located within the BitBake source tree
<filename>conf</filename> directory.
</para>
</section>
<section id='classes'>
<title>Classes</title>
<para>
Class files, which are denoted by the
<filename>.bbclass</filename> extension, contain
information that is useful to share between metadata files.
The BitBake source tree currently comes with one class metadata file
called <filename>base.bbclass</filename>.
You can find this file in the
<filename>classes</filename> directory.
The <filename>base.bbclass</filename> class file is special since it
is always included automatically for all recipes
and classes.
This class contains definitions for standard basic tasks such
as fetching, unpacking, configuring (empty by default),
compiling (runs any Makefile present), installing (empty by
default) and packaging (empty by default).
These tasks are often overridden or extended by other classes
added during the project development process.
</para>
</section>
<section id='layers'>
<title>Layers</title>
<para>
Layers allow you to isolate different types of
customizations from each other.
While you might find it tempting to keep everything in one layer
when working on a single project, the more modular you organize
your metadata, the easier it is to cope with future changes.
</para>
<para>
To illustrate how you can use layers to keep things modular,
consider customizations you might make to support a specific target machine.
These types of customizations typically reside in a special layer,
rather than a general layer, called a Board Support Package (BSP)
Layer.
Furthermore, the machine customizations should be isolated from
recipes and metadata that support a new GUI environment, for
example.
This situation gives you a couple of layers: one for the machine
configurations and one for the GUI environment.
It is important to understand, however, that the BSP layer can still
make machine-specific additions to recipes within
the GUI environment layer without polluting the GUI layer itself
with those machine-specific changes.
You can accomplish this through a recipe that is a BitBake append
(<filename>.bbappend</filename>) file.
</para>
</section>
<section id='append-bbappend-files'>
<title>Append Files</title>
<para>
Append files, which are files that have the
<filename>.bbappend</filename> file extension, extend or
override information in an existing recipe file.
</para>
<para>
BitBake expects every append file to have a corresponding recipe file.
Furthermore, the append file and corresponding recipe file
must use the same root filename.
The filenames can differ only in the file type suffix used
(e.g. <filename>formfactor_0.0.bb</filename> and
<filename>formfactor_0.0.bbappend</filename>).
</para>
<para>
Information in append files extends or
overrides the information in the underlying,
similarly-named recipe files.
</para>
<para>
When you name an append file, you can use the
wildcard character (%) to allow for matching recipe names.
For example, suppose you have an append file named
as follows:
<literallayout class='monospaced'>
busybox_1.21.%.bbappend
</literallayout>
That append file would match any <filename>busybox_1.21.x.bb</filename>
version of the recipe.
So, the append file would match the following recipe names:
<literallayout class='monospaced'>
busybox_1.21.1.bb
busybox_1.21.2.bb
busybox_1.21.3.bb
</literallayout>
If the <filename>busybox</filename> recipe was updated to
<filename>busybox_1.3.0.bb</filename>, the append name would not
match.
However, if you named the append file
<filename>busybox_1.%.bbappend</filename>, then you would have a match.
</para>
<para>
In the most general case, you could name the append file something as
simple as <filename>busybox_%.bbappend</filename> to be entirely
version independent.
</para>
</section>
</section>
<section id='obtaining-bitbake'>
<title>Obtaining BitBake</title>
<para>
You can obtain BitBake several different ways:
<itemizedlist>
<listitem><para><emphasis>Cloning BitBake:</emphasis>
Using Git to clone the BitBake source code repository
is the recommended method for obtaining BitBake.
Cloning the repository makes it easy to get bug fixes
and have access to stable branches and the master
branch.
Once you have cloned BitBake, you should work from the
latest stable branch, since the master branch is where
BitBake development takes place and might contain less
stable changes (see the branch checkout example following
this list).
</para>
<para>You usually need a version of BitBake
that matches the metadata you are using.
The metadata is generally backwards compatible but
not forward compatible.</para>
<para>Here is an example that clones the BitBake repository:
<literallayout class='monospaced'>
$ git clone git://git.openembedded.org/bitbake
</literallayout>
This command clones the BitBake Git repository into a
directory called <filename>bitbake</filename>.
Alternatively, you can
designate a directory after the
<filename>git clone</filename> command
if you want to call the new directory something
other than <filename>bitbake</filename>.
Here is an example that names the directory
<filename>bbdev</filename>:
<literallayout class='monospaced'>
$ git clone git://git.openembedded.org/bitbake bbdev
</literallayout></para></listitem>
<listitem><para><emphasis>Installation using your Distribution
Package Management System:</emphasis>
This method is not
recommended because the BitBake version that is
provided by your distribution, in most cases,
is several
releases behind a snapshot of the BitBake repository.
</para></listitem>
<listitem><para><emphasis>Taking a snapshot of BitBake:</emphasis>
Downloading a snapshot of BitBake from the
source code repository gives you access to a known
branch or release of BitBake.
<note>
Cloning the Git repository, as described earlier,
is the preferred method for getting BitBake.
Cloning the repository makes it easier to update as
patches are added to the stable branches.
</note></para>
<para>The following example downloads a snapshot of
BitBake version 1.17.0:
<literallayout class='monospaced'>
$ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
</literallayout>
After extraction of the tarball using the tar utility,
you have a directory entitled
<filename>bitbake-1.17.0</filename>.
</para></listitem>
<listitem><para><emphasis>Using the BitBake that Comes With Your
Build Checkout:</emphasis>
A final possibility for getting a copy of BitBake is that it
already comes with your checkout of a larger BitBake-based build
system, such as Poky or the Yocto Project.
Rather than manually checking out individual layers and
gluing them together yourself, you can check
out an entire build system.
The checkout will already include a version of BitBake that
has been thoroughly tested for compatibility with the other
components.
For information on how to check out a particular BitBake-based
build system, consult that build system's supporting documentation.
</para></listitem>
</itemizedlist>
</para>
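<para>
As mentioned in the cloning bullet above, you typically switch to
a stable branch after cloning. Assuming a stable branch named
"1.17" exists in the repository, the checkout might look like
this:
<literallayout class='monospaced'>
     $ cd bitbake
     $ git checkout -b 1.17 origin/1.17
</literallayout>
</para>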
</section>
<section id="bitbake-user-manual-command">
<title>The BitBake Command</title>
<para>
The <filename>bitbake</filename> command is the primary interface
to the BitBake tool.
This section presents the BitBake command syntax and provides
several execution examples.
</para>
<section id='usage-and-syntax'>
<title>Usage and syntax</title>
<para>
Following is the usage and syntax for BitBake:
<literallayout class='monospaced'>
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
Execute tasks from a specific .bb recipe directly.
WARNING: Does not handle any dependencies from other
recipes.
-k, --continue Continue as much as possible after an error. While the
target that failed and anything depending on it cannot
be built, as much as possible will be built before
stopping.
-a, --tryaltconfigs Continue with builds by trying to use alternative
providers where possible.
-f, --force Force the specified targets/task to run (invalidating
any existing stamp file).
-c CMD, --cmd=CMD Specify the task to execute. The exact options
available depend on the metadata. Some examples might
be 'compile' or 'populate_sysroot' or 'listtasks' may
give a list of the tasks available.
-C INVALIDATE_STAMP, --clear-stamp=INVALIDATE_STAMP
Invalidate the stamp for the specified task such as
'compile' and then run the default task for the
specified target(s).
-r PREFILE, --read=PREFILE
Read the specified file before bitbake.conf.
-R POSTFILE, --postread=POSTFILE
Read the specified file after bitbake.conf.
-v, --verbose Enable tracing of shell tasks (with 'set -x').
Also print bb.note(...) messages to stdout (in
addition to writing them to ${T}/log.do_&lt;task&gt;).
-D, --debug Increase the debug level. You can specify this
more than once. -D sets the debug level to 1,
where only bb.debug(1, ...) messages are printed
to stdout; -DD sets the debug level to 2, where
both bb.debug(1, ...) and bb.debug(2, ...)
messages are printed; etc. Without -D, no debug
messages are printed. Note that -D only affects
output to stdout. All debug messages are written
to ${T}/log.do_taskname, regardless of the debug
level.
-n, --dry-run Don't execute, just go through the motions.
-S SIGNATURE_HANDLER, --dump-signatures=SIGNATURE_HANDLER
Dump out the signature construction information, with
no task execution. The SIGNATURE_HANDLER parameter is
passed to the handler. Two common values are none and
printdiff but the handler may define more/less. none
means only dump the signature, printdiff means compare
the dumped signature with the cached one.
-p, --parse-only Quit after parsing the BB recipes.
-s, --show-versions Show current and preferred versions of all recipes.
-e, --environment Show the global or per-recipe environment complete
with information about where variables were
set/changed.
-g, --graphviz Save dependency tree information for the specified
targets in the dot syntax.
-I EXTRA_ASSUME_PROVIDED, --ignore-deps=EXTRA_ASSUME_PROVIDED
Assume these dependencies don't exist and are already
provided (equivalent to ASSUME_PROVIDED). Useful to
make dependency graphs more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (taskexp, knotty or
ncurses - default knotty).
-t SERVERTYPE, --servertype=SERVERTYPE
Choose which server type to use (process or xmlrpc -
default process).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
floating revisions have changed or not.
--server-only Run bitbake without a UI, only starting a server
(cooker) process.
-B BIND, --bind=BIND The name/address for the bitbake server to bind to.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
-m, --kill-server Terminate the remote server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
</literallayout>
</para>
</section>
<section id='bitbake-examples'>
<title>Examples</title>
<para>
This section presents some examples showing how to use BitBake.
</para>
<section id='example-executing-a-task-against-a-single-recipe'>
<title>Executing a Task Against a Single Recipe</title>
<para>
Executing tasks for a single recipe file is relatively simple.
You specify the file in question, and BitBake parses
it and executes the specified task.
If you do not specify a task, BitBake executes the default
task, which is "build".
BitBake obeys inter-task dependencies when doing
so.
</para>
<para>
The following command runs the build task, which is
the default task, on the <filename>foo_1.0.bb</filename>
recipe file:
<literallayout class='monospaced'>
$ bitbake -b foo_1.0.bb
</literallayout>
The following command runs the clean task on the
<filename>foo.bb</filename> recipe file:
<literallayout class='monospaced'>
$ bitbake -b foo.bb -c clean
</literallayout>
<note>
The "-b" option explicitly does not handle recipe
dependencies.
Other than for debugging purposes, it is recommended
that you use the syntax presented in the next section
instead.
</note>
</para>
</section>
<section id='executing-tasks-against-a-set-of-recipe-files'>
<title>Executing Tasks Against a Set of Recipe Files</title>
<para>
There are a number of additional complexities introduced
when one wants to manage multiple <filename>.bb</filename>
files.
Clearly there needs to be a way to tell BitBake what
files are available and, of those, which you
want to execute.
There also needs to be a way for each recipe
to express its dependencies, both for build-time and
runtime.
There must be a way for you to express recipe preferences
when multiple recipes provide the same functionality, or when
there are multiple versions of a recipe.
</para>
<para>
The <filename>bitbake</filename> command, when not using
"--buildfile" or "-b" only accepts a "PROVIDES".
You cannot provide anything else.
By default, a recipe file generally "PROVIDES" its
"packagename" as shown in the following example:
<literallayout class='monospaced'>
$ bitbake foo
</literallayout>
This next example "PROVIDES" the package name and also uses
the "-c" option to tell BitBake to just execute the
<filename>do_clean</filename> task:
<literallayout class='monospaced'>
$ bitbake -c clean foo
</literallayout>
</para>
</section>
<section id='executing-a-list-of-task-and-recipe-combinations'>
<title>Executing a List of Task and Recipe Combinations</title>
<para>
The BitBake command line supports specifying different
tasks for individual targets when you specify multiple
targets.
For example, suppose you had two targets (or recipes)
<filename>myfirstrecipe</filename> and
<filename>mysecondrecipe</filename> and you needed
BitBake to run <filename>taskA</filename> for the first
recipe and <filename>taskB</filename> for the second
recipe:
<literallayout class='monospaced'>
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
</literallayout>
</para>
</section>
<section id='generating-dependency-graphs'>
<title>Generating Dependency Graphs</title>
<para>
BitBake is able to generate dependency graphs using
the <filename>dot</filename> syntax.
You can convert these graphs into images using the
<filename>dot</filename> tool from
<ulink url='http://www.graphviz.org'>Graphviz</ulink>.
</para>
<para>
When you generate a dependency graph, BitBake writes three files
to the current working directory:
<itemizedlist>
<listitem><para>
<emphasis><filename>recipe-depends.dot</filename>:</emphasis>
Shows dependencies between recipes (i.e. a collapsed version of
<filename>task-depends.dot</filename>).
</para></listitem>
<listitem><para>
<emphasis><filename>task-depends.dot</filename>:</emphasis>
Shows dependencies between tasks.
These dependencies match BitBake's internal task execution list.
</para></listitem>
<listitem><para>
<emphasis><filename>pn-buildlist</filename>:</emphasis>
Shows a simple list of targets that are to be built.
</para></listitem>
</itemizedlist>
</para>
<para>
To omit common dependencies from the graph, use the "-I"
option to list them and BitBake leaves them out.
Removing this information can produce more readable graphs.
In this way, you can keep <filename>DEPENDS</filename> from
inherited classes such as <filename>base.bbclass</filename>
out of the graph.
</para>
<para>
Here are two examples that create dependency graphs.
The second example omits dependencies that are common in
OpenEmbedded from the graph:
<literallayout class='monospaced'>
$ bitbake -g foo
$ bitbake -g -I virtual/kernel -I eglibc foo
</literallayout>
</para>
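<para>
Once the <filename>.dot</filename> files exist, you can render
them into images with the <filename>dot</filename> tool, assuming
Graphviz is installed on your build host:
<literallayout class='monospaced'>
     $ dot -Tpng task-depends.dot -o task-depends.png
</literallayout>
The "-Tpng" option selects PNG output; other output formats such
as SVG work the same way.
</para>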
</section>
</section>
</section>
</chapter>

View File

@@ -1,984 +0,0 @@
/*
Generic XHTML / DocBook XHTML CSS Stylesheet.
Browser wrangling and typographic design by
Oyvind Kolas / pippin@gimp.org
Customised for Poky by
Matthew Allum / mallum@o-hand.com
Thanks to:
Liam R. E. Quin
William Skaggs
Jakub Steiner
Structure
---------
The stylesheet is divided into the following sections:
Positioning
Margins, paddings, width, font-size, clearing.
Decorations
Borders, style
Colors
Colors
Graphics
Graphical backgrounds
Nasty IE tweaks
Workarounds needed to make it work in internet explorer,
currently makes the stylesheet non validating, but up until
this point it is validating.
Mozilla extensions
Transparency for footer
Rounded corners on boxes
*/
/*************** /
/ Positioning /
/ ***************/
body {
font-family: Verdana, Sans, sans-serif;
min-width: 640px;
width: 80%;
margin: 0em auto;
padding: 2em 5em 5em 5em;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
.authorgroup {
background-color: transparent;
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("figures/bitbake-title.png");
background-position: left top;
margin-top: -256px;
padding-right: 50px;
margin-left: 0px;
text-align: right;
width: 740px;
}
h3.author {
margin: 0em 0me 0em 0em;
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
color: #333;
clear: both;
}
.author tt.email {
font-size: 66%;
}
.titlepage hr {
width: 0em;
clear: both;
}
.revhistory {
padding-top: 2em;
clear: both;
}
.toc,
.list-of-tables,
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
color: #00557D;
}
.toc p,
.list-of-tables p,
.list-of-figures p,
.list-of-examples p {
padding: 0em 0em 0em 0em;
padding: 0em 0em 0.3em;
margin: 1.5em 0em 0em 0em;
}
.toc p b,
.list-of-tables p b,
.list-of-figures p b,
.list-of-examples p b{
font-size: 100.0%;
font-weight: bold;
}
.toc dl,
.list-of-tables dl,
.list-of-figures dl,
.list-of-examples dl {
margin: 0em 0em 0.5em 0em;
padding: 0em 0em 0em 0em;
}
.toc dt {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
}
.toc dd {
margin: 0em 0em 0em 2.6em;
padding: 0em 0em 0em 0em;
}
div.glossary dl,
div.variablelist dl {
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
font-weight: normal;
width: 20em;
text-align: right;
}
.variablelist dl dt {
margin-top: 0.5em;
}
.glossary dl dd,
.variablelist dl dd {
margin-top: -1em;
margin-left: 25.5em;
}
.glossary dd p,
.variablelist dd p {
margin-top: 0em;
margin-bottom: 1em;
}
div.calloutlist table td {
padding: 0em 0em 0em 0em;
margin: 0em 0em 0em 0em;
}
div.calloutlist table td p {
margin-top: 0em;
margin-bottom: 1em;
}
div p.copyright {
text-align: left;
}
div.legalnotice p.legalnotice-title {
margin-bottom: 0em;
}
p {
line-height: 1.5em;
margin-top: 0em;
}
dl {
padding-top: 0em;
}
hr {
border: solid 1px;
}
.mediaobject,
.mediaobjectco {
text-align: center;
}
img {
border: none;
}
ul {
padding: 0em 0em 0em 1.5em;
}
ul li {
padding: 0em 0em 0em 0em;
}
ul li p {
text-align: left;
}
table {
width :100%;
}
th {
padding: 0.25em;
text-align: left;
font-weight: normal;
vertical-align: top;
}
td {
padding: 0.25em;
vertical-align: top;
}
p a[id] {
margin: 0px;
padding: 0px;
display: inline;
background-image: none;
}
a {
text-decoration: underline;
color: #444;
}
pre {
overflow: auto;
}
a:hover {
text-decoration: underline;
/*font-weight: bold;*/
}
/* This style defines how the permalink character
appears by itself and when hovered over with
the mouse. */
[alt='Permalink'] { color: #eee; }
[alt='Permalink']:hover { color: black; }
div.informalfigure,
div.informalexample,
div.informaltable,
div.figure,
div.table,
div.example {
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
div.informalfigure p.title b,
div.informalexample p.title b,
div.informaltable p.title b,
div.figure p.title b,
div.example p.title b,
div.table p.title b{
padding-top: 0em;
margin-top: 0em;
font-size: 100%;
font-weight: normal;
}
.mediaobject .caption,
.mediaobject .caption p {
text-align: center;
font-size: 80%;
padding-top: 0.5em;
padding-bottom: 0.5em;
}
.epigraph {
padding-left: 55%;
margin-bottom: 1em;
}
.epigraph p {
text-align: left;
}
.epigraph .quote {
font-style: italic;
}
.epigraph .attribution {
font-style: normal;
text-align: right;
}
span.application {
font-style: italic;
}
.programlisting {
font-family: monospace;
font-size: 80%;
white-space: pre;
margin: 1.33em 0em;
padding: 1.33em;
}
.tip,
.warning,
.caution,
.note {
margin-top: 1em;
margin-bottom: 1em;
}
/* force full width of table within div */
.tip table,
.warning table,
.caution table,
.note table {
border: none;
width: 100%;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
padding: 0.8em 0.0em 0.0em 0.0em;
margin : 0em 0em 0em 0em;
}
.tip p,
.warning p,
.caution p,
.note p {
margin-top: 0.5em;
margin-bottom: 0.5em;
padding-right: 1em;
text-align: left;
}
.acronym {
text-transform: uppercase;
}
b.keycap,
.keycap {
padding: 0.09em 0.3em;
margin: 0em;
}
.itemizedlist li {
clear: none;
}
.filename {
font-size: medium;
font-family: Courier, monospace;
}
div.navheader, div.heading{
position: absolute;
left: 0em;
top: 0em;
width: 100%;
background-color: #cdf;
width: 100%;
}
div.navfooter, div.footing{
position: fixed;
left: 0em;
bottom: 0em;
background-color: #eee;
width: 100%;
}
div.navheader td,
div.navfooter td {
font-size: 66%;
}
div.navheader table th {
/*font-family: Georgia, Times, serif;*/
/*font-size: x-large;*/
font-size: 80%;
}
div.navheader table {
border-left: 0em;
border-right: 0em;
border-top: 0em;
width: 100%;
}
div.navfooter table {
border-left: 0em;
border-right: 0em;
border-bottom: 0em;
width: 100%;
}
div.navheader table td a,
div.navfooter table td a {
color: #777;
text-decoration: none;
}
/* normal text in the footer */
div.navfooter table td {
color: black;
}
div.navheader table td a:visited,
div.navfooter table td a:visited {
color: #444;
}
/* links in header and footer */
div.navheader table td a:hover,
div.navfooter table td a:hover {
text-decoration: underline;
background-color: transparent;
color: #33a;
}
div.navheader hr,
div.navfooter hr {
display: none;
}
.qandaset tr.question td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.qandaset tr.answer td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.answer td {
padding-bottom: 1.5em;
}
.emphasis {
font-weight: bold;
}
/************* /
/ decorations /
/ *************/
.titlepage {
}
.part .title {
}
.subtitle {
border: none;
}
/*
h1 {
border: none;
}
h2 {
border-top: solid 0.2em;
border-bottom: solid 0.06em;
}
h3 {
border-top: 0em;
border-bottom: solid 0.06em;
}
h4 {
border: 0em;
border-bottom: solid 0.06em;
}
h5 {
border: 0em;
}
*/
.programlisting {
border: solid 1px;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example {
border: 1px solid;
}
.tip,
.warning,
.caution,
.note {
border: 1px solid;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom: 1px solid;
}
.question td {
border-top: 1px solid black;
}
.answer {
}
b.keycap,
.keycap {
border: 1px solid;
}
div.navheader, div.heading{
border-bottom: 1px solid;
}
div.navfooter, div.footing{
border-top: 1px solid;
}
/********* /
/ colors /
/ *********/
body {
color: #333;
background: white;
}
a {
background: transparent;
}
a:hover {
background-color: #dedede;
}
h1,
h2,
h3,
h4,
h5,
h6,
h7,
h8 {
background-color: transparent;
}
hr {
border-color: #aaa;
}
.tip, .warning, .caution, .note {
border-color: #fff;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom-color: #fff;
}
.warning {
background-color: #f0f0f2;
}
.caution {
background-color: #f0f0f2;
}
.tip {
background-color: #f0f0f2;
}
.note {
background-color: #f0f0f2;
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
color: #044;
}
div.figure,
div.table,
div.example,
div.informalfigure,
div.informaltable,
div.informalexample {
border-color: #aaa;
}
pre.programlisting {
color: black;
background-color: #fff;
border-color: #aaa;
border-width: 2px;
}
.guimenu,
.guilabel,
.guimenuitem {
background-color: #eee;
}
b.keycap,
.keycap {
background-color: #eee;
border-color: #999;
}
div.navheader {
border-color: black;
}
div.navfooter {
border-color: black;
}
/*********** /
/ graphics /
/ ***********/
/*
body {
background-image: url("images/body_bg.jpg");
background-attachment: fixed;
}
.navheader,
.note,
.tip {
background-image: url("images/note_bg.jpg");
background-attachment: fixed;
}
.warning,
.caution {
background-image: url("images/warning_bg.jpg");
background-attachment: fixed;
}
.figure,
.informalfigure,
.example,
.informalexample,
.table,
.informaltable {
background-image: url("images/figure_bg.jpg");
background-attachment: fixed;
}
*/
h1,
h2,
h3,
h4,
h5,
h6,
h7{
}
/*
Example of how to stick an image as part of the title.
div.article .titlepage .title
{
background-image: url("figures/white-on-black.png");
background-position: center;
background-repeat: repeat-x;
}
*/
div.preface .titlepage .title,
div.colophon .title,
div.chapter .titlepage .title,
div.article .titlepage .title
{
}
div.section div.section .titlepage .title,
div.sect2 .titlepage .title {
background: none;
}
h1.title {
background-color: transparent;
background-repeat: no-repeat;
height: 256px;
text-indent: -9000px;
overflow:hidden;
}
h2.subtitle {
background-color: transparent;
text-indent: -9000px;
overflow:hidden;
width: 0px;
display: none;
}
/*************************************** /
/ pippin.gimp.org specific alterations /
/ ***************************************/
/*
div.heading, div.navheader {
color: #777;
font-size: 80%;
padding: 0;
margin: 0;
text-align: left;
position: absolute;
top: 0px;
left: 0px;
width: 100%;
height: 50px;
background: url('/gfx/heading_bg.png') transparent;
background-repeat: repeat-x;
background-attachment: fixed;
border: none;
}
div.heading a {
color: #444;
}
div.footing, div.navfooter {
border: none;
color: #ddd;
font-size: 80%;
text-align:right;
width: 100%;
padding-top: 10px;
position: absolute;
bottom: 0px;
left: 0px;
background: url('/gfx/footing_bg.png') transparent;
}
*/
/****************** /
/ nasty ie tweaks /
/ ******************/
/*
div.heading, div.navheader {
width:expression(document.body.clientWidth + "px");
}
div.footing, div.navfooter {
width:expression(document.body.clientWidth + "px");
margin-left:expression("-5em");
}
body {
padding:expression("4em 5em 0em 5em");
}
*/
/**************************************** /
/ mozilla vendor specific css extensions /
/ ****************************************/
/*
div.navfooter, div.footing{
-moz-opacity: 0.8em;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example,
.tip,
.warning,
.caution,
.note {
-moz-border-radius: 0.5em;
}
b.keycap,
.keycap {
-moz-border-radius: 0.3em;
}
*/
table tr td table tr td {
display: none;
}
hr {
display: none;
}
table {
border: 0em;
}
.photo {
float: right;
margin-left: 1.5em;
margin-bottom: 1.5em;
margin-top: 0em;
max-width: 17em;
border: 1px solid gray;
padding: 3px;
background: white;
}
.seperator {
padding-top: 2em;
clear: both;
}
#validators {
margin-top: 5em;
text-align: right;
color: #777;
}
@media print {
body {
font-size: 8pt;
}
.noprint {
display: none;
}
}
.tip,
.note {
background: #f0f0f2;
color: #333;
padding: 20px;
margin: 20px;
}
.tip h3,
.note h3 {
padding: 0em;
margin: 0em;
font-size: 2em;
font-weight: bold;
color: #333;
}
.tip a,
.note a {
color: #333;
text-decoration: underline;
}
.footnote {
font-size: small;
color: #333;
}
/* Changes the announcement text */
.tip h3,
.warning h3,
.caution h3,
.note h3 {
font-size:large;
color: #00557D;
}

View File

@@ -1,88 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id='bitbake-user-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>
<mediaobject>
<imageobject>
<imagedata fileref='figures/bitbake-title.png'
format='SVG'
align='left' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>
<title>
BitBake User Manual
</title>
<authorgroup>
<author>
<firstname>Richard Purdie, Chris Larson, and </firstname> <surname>Phil Blundell</surname>
<affiliation>
<orgname>BitBake Community</orgname>
</affiliation>
<email>bitbake-devel@lists.openembedded.org</email>
</author>
</authorgroup>
<!--
# Add in some revision history if we want it here.
<revhistory>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevent comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevent comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevent comment</revremark>
</revision>
<revision>
<revnumber>x.x</revnumber>
<date>dd month year</date>
<revremark>Some relevent comment</revremark>
</revision>
</revhistory>
-->
<copyright>
<year>2004-2016</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>
</copyright>
<legalnotice>
<para>
This work is licensed under the Creative Commons Attribution License.
To view a copy of this license, visit
<ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink>
or send a letter to Creative Commons, 444 Castro Street,
Suite 900, Mountain View, California 94041, USA.
</para>
</legalnotice>
</bookinfo>
<xi:include href="bitbake-user-manual-intro.xml"/>
<xi:include href="bitbake-user-manual-execution.xml"/>
<xi:include href="bitbake-user-manual-metadata.xml"/>
<xi:include href="bitbake-user-manual-fetching.xml"/>
<xi:include href="bitbake-user-manual-ref-variables.xml"/>
<xi:include href="bitbake-user-manual-hello.xml"/>
</book>

Binary file not shown.


View File

@@ -89,7 +89,7 @@ quit after parsing the BB files (developers only)
show current and preferred versions of all packages
.TP
.B \-e, \-\-environment
show the global or per-recipe environment (this is what used to be bbread)
show the global or per-package environment (this is what used to be bbread)
.TP
.B \-g, \-\-graphviz
emit the dependency trees of the specified packages in the dot syntax
@@ -105,7 +105,7 @@ Show debug logging for the specified logging domains
profile the command and print a report
.TP
.B \-uUI, \-\-ui=UI
User interface to use. Currently, knotty, taskexp or ncurses can be specified as UI.
User interface to use. Currently, hob, depexp, goggle or ncurses can be specified as UI.
.TP
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
Choose which server to use, none, process or xmlrpc.

View File

@@ -0,0 +1,56 @@
topdir = .
manual = $(topdir)/usermanual.xml
# types = pdf txt rtf ps xhtml html man tex texi dvi
# types = pdf txt
types = $(xmltotypes) $(htmltypes)
xmltotypes = pdf txt
htmltypes = html xhtml
htmlxsl = $(if $(filter $@,$(foreach type,$(htmltypes),$(type)-nochunks)),http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl,http://docbook.sourceforge.net/release/xsl/current/$@/chunk.xsl)
htmlcssfile = docbook.css
htmlcss = $(topdir)/html.css
# htmlcssfile =
# htmlcss =
cleanfiles = $(foreach i,$(types),$(topdir)/$(i))
ifdef DEBUG
define command
$(1)
endef
else
define command
@echo $(2) $(3) $(4)
@$(1) >/dev/null
endef
endif
all: $(types)
lint: $(manual) FORCE
$(call command,xmllint --xinclude --postvalid --noout $(manual),XMLLINT $(manual))
$(types) $(foreach type,$(htmltypes),$(type)-nochunks): lint FORCE
$(foreach type,$(htmltypes),$(type)-nochunks): $(if $(htmlcss),$(htmlcss)) $(manual)
@mkdir -p $@
ifdef htmlcss
$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual) > $@/index.$(patsubst %-nochunks,%,$@),XSLTPROC $@ $(manual))
$(htmltypes): $(if $(htmlcss),$(htmlcss)) $(manual)
@mkdir -p $@
ifdef htmlcss
$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))
$(xmltotypes): $(manual)
$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
clean:
rm -rf $(cleanfiles)
$(foreach i,$(types) $(foreach type,$(htmltypes),$(type)-nochunks),clean-$(i)):
rm -rf $(patsubst clean-%,%,$@)
FORCE:

View File

@@ -0,0 +1,627 @@
<?xml version="1.0"?>
<!--
ex:ts=4:sw=4:sts=4:et
-*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book>
<bookinfo>
<title>BitBake User Manual</title>
<authorgroup>
<corpauthor>BitBake Team</corpauthor>
</authorgroup>
<copyright>
<year>2004, 2005, 2006, 2011</year>
<holder>Chris Larson</holder>
<holder>Phil Blundell</holder>
<holder>Richard Purdie</holder>
</copyright>
<legalnotice>
<para>This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit <ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink> or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.</para>
</legalnotice>
</bookinfo>
<chapter>
<title>Introduction</title>
<section>
<title>Overview</title>
<para>BitBake is, at its simplest, a tool for executing
tasks and managing metadata. As such, its similarities to GNU make and other
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions/projects such as Angstrom and the Yocto project.</para>
</section>
<section>
<title>Background and goals</title>
<para>Prior to BitBake, no other build tool adequately met
the needs of an aspiring embedded Linux distribution. All of the
buildsystems used by traditional desktop Linux distributions lacked
important functionality, and none of the ad-hoc
<emphasis>buildroot</emphasis> systems, prevalent in the
embedded space, were scalable or maintainable.</para>
<para>Some important original goals for BitBake were:
<itemizedlist>
<listitem><para>Handle crosscompilation.</para></listitem>
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be architecture agnostic</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
<listitem><para>Must make it easy to collaborate
between multiple projects using BitBake for their
builds.</para></listitem>
<listitem><para>Should provide an inheritance mechanism to
share common metadata between many packages.</para></listitem>
</itemizedlist>
</para>
<para>Over time it has become apparent that some further requirements were necessary:
<itemizedlist>
<listitem><para>Handle variants of a base recipe (native, sdk, multilib).</para></listitem>
<listitem><para>Able to split metadata into layers and allow layers to override each other.</para></listitem>
<listitem><para>Allow representation of a given set of input variables to a task as a checksum.</para></listitem>
<listitem><para>Based on that checksum, allow acceleration of builds with prebuilt components.</para></listitem>
</itemizedlist>
</para>
<para>BitBake satisfies all the original requirements and many more, with extensions being made to the basic functionality to reflect the additional requirements. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
</section>
</chapter>
<chapter>
<title>Metadata</title>
<section>
<title>Description</title>
<itemizedlist>
<para>BitBake metadata can be classified into 3 major areas:</para>
<listitem>
<para>Configuration Files</para>
</listitem>
<listitem>
<para>.bb Files</para>
</listitem>
<listitem>
<para>Classes</para>
</listitem>
</itemizedlist>
<para>What follows are a large number of examples of BitBake metadata. Any syntax which isn't supported in any of the aforementioned areas will be documented as such.</para>
<section>
<title>Basic variable setting</title>
<para><screen><varname>VARIABLE</varname> = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> is <literal>value</literal>.</para>
</section>
<section>
<title>Variable expansion</title>
<para>BitBake supports variables referencing one another's contents using a syntax which is similar to shell scripting</para>
<para><screen><varname>A</varname> = "aval"
<varname>B</varname> = "pre${A}post"</screen></para>
<para>This results in <varname>A</varname> containing <literal>aval</literal> and <varname>B</varname> containing <literal>preavalpost</literal>.</para>
</section>
<section>
<title>Setting a default value (?=)</title>
<para><screen><varname>A</varname> ?= "aval"</screen></para>
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
</section>
<section>
<title>Setting a weak default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"
<varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy/weak assignment in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used. Any other setting of A using = or ?= will however override the value set with ??=</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
<para>:= results in a variable's contents being expanded immediately, rather than when the variable is actually used.</para>
<para><screen><varname>T</varname> = "123"
<varname>A</varname> := "${B} ${A} test ${T}"
<varname>T</varname> = "456"
<varname>B</varname> = "${T} bval"
<varname>C</varname> = "cval"
<varname>C</varname> := "${C}append"</screen></para>
<para>In that example, <varname>A</varname> would contain <literal> test 123</literal>, <varname>B</varname> would contain <literal>456 bval</literal>, and <varname>C</varname> would be <literal>cvalappend</literal>.</para>
</section>
<section>
<title>Appending (+=) and prepending (=+)</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> += "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =+ "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bval additionaldata</literal> and <varname>C</varname> is <literal>test cval</literal>.</para>
</section>
<section>
<title>Appending (.=) and prepending (=.) without spaces</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
will be introduced.</para>
</section>
<section>
<title>Appending and Prepending (override style syntax)</title>
<para><screen><varname>B</varname> = "bval"
<varname>B_append</varname> = " additional data"
<varname>C</varname> = "cval"
<varname>C_prepend</varname> = "additional data "</screen></para>
<para>This example results in <varname>B</varname> becoming <literal>bval additional data</literal>
and <varname>C</varname> becoming <literal>additional data cval</literal>. Note the spaces in the append.
Unlike the += operator, additional space is not automatically added. You must take steps to add space
yourself.</para>
</section>
<section>
<title>Removing (override style syntax)</title>
<para><screen><varname>FOO</varname> = "123 456 789 123456 123 456 123 456"
<varname>FOO_remove</varname> = "123"
<varname>FOO_remove</varname> = "456"</screen></para>
<para>In this example, <varname>FOO</varname> is now <literal>789 123456</literal>.</para>
</section>
<section>
<title>Conditional metadata set</title>
<para>OVERRIDES is a <quote>:</quote> separated variable containing each item you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote> specific version of the variable is used rather than the non-conditional version. Example:</para>
<para><screen><varname>OVERRIDES</varname> = "architecture:os:machine"
<varname>TEST</varname> = "defaultvalue"
<varname>TEST_os</varname> = "osspecificvalue"
<varname>TEST_condnotinoverrides</varname> = "othercondvalue"</screen></para>
<para>In this example, <varname>TEST</varname> would be <literal>osspecificvalue</literal>, due to the condition <quote>os</quote> being in <varname>OVERRIDES</varname>.</para>
</section>
<section>
<title>Conditional appending</title>
<para>BitBake also supports appending and prepending to variables based on whether something is in OVERRIDES. Example:</para>
<para><screen><varname>DEPENDS</varname> = "glibc ncurses"
<varname>OVERRIDES</varname> = "machine:local"
<varname>DEPENDS_append_machine</varname> = " libmad"</screen></para>
<para>In this example, <varname>DEPENDS</varname> is set to <literal>glibc ncurses libmad</literal>.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
</section>
<section>
<title>Requiring inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
raise a ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
include</literal> directive.</para>
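<para>As a short sketch, both directives take a path that is looked up in <envar>BBPATH</envar> when it is relative (the file name here is hypothetical):</para>
<para><screen># optional: silently skipped if the file cannot be found
include conf/site-tweaks.inc
# mandatory: parsing fails if the file cannot be found
require conf/site-tweaks.inc</screen></para>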
</section>
<section>
<title>Python variable expansion</title>
<para><screen><varname>DATE</varname> = "${@time.strftime('%Y%m%d',time.gmtime())}"</screen></para>
<para>This would result in the <varname>DATE</varname> variable containing today's date.</para>
</section>
<section>
<title>Defining executable metadata</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>do_mytask () {
echo "Hello, world!"
}</screen></para>
<para>This is essentially identical to setting a variable, except that this variable happens to be executable shell code.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is similar to the previous example, but flags the function as Python so that BitBake knows it is Python code.</para>
</section>
<section>
<title>Defining Python functions into the global Python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
if d.getVar('SOMECONDITION', True):
return "dependencywithcond"
else:
return "dependency"
<varname>SOMECONDITION</varname> = "1"
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.bbclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
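<para>For example, assuming a file named <filename>classes/autotools.bbclass</filename> exists somewhere in <envar>BBPATH</envar> (as it does in OpenEmbedded-style metadata), a recipe pulls in that functionality with a single line:</para>
<para><screen>inherit autotools</screen></para>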
</section>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (the task must be defined as executable metadata and its name must start with <quote>do_</quote>) and to describe intertask dependencies.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}
addtask printdate before do_build</screen></para>
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
</section>
<section>
<title>Task Flags</title>
<para>Tasks support a number of flags which control various functionality of the task. These are as follows:</para>
<para>'dirs' - directories which should be created before the task runs</para>
<para>'cleandirs' - directories which should be created before the task runs but should be empty</para>
<para>'noexec' - marks the task as empty, with no execution required. These are used as dependency placeholders or when added tasks need to be subsequently disabled.</para>
<para>'nostamp' - don't generate a stamp file for a task. This means the task is always re-executed.</para>
<para>'fakeroot' - this task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
<para>'umask' - the umask to run the task under.</para>
<para> For the 'deptask', 'rdeptask', 'depends', 'rdepends' and 'recrdeptask' flags please see the dependencies section.</para>
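<para>As a sketch (the task name and directory are hypothetical, and <varname>WORKDIR</varname> is assumed to be set by your configuration), flags are attached to a task using the same square-bracket syntax as variable flags:</para>
<para><screen>do_stage_output[dirs] = "${WORKDIR}/output"
do_stage_output[nostamp] = "1"
addtask stage_output after do_compile before do_build</screen></para>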
</section>
<section>
<title>Events</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
from bb.event import getName
from bb import data
print("The name of the Event is %s" % getName(e))
print("The file we run for is %s" % data.getVar('FILE', e.data, True))
}
</screen></para><para>
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
method one can get the name of the triggered event.</para><para>The above event handler prints the name
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
<section>
<title>Variants</title>
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
</section>
<section>
<title>Variable interaction: Worked Examples</title>
<para>Despite the documentation of the different forms of variable definition above, it can be hard to work out what happens when variable operators are combined. This section documents some common questions people have regarding the way variables interact.</para>
<section>
<title>Override and append ordering</title>
<para>There is often confusion about which order overrides and the various append operators take effect.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"</screen></para>
<para>In this case, X is unconditionally appended to the variable <varname>A_foo</varname>. Since foo is an override, A_foo would then replace <varname>A</varname>.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A</varname> = "X"
<varname>A_append_foo</varname> = "Y"</screen></para>
<para>In this case, only when foo is in OVERRIDES, Y is appended to the variable <varname>A</varname> so the value of <varname>A</varname> would become XY (NB: no spaces are appended).</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"
<varname>A_foo_append</varname> += "Y"</screen></para>
<para>This behaves as per the first case above, but the value of <varname>A</varname> would be "X Y" instead of just "X".</para>
<para><screen><varname>A</varname> = "1"
<varname>A_append</varname> = "2"
<varname>A_append</varname> = "3"
<varname>A</varname> += "4"
<varname>A</varname> .= "5"</screen></para>
<para>Would ultimately result in <varname>A</varname> taking the value "1 4523" since the _append operator executes at the same time as the expansion of other overrides.</para>
</section>
<section>
<title>Key Expansion</title>
<para>Key expansion happens at the data store finalisation time just before overrides are expanded.</para>
<para><screen><varname>A${B}</varname> = "X"
<varname>B</varname> = "2"
<varname>A2</varname> = "Y"</screen></para>
<para>So in this case <varname>A2</varname> would take the value of "X".</para>
</section>
</section>
<section>
<title>Dependency handling</title>
<para>BitBake handles dependencies at the task level to allow for efficient operation with multiple processes executing in parallel. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>Build Dependencies</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>Runtime Dependencies</title>
<para>The PACKAGES variable lists runtime packages and each of these can have RDEPENDS and RRECOMMENDS runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item runtime dependency which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive Dependencies</title>
<para>These are specified with the 'recrdeptask' flag which is used to signify the task(s) of dependencies which must have completed before that task can be executed. It works by looking through the build and runtime dependencies of the current recipe as well as any inter-task dependencies the task has, then adding a dependency on the listed task. It will then recurse through the dependencies of those tasks and so on.</para>
<para>It may be desirable to recurse not just through the dependencies of those tasks but through the build and runtime dependencies of dependent tasks too. If that is the case, the taskname itself should be referenced in the task list, e.g. do_a[recrdeptask] = "do_a do_b".</para>
</section>
<section>
<title>Inter task</title>
<para>The 'depends' flag for tasks is a more generic form of dependency which allows an interdependency on specific tasks rather than specifying the data in DEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch task can execute.</para>
<para>The 'rdepends' flag works in a similar way but takes targets in the runtime namespace instead of the build time dependency namespace.</para>
</section>
</section>
<section>
<title>Parsing</title>
<section>
<title>Configuration files</title>
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is that these files will set up BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
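<para>A minimal sketch of these two files, with invented paths, might look like the following (the BBFILES pattern follows common OpenEmbedded-style layer layouts and is an assumption, not a requirement):</para>
<para><screen># conf/bblayers.conf, in the build directory
BBLAYERS = "/home/user/meta-base /home/user/meta-mylayer"

# conf/layer.conf, in each layer directory
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb"</screen></para>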
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
</section>
<section>
<title>.bb files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
</section>
</section>
</chapter>
<chapter>
<title>File download support</title>
<section>
<title>Overview</title>
<para>BitBake provides support for downloading files. This procedure is called fetching and it is handled by the fetch and fetch2 modules. At this point the original fetch code is considered to have been replaced by fetch2, and this manual relates only to the fetch2 codebase.</para>
<para>The SRC_URI is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
</para>
<para>The overall fetch process is that first, fetches are attempted from PREMIRRORS. If those don't work, the original SRC_URI is attempted, and if that fails, BitBake will fall back to MIRRORS. Cross URLs are supported, so it is possible to mirror a git repository on an http server as a tarball, for example. Some commonly used example mirror definitions are:</para>
<para><screen><varname>PREMIRRORS</varname> ?= "\
bzr://.*/.* http://somemirror.org/sources/ \n \
cvs://.*/.* http://somemirror.org/sources/ \n \
git://.*/.* http://somemirror.org/sources/ \n \
hg://.*/.* http://somemirror.org/sources/ \n \
osc://.*/.* http://somemirror.org/sources/ \n \
p4://.*/.* http://somemirror.org/sources/ \n \
svk://.*/.* http://somemirror.org/sources/ \n \
svn://.*/.* http://somemirror.org/sources/ \n"
<varname>MIRRORS</varname> =+ "\
ftp://.*/.* http://somemirror.org/sources/ \n \
http://.*/.* http://somemirror.org/sources/ \n \
https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
<para>Non-local downloaded output is placed into the directory specified by the <varname>DL_DIR</varname>. For non-local archive downloads, the code can verify sha256 and md5 checksums for the download to ensure the file has been downloaded correctly. These may be specified either in the form <varname>SRC_URI[md5sum]</varname> for the md5 checksum and <varname>SRC_URI[sha256sum]</varname> for the sha256 checksum, or as parameters on the SRC_URI such as SRC_URI="http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d". If <varname>BB_STRICT_CHECKSUM</varname> is set, any download without a checksum will trigger an error message. In cases where multiple files are listed in SRC_URI, the name parameter is used to assign names to the URLs, and these are then specified in the checksums in the form SRC_URI[name.sha256sum].</para>
</section>
<section>
<title>Local file fetcher</title>
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname>, and failing that <varname>FILESDIR</varname>, will be used to find the appropriate relative file. The metadata usually extends these variables to include variations of the values in <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<screen><varname>SRC_URI</varname> = "file://relativefile.patch"
<varname>SRC_URI</varname> = "file://relativefile.patch;this=ignored"
<varname>SRC_URI</varname> = "file:///Users/ich/very_important_software"
</screen>
</para>
</section>
<section>
<title>CVS fetcher</title>
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>CVSDIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
</para>
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> parameter specifies which module to check out, and the <varname>tag</varname> parameter describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration and check out a specific date. The special value of "now" will cause the checkout to be updated on every build. <varname>method</varname> is <emphasis>pserver</emphasis> by default. If <emphasis>ext</emphasis> is used, the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to check out into a specific directory relative to <varname>CVSDIR</varname>.
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
</para>
</section>
<section>
<title>HTTP/FTP fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variable <varname>FETCHCOMMAND_wget</varname>, which contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and the basename of the file to be fetched.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac"
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan"
</screen></para>
</section>
<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>SVNDIR</varname>, <varname>SRCREV</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>SRCREV</varname> specifies which revision to use when doing the fetching.
</para>
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
</screen></para>
</section>
<section>
<title>GIT fetcher</title>
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
</para>
<para>The variable <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
</para>
<para>The supported parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default being <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use; it defaults to <quote>git</quote> if a hostname is set, otherwise to <quote>file</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
</screen></para>
</section>
</chapter>
<chapter>
<title>The BitBake command</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.
options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
execute the task against this .bb file, rather than a
package from BBFILES.
-k, --continue continue as much as possible after an error. While the
target that failed, and those that depend on it,
cannot be remade, the other dependencies of these
targets can be processed all the same.
-f, --force force run of specified cmd, regardless of stamp status
-i, --interactive drop into the interactive mode also called the BitBake
shell.
-c CMD, --cmd=CMD Specify task to execute. Note that this only executes
the specified task for the providee and the packages
it depends on, i.e. 'compile' does not implicitly call
stage for the dependencies (IOW: use only if you know
what you are doing). Depending on the base.bbclass a
listtasks task is defined and will show available
tasks
-r FILE, --read=FILE read the specified file before bitbake.conf
-v, --verbose output more chit-chat to the terminal
-D, --debug Increase the debug level. You can specify this more
than once.
-n, --dry-run don't execute, just go through the motions
-p, --parse-only quit after parsing the BB files (developers only)
-s, --show-versions show current and preferred versions of all packages
-e, --environment show the global or per-package environment (this is
what used to be bbread)
-g, --graphviz emit the dependency trees of the specified packages in
the dot syntax
-I IGNORED_DOT_DEPS, --ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when
generating dependency graphs. This can help to make
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile profile the command and print a report
</screen>
</para>
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb</screen></para>
</example>
</para>
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how to specify such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0-r0</screen>
<screen><prompt>$ </prompt>bitbake -c clean blah</screen>
<screen><prompt>$ </prompt>bitbake virtual/whatever</screen>
<screen><prompt>$ </prompt>bitbake -c clean virtual/whatever</screen>
</example>
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>.
Two files will be written into the current working directory: <emphasis>depends.dot</emphasis>, containing dependency information at the package level, and <emphasis>task-depends.dot</emphasis>, containing a breakdown of the dependencies at the task level. To omit common dependencies, one can use the <prompt>-I</prompt> option to exclude them from the graph, which can lead to more readable graphs. This way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
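<para>The resulting <emphasis>.dot</emphasis> files can then be rendered with the <application>dot</application> tool; the output filename here is arbitrary:</para>
<screen><prompt>$ </prompt>dot -Tpng -o depends.png depends.dot</screen>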
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect BitBake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads BitBake should run at once (default: 1).</para>
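<para>For example, to let BitBake run up to four tasks in parallel, one could add the following to a configuration file:
<screen><varname>BB_NUMBER_THREADS</varname> = "4"
</screen>
</para>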
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, BitBake expects each .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, these being the <varname>PN</varname> values of other recipes. The <varname>PN</varname> variable is, by default, set to a component of the .bb filename.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
<screen>PN = "package-a"
DEPENDS += "package-b"</screen>
</para>
<para>b.bb:
<screen>PN = "package-b"</screen>
</para>
</example>
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>package2.bb:
<screen>DEPENDS += "virtual/package"</screen>
</para>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
<example>
<title>Using <quote>bbfile collections</quote></title>
<para>bbfile collections exist to allow the user to have multiple repositories of bbfiles that contain the exact same packages. For example, one could easily use them to maintain a local copy of an upstream repository with custom modifications that one does not want upstream. Usage:</para>
<screen>BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"</screen>
</example>
</section>
</chapter>
</book>


@@ -1,59 +0,0 @@
<!ENTITY DISTRO "1.4">
<!ENTITY DISTRO_NAME "tbd">
<!ENTITY YOCTO_DOC_VERSION "1.4">
<!ENTITY POKYVERSION "8.0">
<!ENTITY YOCTO_POKY "poky-&DISTRO_NAME;-&POKYVERSION;">
<!ENTITY COPYRIGHT_YEAR "2010-2013">
<!ENTITY YOCTO_DL_URL "http://downloads.yoctoproject.org">
<!ENTITY YOCTO_HOME_URL "http://www.yoctoproject.org">
<!ENTITY YOCTO_LISTS_URL "http://lists.yoctoproject.org">
<!ENTITY YOCTO_BUGZILLA_URL "http://bugzilla.yoctoproject.org">
<!ENTITY YOCTO_WIKI_URL "https://wiki.yoctoproject.org">
<!ENTITY YOCTO_AB_URL "http://autobuilder.yoctoproject.org">
<!ENTITY YOCTO_GIT_URL "http://git.yoctoproject.org">
<!ENTITY YOCTO_ADTREPO_URL "http://adtrepo.yoctoproject.org">
<!ENTITY OE_HOME_URL "http://www.openembedded.org">
<!ENTITY OE_LISTS_URL "http://lists.linuxtogo.org/cgi-bin/mailman">
<!ENTITY OE_DOCS_URL "http://docs.openembedded.org">
<!ENTITY OH_HOME_URL "http://o-hand.com">
<!ENTITY BITBAKE_HOME_URL "http://developer.berlios.de/projects/bitbake/">
<!ENTITY ECLIPSE_MAIN_URL "http://www.eclipse.org/downloads">
<!ENTITY ECLIPSE_DL_URL "http://download.eclipse.org">
<!ENTITY ECLIPSE_DL_PLUGIN_URL "&YOCTO_DL_URL;/releases/eclipse-plugin/&DISTRO;">
<!ENTITY ECLIPSE_UPDATES_URL "&ECLIPSE_DL_URL;/tm/updates/3.3">
<!ENTITY ECLIPSE_INDIGO_URL "&ECLIPSE_DL_URL;/releases/indigo">
<!ENTITY ECLIPSE_JUNO_URL "&ECLIPSE_DL_URL;/releases/juno">
<!ENTITY ECLIPSE_INDIGO_CDT_URL "&ECLIPSE_DL_URL;tools/cdt/releases/indigo">
<!ENTITY YOCTO_DOCS_URL "&YOCTO_HOME_URL;/docs">
<!ENTITY YOCTO_SOURCES_URL "&YOCTO_HOME_URL;/sources/">
<!ENTITY YOCTO_AB_PORT_URL "&YOCTO_AB_URL;:8010">
<!ENTITY YOCTO_AB_NIGHTLY_URL "&YOCTO_AB_URL;/nightly/">
<!ENTITY YOCTO_POKY_URL "&YOCTO_DL_URL;/releases/poky/">
<!ENTITY YOCTO_RELEASE_DL_URL "&YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;">
<!ENTITY YOCTO_TOOLCHAIN_DL_URL "&YOCTO_RELEASE_DL_URL;/toolchain/">
<!ENTITY YOCTO_ECLIPSE_DL_URL "&YOCTO_RELEASE_DL_URL;/eclipse-plugin/indigo;">
<!ENTITY YOCTO_ADTINSTALLER_DL_URL "&YOCTO_RELEASE_DL_URL;/adt_installer">
<!ENTITY YOCTO_POKY_DL_URL "&YOCTO_RELEASE_DL_URL;/&YOCTO_POKY;.tar.bz2">
<!ENTITY YOCTO_MACHINES_DL_URL "&YOCTO_RELEASE_DL_URL;/machines">
<!ENTITY YOCTO_QEMU_DL_URL "&YOCTO_MACHINES_DL_URL;/qemu">
<!ENTITY YOCTO_PYTHON-i686_DL_URL "&YOCTO_DL_URL;/releases/miscsupport/python-nativesdk-standalone-i686.tar.bz2">
<!ENTITY YOCTO_PYTHON-x86_64_DL_URL "&YOCTO_DL_URL;/releases/miscsupport/python-nativesdk-standalone-x86_64.tar.bz2">
<!ENTITY YOCTO_DOCS_QS_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/yocto-project-qs/yocto-project-qs.html">
<!ENTITY YOCTO_DOCS_ADT_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/adt-manual/adt-manual.html">
<!ENTITY YOCTO_DOCS_REF_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/ref-manual/ref-manual.html">
<!ENTITY YOCTO_DOCS_BSP_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/bsp-guide/bsp-guide.html">
<!ENTITY YOCTO_DOCS_DEV_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/dev-manual/dev-manual.html">
<!ENTITY YOCTO_DOCS_KERNEL_URL "&YOCTO_DOCS_URL;/&YOCTO_DOC_VERSION;/kernel-manual/kernel-manual.html">
<!ENTITY YOCTO_ADTPATH_DIR "/opt/poky/&DISTRO;">
<!ENTITY YOCTO_POKY_TARBALL "&YOCTO_POKY;.tar.bz2">
<!ENTITY OE_INIT_PATH "&YOCTO_POKY;/oe-init-build-env">
<!ENTITY OE_INIT_FILE "oe-init-build-env">
<!ENTITY UBUNTU_HOST_PACKAGES_ESSENTIAL "gawk wget git-core diffstat unzip texinfo \
build-essential chrpath">
<!ENTITY FEDORA_HOST_PACKAGES_ESSENTIAL "gawk make wget tar bzip2 gzip python unzip perl patch \
diffutils diffstat git cpp gcc gcc-c++ eglibc-devel texinfo chrpath \
ccache">
<!ENTITY OPENSUSE_HOST_PACKAGES_ESSENTIAL "python gcc gcc-c++ git chrpath make wget python-xml \
diffstat texinfo python-curses">
<!ENTITY CENTOS_HOST_PACKAGES_ESSENTIAL "gawk make wget tar bzip2 gzip python unzip perl patch \
diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath">


@@ -1,39 +0,0 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="component.title">
<xsl:param name="node" select="."/>
<xsl:variable name="level">
<xsl:choose>
<xsl:when test="ancestor::d:section">
<xsl:value-of select="count(ancestor::d:section)+1"/>
</xsl:when>
<xsl:when test="ancestor::d:sect5">6</xsl:when>
<xsl:when test="ancestor::d:sect4">5</xsl:when>
<xsl:when test="ancestor::d:sect3">4</xsl:when>
<xsl:when test="ancestor::d:sect2">3</xsl:when>
<xsl:when test="ancestor::d:sect1">2</xsl:when>
<xsl:otherwise>1</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:element name="h{$level+1}" namespace="http://www.w3.org/1999/xhtml">
<xsl:attribute name="class">title</xsl:attribute>
<xsl:if test="$generate.id.attributes = 0">
<xsl:call-template name="anchor">
<xsl:with-param name="node" select="$node"/>
<xsl:with-param name="conditional" select="0"/>
</xsl:call-template>
</xsl:if>
<xsl:apply-templates select="$node" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$node"/>
</xsl:call-template>
</xsl:element>
</xsl:template>
</xsl:stylesheet>


@@ -1,64 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl" />
<!-- check project-plan.sh for how this is generated, needed to tweak
the cover page
-->
<xsl:include href="/tmp/titlepage.xsl"/>
<!-- To force a page break in document, i.e per section add a
<?hard-pagebreak?> tag.
-->
<xsl:template match="processing-instruction('hard-pagebreak')">
<fo:block break-before='page' />
</xsl:template>
<!--Fix for defualt indent getting TOC all wierd..
See http://sources.redhat.com/ml/docbook-apps/2005-q1/msg00455.html
FIXME: must be a better fix
-->
<xsl:param name="body.start.indent" select="'0'"/>
<!--<xsl:param name="title.margin.left" select="'0'"/>-->
<!-- stop long-ish header titles getting wrapped -->
<xsl:param name="header.column.widths">1 10 1</xsl:param>
<!-- customise headers and footers a little -->
<xsl:template name="head.sep.rule">
<xsl:if test="$header.rule != 0">
<xsl:attribute name="border-bottom-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-bottom-style">solid</xsl:attribute>
<xsl:attribute name="border-bottom-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:template name="foot.sep.rule">
<xsl:if test="$footer.rule != 0">
<xsl:attribute name="border-top-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-top-style">solid</xsl:attribute>
<xsl:attribute name="border-top-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:attribute-set name="header.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="footer.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<!-- general settings -->
<xsl:param name="fop1.extensions" select="1"></xsl:param>
<xsl:param name="paper.type" select="'A4'"></xsl:param>
<xsl:param name="section.autolabel" select="1"></xsl:param>
<xsl:param name="body.font.family" select="'verasans'"></xsl:param>
<xsl:param name="title.font.family" select="'verasans'"></xsl:param>
<xsl:param name="monospace.font.family" select="'veramono'"></xsl:param>
</xsl:stylesheet>


@@ -1,25 +0,0 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="division.title">
<xsl:param name="node" select="."/>
<h1>
<xsl:attribute name="class">title</xsl:attribute>
<xsl:call-template name="anchor">
<xsl:with-param name="node" select="$node"/>
<xsl:with-param name="conditional" select="0"/>
</xsl:call-template>
<xsl:apply-templates select="$node" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$node"/>
</xsl:call-template>
</h1>
</xsl:template>
</xsl:stylesheet>


@@ -1,58 +0,0 @@
<fop version="1.0">
<!-- Strict user configuration -->
<strict-configuration>true</strict-configuration>
<!-- Strict FO validation -->
<strict-validation>true</strict-validation>
<!--
Set the baseDir so common/openedhand.svg references in plans still
work ok. Note, relative file references to current dir should still work.
-->
<base>../template</base>
<font-base>../template</font-base>
<!-- Source resolution in dpi (dots/pixels per inch) for determining the
size of pixels in SVG and bitmap images, default: 72dpi -->
<!-- <source-resolution>72</source-resolution> -->
<!-- Target resolution in dpi (dots/pixels per inch) for specifying the
target resolution for generated bitmaps, default: 72dpi -->
<!-- <target-resolution>72</target-resolution> -->
<!-- default page-height and page-width, in case
value is specified as auto -->
<default-page-settings height="11in" width="8.26in"/>
<!-- <use-cache>false</use-cache> -->
<renderers>
<renderer mime="application/pdf">
<fonts>
<font metrics-file="VeraMono.xml"
kerning="yes"
embed-url="VeraMono.ttf">
<font-triplet name="veramono" style="normal" weight="normal"/>
</font>
<font metrics-file="VeraMoBd.xml"
kerning="yes"
embed-url="VeraMoBd.ttf">
<font-triplet name="veramono" style="normal" weight="bold"/>
</font>
<font metrics-file="Vera.xml"
kerning="yes"
embed-url="Vera.ttf">
<font-triplet name="verasans" style="normal" weight="normal"/>
<font-triplet name="verasans" style="normal" weight="bold"/>
<font-triplet name="verasans" style="italic" weight="normal"/>
<font-triplet name="verasans" style="italic" weight="bold"/>
</font>
<auto-detect/>
</fonts>
</renderer>
</renderers>
</fop>


@@ -1,21 +0,0 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="d">
<xsl:template name="formal.object.heading">
<xsl:param name="object" select="."/>
<xsl:param name="title">
<xsl:apply-templates select="$object" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
</xsl:param>
<p class="title">
<b><xsl:copy-of select="$title"/></b>
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$object"/>
</xsl:call-template>
</p>
</xsl:template>
</xsl:stylesheet>


@@ -1,14 +0,0 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml">
<xsl:template match="glossentry/glossterm">
<xsl:apply-imports/>
<xsl:if test="$generate.permalink != 0">
<xsl:call-template name="permalink">
<xsl:with-param name="node" select=".."/>
</xsl:call-template>
</xsl:if>
</xsl:template>
</xsl:stylesheet>


@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns="http://www.w3.org/1999/xhtml"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:param name="generate.permalink" select="1"/>
<xsl:param name="permalink.text"></xsl:param>
<xsl:template name="permalink">
<xsl:param name="node"/>
<xsl:if test="$generate.permalink != '0'">
<span class="permalink">
<a alt="Permalink" title="Permalink">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$node"/>
</xsl:call-template>
</xsl:attribute>
<xsl:copy-of select="$permalink.text"/>
</a>
</span>
</xsl:if>
</xsl:template>
</xsl:stylesheet>


@@ -1,55 +0,0 @@
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns="http://www.w3.org/1999/xhtml" exclude-result-prefixes="d">
<xsl:template name="section.title">
<xsl:variable name="section"
select="(ancestor::section |
ancestor::simplesect|
ancestor::sect1|
ancestor::sect2|
ancestor::sect3|
ancestor::sect4|
ancestor::sect5)[last()]"/>
<xsl:variable name="renderas">
<xsl:choose>
<xsl:when test="$section/@renderas = 'sect1'">1</xsl:when>
<xsl:when test="$section/@renderas = 'sect2'">2</xsl:when>
<xsl:when test="$section/@renderas = 'sect3'">3</xsl:when>
<xsl:when test="$section/@renderas = 'sect4'">4</xsl:when>
<xsl:when test="$section/@renderas = 'sect5'">5</xsl:when>
<xsl:otherwise><xsl:value-of select="''"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:variable name="level">
<xsl:choose>
<xsl:when test="$renderas != ''">
<xsl:value-of select="$renderas"/>
</xsl:when>
<xsl:otherwise>
<xsl:call-template name="section.level">
<xsl:with-param name="node" select="$section"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:call-template name="section.heading">
<xsl:with-param name="section" select="$section"/>
<xsl:with-param name="level" select="$level"/>
<xsl:with-param name="title">
<xsl:apply-templates select="$section" mode="object.title.markup">
<xsl:with-param name="allow-anchors" select="1"/>
</xsl:apply-templates>
<xsl:if test="$level &gt; 0">
<xsl:call-template name="permalink">
<xsl:with-param name="node" select="$section"/>
</xsl:call-template>
</xsl:if>
</xsl:with-param>
</xsl:call-template>
</xsl:template>
</xsl:stylesheet>


@@ -1,51 +0,0 @@
#!/bin/sh
if [ -z "$1" -o -z "$2" ]; then
echo "usage: [-v] $0 <docbook file> <templatedir>"
echo
echo "*NOTE* you need xsltproc, fop and nwalsh docbook stylesheets"
echo " installed for this to work!"
echo
exit 0
fi
FO=`echo $1 | sed s/.xml/.fo/` || exit 1
PDF=`echo $1 | sed s/.xml/.pdf/` || exit 1
TEMPLATEDIR=$2
##
# These URI should be rewritten by your distribution's xml catalog to
# match your localy installed XSL stylesheets.
XSL_BASE_URI="http://docbook.sourceforge.net/release/xsl/current"
# Creates a temporary XSL stylesheet based on titlepage.xsl
xsltproc -o /tmp/titlepage.xsl \
--xinclude \
$XSL_BASE_URI/template/titlepage.xsl \
$TEMPLATEDIR/titlepage.templates.xml || exit 1
# Creates the file needed for FOP
xsltproc --xinclude \
--stringparam hyphenate false \
--stringparam formal.title.placement "figure after" \
--stringparam ulink.show 1 \
--stringparam body.font.master 9 \
--stringparam title.font.master 11 \
--stringparam draft.watermark.image "$TEMPLATEDIR/draft.png" \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--output $FO \
$TEMPLATEDIR/db-pdf.xsl \
$1 || exit 1
# Invokes the Java version of FOP. Uses the additional configuration file common/fop-config.xml
fop -c $TEMPLATEDIR/fop-config.xml -fo $FO -pdf $PDF || exit 1
rm -f $FO
rm -f /tmp/titlepage.xsl
echo
echo " #### Success! $PDF ready. ####"
echo


@@ -3,7 +3,7 @@
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Ansell
# Copyright (C) 2006 Tim Amsell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,17 +23,19 @@
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
str
basestring
)
MUTABLE = "__mutable__"
@@ -59,7 +61,7 @@ class COWDictMeta(COWMeta):
__call__ = cow
def __setitem__(cls, key, value):
if value is not None and not isinstance(value, ImmutableTypes):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
@@ -114,7 +116,7 @@ class COWDictMeta(COWMeta):
cls.__setitem__(key, cls.__marker__)
def __revertitem__(cls, key):
if key not in cls.__dict__:
if not cls.__dict__.has_key(key):
key += MUTABLE
delattr(cls, key)
@@ -181,7 +183,7 @@ class COWSetMeta(COWDictMeta):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return repr(hash(value)) in COWDictMeta
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
@@ -190,10 +192,12 @@ class COWSetMeta(COWDictMeta):
raise TypeError("sets don't have 'items'")
# These are the actual classes you use!
class COWDictBase(object, metaclass = COWDictMeta):
class COWDictBase(object):
__metaclass__ = COWDictMeta
__count__ = 0
class COWSetBase(object, metaclass = COWSetMeta):
class COWSetBase(object):
__metaclass__ = COWSetMeta
__count__ = 0
if __name__ == "__main__":
@@ -283,7 +287,7 @@ if __name__ == "__main__":
except KeyError:
print("Yay! deleted key raises error")
if 'b' in b:
if b.has_key('b'):
print("Boo!")
else:
print("Yay - has_key with delete works!")


@@ -21,11 +21,11 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.34.0"
__version__ = "1.20.0"
import sys
if sys.version_info < (3, 4, 0):
raise RuntimeError("Sorry, python 3.4.0 or later is required for this version of bitbake")
if sys.version_info < (2, 7, 3):
raise RuntimeError("Sorry, python 2.7.3 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -70,8 +70,6 @@ logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)
mainlogger = logging.getLogger("BitBake.Main")
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
@@ -81,31 +79,32 @@ sys.modules['bb.fetch'] = sys.modules['bb.fetch2']
# Messaging convenience functions
def plain(*args):
mainlogger.plain(''.join(args))
logger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, str):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
if isinstance(lvl, basestring):
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.debug(lvl, ''.join(args))
logger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))
logger.info(''.join(args))
def warn(*args):
mainlogger.warning(''.join(args))
logger.warn(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
def error(*args):
logger.error(''.join(args))
def fatal(*args):
logger.critical(''.join(args))
sys.exit(1)
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
def deprecated(func, name=None, advice=""):
"""This is a decorator which can be used to mark functions
as deprecated. It will result in a warning being emitted
as deprecated. It will result in a warning being emmitted
when the function is used."""
import warnings
@@ -142,3 +141,6 @@ def deprecate_import(current, modulename, fromlist, renames = None):
setattr(sys.modules[current], newname, newobj)
deprecate_import(__name__, "bb.fetch", ("MalformedUrl", "encodeurl", "decodeurl"))
deprecate_import(__name__, "bb.utils", ("mkdirhier", "movefile", "copyfile", "which"))
deprecate_import(__name__, "bb.utils", ["vercmp_string"], ["vercmp"])


@@ -23,51 +23,31 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import shlex
import glob
import time
import stat
import bb
import bb.msg
import bb.process
import bb.progress
from bb import data, event, utils
from contextlib import nested
from bb import event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
NULL = open(os.devnull, 'r+')
__mtime_cache = {}
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]
def reset_cache():
global __mtime_cache
__mtime_cache = {}
# When we execute a Python function, we'd like certain things
# in all namespaces, hence we add them to __builtins__.
# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
builtins['bb'] = bb
builtins['os'] = os
__builtins__['bb'] = bb
__builtins__['os'] = os
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
@@ -91,14 +71,12 @@ class TaskBase(event.Event):
def __init__(self, t, logfile, d):
self._task = t
self._package = d.getVar("PF")
self._mc = d.getVar("BB_CURRENT_MC")
self.taskfile = d.getVar("FILE")
self._package = d.getVar("PF", True)
self.taskfile = d.getVar("FILE", True)
self.taskname = self._task
self.logfile = logfile
self.time = time.time()
event.Event.__init__(self)
self._message = "recipe %s: task %s: %s" % (d.getVar("PF"), t, self.getDisplayName())
self._message = "recipe %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
def getTask(self):
return self._task
@@ -113,9 +91,6 @@ class TaskBase(event.Event):
class TaskStarted(TaskBase):
"""Task execution started"""
def __init__(self, t, logfile, taskflags, d):
super(TaskStarted, self).__init__(t, logfile, d)
self.taskflags = taskflags
class TaskSucceeded(TaskBase):
"""Task execution completed"""
@@ -139,25 +114,6 @@ class TaskInvalid(TaskBase):
super(TaskInvalid, self).__init__(task, None, metadata)
self._message = "No such task '%s'" % task
class TaskProgress(event.Event):
"""
Task made some progress that could be reported to the user, usually in
the form of a progress bar or similar.
NOTE: this class does not inherit from TaskBase since it doesn't need
to - it's fired within the task context itself, so we don't have any of
the context information that you do in the case of the other events.
The event PID can be used to determine which task it came from.
The progress value is normally 0-100, but can also be negative
indicating that progress has been made but we aren't able to determine
how much.
The rate is optional, this is simply an extra string to display to the
user if specified.
"""
def __init__(self, progress, rate=None):
self.progress = progress
self.rate = rate
event.Event.__init__(self)
class LogTee(object):
def __init__(self, logger, outfile):
@@ -181,27 +137,22 @@ class LogTee(object):
def flush(self):
self.outfile.flush()
#
# pythonexception allows the python exceptions generated to be raised
# as the real exceptions (not FuncFailed) and without a backtrace at the
# origin of the failure.
#
def exec_func(func, d, dirs = None, pythonexception=False):
"""Execute a BB 'function'"""
def exec_func(func, d, dirs = None):
"""Execute an BB 'function'"""
try:
oldcwd = os.getcwd()
except:
oldcwd = None
body = d.getVar(func)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
return
flags = d.getVarFlags(func)
cleandirs = flags.get('cleandirs') if flags else None
cleandirs = flags.get('cleandirs')
if cleandirs:
for cdir in d.expand(cleandirs).split():
bb.utils.remove(cdir, True)
bb.utils.mkdirhier(cdir)
if flags and dirs is None:
if dirs is None:
dirs = flags.get('dirs')
if dirs:
dirs = d.expand(dirs).split()
@@ -211,33 +162,28 @@ def exec_func(func, d, dirs = None, pythonexception=False):
bb.utils.mkdirhier(adir)
adir = dirs[-1]
else:
adir = None
body = d.getVar(func, False)
if not body:
if body is None:
logger.warning("Function %s doesn't exist", func)
return
adir = d.getVar('B', True)
bb.utils.mkdirhier(adir)
ispython = flags.get('python')
lockflag = flags.get('lockfiles')
if lockflag:
lockfiles = [f for f in d.expand(lockflag).split()]
lockfiles = [d.expand(f) for f in lockflag.split()]
else:
lockfiles = None
tempdir = d.getVar('T')
tempdir = d.getVar('T', True)
# or func allows items to be executed outside of the normal
# task set, such as buildhistory
task = d.getVar('BB_RUNTASK') or func
task = d.getVar('BB_RUNTASK', True) or func
if task == func:
taskfunc = task
else:
taskfunc = "%s.%s" % (task, func)
runfmt = d.getVar('BB_RUNFMT') or "run.{func}.{pid}"
runfmt = d.getVar('BB_RUNFMT', True) or "run.{func}.{pid}"
runfn = runfmt.format(taskfunc=taskfunc, task=task, func=func, pid=os.getpid())
runfile = os.path.join(tempdir, runfn)
bb.utils.mkdirhier(os.path.dirname(runfile))
@@ -258,57 +204,42 @@ def exec_func(func, d, dirs = None, pythonexception=False):
with bb.utils.fileslocked(lockfiles):
if ispython:
exec_func_python(func, d, runfile, cwd=adir, pythonexception=pythonexception)
exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)
try:
curcwd = os.getcwd()
except:
curcwd = None
if oldcwd and curcwd != oldcwd:
try:
bb.warn("Task %s changed cwd to %s" % (func, curcwd))
os.chdir(oldcwd)
except:
pass
_functionfmt = """
def {function}(d):
{body}
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
code = _functionfmt.format(function=func)
bbfile = d.getVar('FILE', True)
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
bb.data.emit_func_python(func, script, d)
script.write(code)
if cwd:
try:
olddir = os.getcwd()
except OSError as e:
bb.warn("%s: Cannot get cwd: %s" % (func, e))
except OSError:
olddir = None
os.chdir(cwd)
bb.debug(2, "Executing python function %s" % func)
try:
text = "def %s(d):\n%s" % (func, d.getVar(func, False))
fn = d.getVarFlag(func, "filename", False)
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated", pythonexception=pythonexception)
except (bb.parse.SkipRecipe, bb.build.FuncFailed):
raise
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
except:
if pythonexception:
if sys.exc_info()[0] in (bb.parse.SkipPackage, bb.build.FuncFailed):
raise
raise FuncFailed(func, None)
finally:
bb.debug(2, "Python function %s finished" % func)
@@ -316,26 +247,8 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
if cwd and olddir:
try:
os.chdir(olddir)
except OSError as e:
bb.warn("%s: Cannot restore cwd %s: %s" % (func, olddir, e))
def shell_trap_code():
return '''#!/bin/sh\n
# Emit a useful diagnostic if something fails:
bb_exit_handler() {
ret=$?
case $ret in
0) ;;
*) case $BASH_VERSION in
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from '$BASH_COMMAND'";;
esac
exit $ret
esac
}
trap 'bb_exit_handler' 0
set -e
'''
except OSError:
pass
def exec_func_shell(func, d, runfile, cwd=None):
"""Execute a shell function from the metadata
@@ -349,7 +262,23 @@ def exec_func_shell(func, d, runfile, cwd=None):
d.delVarFlag('PWD', 'export')
with open(runfile, 'w') as script:
script.write(shell_trap_code())
script.write('''#!/bin/sh\n
# Emit a useful diagnostic if something fails:
bb_exit_handler() {
ret=$?
case $ret in
0) ;;
*) case $BASH_VERSION in
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from
\"$BASH_COMMAND\"";;
esac
exit $ret
esac
}
trap 'bb_exit_handler' 0
set -e
''')
bb.data.emit_func(func, script, d)
@@ -362,14 +291,14 @@ def exec_func_shell(func, d, runfile, cwd=None):
# cleanup
ret=$?
trap '' 0
exit $ret
exit $?
''')
os.chmod(runfile, 0o775)
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot', False):
fakerootcmd = d.getVar('FAKEROOT')
if d.getVarFlag(func, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -378,75 +307,14 @@ exit $ret
else:
logfile = sys.stdout
progress = d.getVarFlag(func, 'progress')
if progress:
if progress == 'percent':
# Use default regex
logfile = bb.progress.BasicProgressHandler(d, outfile=logfile)
elif progress.startswith('percent:'):
# Use specified regex
logfile = bb.progress.BasicProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
elif progress.startswith('outof:'):
# Use specified regex
logfile = bb.progress.OutOfProgressHandler(d, regex=progress.split(':', 1)[1], outfile=logfile)
else:
bb.warn('%s: invalid task progress varflag value "%s", ignoring' % (func, progress))
bb.debug(2, "Executing shell function %s" % func)
fifobuffer = bytearray()
def readfifo(data):
nonlocal fifobuffer
fifobuffer.extend(data)
while fifobuffer:
message, token, nextmsg = fifobuffer.partition(b"\00")
if token:
splitval = message.split(b' ', 1)
cmd = splitval[0].decode("utf-8")
if len(splitval) > 1:
value = splitval[1].decode("utf-8")
else:
value = ''
if cmd == 'bbplain':
bb.plain(value)
elif cmd == 'bbnote':
bb.note(value)
elif cmd == 'bbwarn':
bb.warn(value)
elif cmd == 'bberror':
bb.error(value)
elif cmd == 'bbfatal':
# The caller will call exit themselves, so bb.error() is
# what we want here rather than bb.fatal()
bb.error(value)
elif cmd == 'bbfatal_log':
bb.error(value, forcelog=True)
elif cmd == 'bbdebug':
splitval = value.split(' ', 1)
level = int(splitval[0])
value = splitval[1]
bb.debug(level, value)
else:
bb.warn("Unrecognised command '%s' on FIFO" % cmd)
fifobuffer = nextmsg
else:
break
tempdir = d.getVar('T')
fifopath = os.path.join(tempdir, 'fifo.%s' % os.getpid())
if os.path.exists(fifopath):
os.unlink(fifopath)
os.mkfifo(fifopath)
with open(fifopath, 'r+b', buffering=0) as fifo:
try:
bb.debug(2, "Executing shell function %s" % func)
try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE')
raise FuncFailed(func, logfn)
finally:
os.unlink(fifopath)
try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
bb.debug(2, "Shell function %s finished" % func)
@@ -455,7 +323,7 @@ def _task_data(fn, task, d):
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
(task[3:], d.getVar('OVERRIDES', False)))
localdata.finalize()
bb.data.expandKeys(localdata)
return localdata
@@ -466,7 +334,7 @@ def _exec_task(fn, task, d, quieterr):
Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
if not d.getVarFlag(task, 'task'):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -474,29 +342,22 @@ def _exec_task(fn, task, d, quieterr):
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T')
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")
# Change nice level if we're asked to
nice = localdata.getVar("BB_TASK_NICE_LEVEL")
nice = localdata.getVar("BB_TASK_NICE_LEVEL", True)
if nice:
curnice = os.nice(0)
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
if ionice:
try:
cls, prio = ionice.split(".", 1)
bb.utils.ioprio_set(os.getpid(), int(cls), int(prio))
except:
bb.warn("Invalid ionice level %s" % ionice)
bb.utils.mkdirhier(tempdir)
# Determine the logfile to generate
logfmt = localdata.getVar('BB_LOGFMT') or 'log.{task}.{pid}'
logfmt = localdata.getVar('BB_LOGFMT', True) or 'log.{task}.{pid}'
logbase = logfmt.format(task=task, pid=os.getpid())
# Document the order of the tasks...
@@ -527,10 +388,7 @@ def _exec_task(fn, task, d, quieterr):
self.triggered = False
logging.Handler.__init__(self, logging.ERROR)
def emit(self, record):
if getattr(record, 'forcelog', False):
self.triggered = False
else:
self.triggered = True
self.triggered = True
# Handle logfiles
si = open('/dev/null', 'r')
@@ -551,7 +409,7 @@ def _exec_task(fn, task, d, quieterr):
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
# Ensure Python logging goes to the logfile
# Ensure python logging goes to the logfile
handler = logging.StreamHandler(logfile)
handler.setFormatter(logformatter)
# Always enable full debug output into task logfiles
@@ -563,36 +421,22 @@ def _exec_task(fn, task, d, quieterr):
localdata.setVar('BB_LOGFILE', logfn)
localdata.setVar('BB_RUNTASK', task)
localdata.setVar('BB_TASK_LOGGER', bblogger)
flags = localdata.getVarFlags(task)
event.fire(TaskStarted(task, logfn, localdata), localdata)
try:
try:
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
except FuncFailed as exc:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
return 1
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
except bb.BBHandledException:
event.fire(TaskFailed(task, logfn, localdata, True), localdata)
return 1
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
@@ -617,19 +461,19 @@ def _exec_task(fn, task, d, quieterr):
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
return 0
def exec_task(fn, task, d, profile = False):
try:
try:
quieterr = False
if d.getVarFlag(task, "quieterrors", False) is not None:
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
if profile:
profname = "profile-%s.log" % (d.getVar("PN") + "-" + task)
if profile:
profname = "profile-%s.log" % (d.getVar("PN", True) + "-" + task)
try:
import cProfile as profile
except:
@@ -652,7 +496,7 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
def stamp_internal(taskname, d, file_name):
"""
Internal stamp helper function
Makes sure the stamp directory exists
@@ -666,17 +510,12 @@ def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
if baseonly:
return stamp
if noextra:
extrainfo = ""
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return
@@ -684,7 +523,7 @@ def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
if bb.parse.cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)
return stamp
@@ -702,12 +541,12 @@ def stamp_cleanmask_internal(taskname, d, file_name):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stampclean[file_name]
stamp = d.stamp_base_clean[file_name].get(taskflagname) or d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN')
file_name = d.getVar('BB_FILENAME')
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info') or ""
stamp = d.getVarFlag(taskflagname, 'stamp-base-clean', True) or d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
if not stamp:
return []
@@ -725,13 +564,13 @@ def make_stamp(task, d, file_name = None):
for mask in cleanmask:
for name in glob.glob(mask):
# Preserve sigdata files in the stamps directory
if "sigdata" in name or "sigbasedata" in name:
if "sigdata" in name:
continue
# Preserve taint files in the stamps directory
if name.endswith('.taint'):
continue
os.unlink(name)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
@@ -742,9 +581,8 @@ def make_stamp(task, d, file_name = None):
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
stampbase = stamp_internal(task, d, None, True)
file_name = d.getVar('BB_FILENAME')
bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, d.getVar('STAMP', True), True)
def del_stamp(task, d, file_name = None):
"""
@@ -765,22 +603,22 @@ def write_taint(task, d, file_name = None):
if file_name:
taintfn = d.stamp[file_name] + '.' + task + '.taint'
else:
taintfn = d.getVar('STAMP') + '.' + task + '.taint'
taintfn = d.getVar('STAMP', True) + '.' + task + '.taint'
bb.utils.mkdirhier(os.path.dirname(taintfn))
# The specific content of the taint file is not really important,
# we just need it to be random, so a random UUID is used
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def stampfile(taskname, d, file_name = None, noextra=False):
def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name, noextra=noextra)
return stamp_internal(taskname, d, file_name)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
task_deps = d.getVar('_task_deps')
if not task_deps:
task_deps = {}
if not 'tasks' in task_deps:
@@ -790,7 +628,6 @@ def add_tasks(tasklist, d):
for task in tasklist:
task = d.expand(task)
d.setVarFlag(task, 'task', 1)
if not task in task_deps['tasks']:
@@ -822,86 +659,9 @@ def add_tasks(tasklist, d):
# don't assume holding a reference
d.setVar('_task_deps', task_deps)
def addtask(task, before, after, d):
if task[:3] != "do_":
task = "do_" + task
def remove_task(task, kill, d):
"""Remove an BB 'task'.
d.setVarFlag(task, "task", 1)
bbtasks = d.getVar('__BBTASKS', False) or []
if task not in bbtasks:
bbtasks.append(task)
d.setVar('__BBTASKS', bbtasks)
If kill is 1, also remove tasks that depend on this task."""
existing = d.getVarFlag(task, "deps", False) or []
if after is not None:
# set up deps for function
for entry in after.split():
if entry not in existing:
existing.append(entry)
d.setVarFlag(task, "deps", existing)
if before is not None:
# set up things that depend on this func
for entry in before.split():
existing = d.getVarFlag(entry, "deps", False) or []
if task not in existing:
d.setVarFlag(entry, "deps", [task] + existing)
def deltask(task, d):
if task[:3] != "do_":
task = "do_" + task
bbtasks = d.getVar('__BBTASKS', False) or []
if task in bbtasks:
bbtasks.remove(task)
d.delVarFlag(task, 'task')
d.setVar('__BBTASKS', bbtasks)
d.delVarFlag(task, 'deps')
for bbtask in d.getVar('__BBTASKS', False) or []:
deps = d.getVarFlag(bbtask, 'deps', False) or []
if task in deps:
deps.remove(task)
d.setVarFlag(bbtask, 'deps', deps)
def preceedtask(task, with_recrdeptasks, d):
"""
Returns a set of tasks in the current recipe which were specified as
precondition by the task itself ("after") or which listed themselves
as precondition ("before"). Preceeding tasks specified via the
"recrdeptask" are included in the result only if requested. Beware
that this may lead to the task itself being listed.
"""
preceed = set()
preceed.update(d.getVarFlag(task, 'deps') or [])
if with_recrdeptasks:
recrdeptask = d.getVarFlag(task, 'recrdeptask')
if recrdeptask:
preceed.update(recrdeptask.split())
return preceed
def tasksbetween(task_start, task_end, d):
"""
Return the list of tasks between two tasks in the current recipe,
where task_start is to start at and task_end is the task to end at
(and task_end has a dependency chain back to task_start).
"""
outtasks = []
tasks = list(filter(lambda k: d.getVarFlag(k, "task"), d.keys()))
def follow_chain(task, endtask, chain=None):
if not chain:
chain = []
chain.append(task)
for othertask in tasks:
if othertask == task:
continue
if task == endtask:
for ctask in chain:
if ctask not in outtasks:
outtasks.append(ctask)
else:
deps = d.getVarFlag(othertask, 'deps', False)
if task in deps:
follow_chain(othertask, endtask, chain)
chain.pop()
follow_chain(task_start, task_end)
return outtasks
d.delVarFlag(task, 'task')


@@ -28,16 +28,22 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import pickle
from collections import defaultdict
import bb.utils
logger = logging.getLogger("BitBake.Cache")
__cache_version__ = "151"
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "147"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -71,16 +77,16 @@ class RecipeInfoCommon(object):
@classmethod
def flaglist(cls, flag, varlist, metadata, squash=False):
out_dict = dict((var, metadata.getVarFlag(var, flag))
out_dict = dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
if squash:
return dict((k,v) for (k,v) in out_dict.items() if v)
return dict((k,v) for (k,v) in out_dict.iteritems() if v)
else:
return out_dict
@classmethod
def getvar(cls, var, metadata, expand = True):
return metadata.getVar(var, expand) or ''
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
class CoreRecipeInfo(RecipeInfoCommon):
@@ -93,7 +99,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('BB_DONT_CACHE', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -123,6 +129,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_base_clean = self.flaglist('stamp-base-clean', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -134,11 +142,10 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.inherits = self.getvar('__inherit_cache', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@classmethod
def init_cacheData(cls, cachedata):
@@ -151,6 +158,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.stamp = {}
cachedata.stampclean = {}
cachedata.stamp_base = {}
cachedata.stamp_base_clean = {}
cachedata.stamp_extrainfo = {}
cachedata.file_checksums = {}
cachedata.fn_provides = {}
@@ -174,7 +183,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
@@ -184,6 +192,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stampclean[fn] = self.stampclean
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_base_clean[fn] = self.stamp_base_clean
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
cachedata.file_checksums[fn] = self.file_checksums
@@ -210,22 +220,19 @@ class CoreRecipeInfo(RecipeInfoCommon):
rprovides += self.rprovides_pkg[package]
for rprovide in rprovides:
if fn not in cachedata.rproviders[rprovide]:
cachedata.rproviders[rprovide].append(fn)
cachedata.rproviders[rprovide].append(fn)
for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)
# Build hash of runtime depends and recommends
# Build hash of runtime depends and rececommends
for package in self.packages + [self.pn]:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
# Collect files we may need for possible world-dep
# calculations
if self.not_world:
logger.debug(1, "EXCLUDE FROM WORLD: %s", fn)
else:
if not self.not_world:
cachedata.possible_world.append(fn)
# create a collection of all targets for sanity checking
@@ -234,7 +241,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.universe_target.append(self.pn)
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.items():
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
@@ -242,146 +249,24 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('multiconfig:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls, mc)
def realfn2virtual(realfn, cls, mc):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls:
realfn = "virtual:" + cls + ":" + realfn
if mc:
realfn = "multiconfig:" + mc + ":" + realfn
return realfn
def variant2virtual(realfn, variant):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
if variant.startswith("multiconfig:"):
elems = variant.split(":")
if elems[2]:
return "multiconfig:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
return "multiconfig:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["multiconfig:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
class Cache(object):
"""
BitBake Cache implementation
"""
def __init__(self, databuilder, data_hash, caches_array):
super().__init__(databuilder)
data = databuilder.data
def __init__(self, data, data_hash, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# It will be used in later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE")
self.cachedir = data.getVar("CACHE", True)
self.clean = set()
self.checked = set()
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash
@@ -401,78 +286,72 @@ class Cache(NoCache):
cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
def load_cachefile(self):
# First, use the core cache file information for
# validity checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
cachesize = 0
previous_progress = 0
previous_percent = 0
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
# Load the rest of the cache file
current_progress = 0
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if not isinstance(key, str):
bb.warn("%s from extras cache is not a string?" % key)
break
if not isinstance(value, RecipeInfoCommon):
bb.warn("%s from extras cache is not a RecipeInfoCommon class?" % value)
break
if key in self.depends_cache:
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
if current_progress > cachesize:
# we might have calculated incorrect total size because a file
# might've been written out just after we checked its size
cachesize = current_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
previous_progress += current_progress
previous_progress += current_progress
# Note: the depends cache count corresponds to the number of files parsed;
# a file may have several cache entries but is still regarded as one item in the cache
@@ -480,33 +359,69 @@ class Cache(NoCache):
len(self.depends_cache)),
self.data)
def parse(self, filename, appends):
@staticmethod
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls)
@staticmethod
def realfn2virtual(realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
return realfn
return "virtual:" + cls + ":" + realfn
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
logger.debug(1, "Parsing %s (full)", fn)
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
"""Parse the specified filename, returning the recipe information"""
logger.debug(1, "Parsing %s", filename)
infos = []
datastores = self.load_bbfile(filename, appends)
datastores = cls.load_bbfile(filename, appends, configdata)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
for variant, data in sorted(datastores.items(),
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = variant2virtual(filename, variant)
variants.append(variant)
virtualfn = cls.realfn2virtual(filename, variant)
depends = depends + (data.getVar("__depends", False) or [])
if depends and not variant:
data.setVar("__depends", depends)
if virtualfn == filename:
data.setVar("__VARIANTS", " ".join(variants))
info_array = []
for cache_class in self.caches_array:
info = cache_class(filename, data)
info_array.append(info)
for cache_class in caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
return infos
def load(self, filename, appends):
def load(self, filename, appends, configdata):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
@@ -520,20 +435,21 @@ class Cache(NoCache):
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = variant2virtual(filename, variant)
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata, self.caches_array)
return cached, infos
def loadData(self, fn, appends, cacheData):
def loadData(self, fn, appends, cfgData, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends)
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
@@ -610,25 +526,9 @@ class Cache(NoCache):
if hasattr(info_array[0], 'file_checksums'):
for _, fl in info_array[0].file_checksums.items():
fl = fl.strip()
while fl:
# A .split() would be simpler but means spaces or colons in filenames would break
a = fl.find(":True")
b = fl.find(":False")
if ((a < 0) and b) or ((b > 0) and (b < a)):
f = fl[:b+6]
fl = fl[b+7:]
elif ((b < 0) and a) or ((a > 0) and (a < b)):
f = fl[:a+5]
fl = fl[a+6:]
else:
break
fl = fl.strip()
if "*" in f:
continue
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
logger.debug(2, "Cache: %s's file checksum list file %s changed",
for f in fl.split():
if not os.path.exists(f):
logger.debug(2, "Cache: %s's file checksum list file %s was removed",
fn, f)
self.remove(fn)
return False
@@ -641,19 +541,16 @@ class Cache(NoCache):
invalid = False
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
invalid = True
elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
logger.debug(2, "Cache: Extra caches missing for %s?" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
if invalid:
for cls in info_array[0].variants:
virtualfn = variant2virtual(fn, cls)
virtualfn = self.realfn2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
@@ -690,19 +587,30 @@ class Cache(NoCache):
logger.debug(2, "Cache is clean, not saving.")
return
file_dict = {}
pickler_dict = {}
for cache_class in self.caches_array:
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "wb") as f:
p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
p.dump(__cache_version__)
p.dump(bb.__version__)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
file_dict[cache_class_name] = open(cachefile, "wb")
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
for key, info_array in self.depends_cache.items():
for info in info_array:
if isinstance(info, RecipeInfoCommon) and info.__class__.__name__ == cache_class_name:
p.dump(key)
p.dump(info)
try:
for key, info_array in self.depends_cache.iteritems():
for info in info_array:
if isinstance(info, RecipeInfoCommon):
cache_class_name = info.__class__.__name__
pickler_dict[cache_class_name].dump(key)
pickler_dict[cache_class_name].dump(info)
finally:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
file_dict[cache_class_name].close()
del self.depends_cache
@@ -710,13 +618,10 @@ class Cache(NoCache):
def mtime(cachefile):
return bb.parse.cached_mtime_noerror(cachefile)
def add_info(self, filename, info_array, cacheData, parsed=None, watcher=None):
def add_info(self, filename, info_array, cacheData, parsed=None):
if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
cacheData.add_from_recipeinfo(filename, info_array)
if watcher:
watcher(info_array[0].file_depends)
if not self.has_cache:
return
@@ -730,13 +635,50 @@ class Cache(NoCache):
Save data we need into the cache
"""
realfn = virtualfn2realfn(file_name)[0]
realfn = self.virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
@staticmethod
def load_bbfile(bbfile, appends, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import data, parse
# expand tmpdir to include this topdir
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
bb_data = data.init_db(config)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not data.getVar('TOPDIR', bb_data):
chdir_back = True
data.setVar('TOPDIR', bbfile_loc, bb_data)
try:
if appends:
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
def init(cooker):
"""
@@ -750,7 +692,7 @@ def init(cooker):
* Its mtime
* The mtimes of all its dependencies
* Whether it caused a parse.SkipRecipe exception
* Whether it caused a parse.SkipPackage exception
Files causing parsing errors are evicted from the cache.
@@ -766,9 +708,8 @@ class CacheData(object):
def __init__(self, caches_array):
self.caches_array = caches_array
for cache_class in self.caches_array:
if not issubclass(cache_class, RecipeInfoCommon):
bb.error("Extra cache data class %s should subclass RecipeInfoCommon class" % cache_class)
cache_class.init_cacheData(self)
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class.init_cacheData(self)
# Direct cache variables
self.task_queues = {}
@@ -795,14 +736,13 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d, cache_file_name=None):
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
def init_cache(self, d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
self.cachefile = os.path.join(cachedir, self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -822,11 +762,21 @@ class MultiProcessCache(object):
self.cachedata = data
def internSet(self, items):
new = set()
for i in items:
new.add(intern(i))
return new
def compress_keys(self, data):
# Override in subclasses if desired
return
def create_cachedata(self):
data = [{}]
return data
def save_extras(self):
def save_extras(self, d):
if not self.cachefile:
return
@@ -856,13 +806,21 @@ class MultiProcessCache(object):
if h not in dest[j]:
dest[j][h] = source[j][h]
def save_merge(self):
def save_merge(self, d):
if not self.cachefile:
return
glf = bb.utils.lockfile(self.cachefile + ".lock")
data = self.cachedata
try:
with open(self.cachefile, "rb") as f:
p = pickle.Unpickler(f)
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != self.__class__.CACHE_VERSION:
data = self.create_cachedata()
for f in [y for y in os.listdir(os.path.dirname(self.cachefile)) if y.startswith(os.path.basename(self.cachefile) + '-')]:
f = os.path.join(os.path.dirname(self.cachefile), f)
@@ -871,16 +829,16 @@ class MultiProcessCache(object):
p = pickle.Unpickler(fd)
extradata, version = p.load()
except (IOError, EOFError):
os.unlink(f)
continue
extradata, version = self.create_cachedata(), None
if version != self.__class__.CACHE_VERSION:
os.unlink(f)
continue
self.merge_data(extradata, data)
os.unlink(f)
self.compress_keys(data)
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])
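# A hedged sketch of the split/merge flow above: each worker process dumps its
# extra entries to a per-process side file next to the main cache file, and the
# merge step later folds every "<cachefile>-*" side file back into the main
# pickle. File names and the flat-dict data layout here are illustrative only;
# the real classes also take the ".lock" file and check CACHE_VERSION.
import os
import pickle
def sketch_save_extras(cachefile, extras):
    side = "%s-%d" % (cachefile, os.getpid())
    with open(side, "wb") as f:
        pickle.dump([extras, 1], f)
def sketch_save_merge(cachefile):
    merged = {}
    dirname = os.path.dirname(cachefile) or "."
    prefix = os.path.basename(cachefile) + "-"
    for name in os.listdir(dirname):
        if not name.startswith(prefix):
            continue
        side = os.path.join(dirname, name)
        with open(side, "rb") as f:
            extras, _version = pickle.load(f)
        merged.update(extras)
        os.unlink(side)
    with open(cachefile, "wb") as f:
        pickle.dump([merged, 1], f)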


@@ -15,17 +15,22 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glob
import operator
import os
import stat
import pickle
import bb.utils
import logging
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -83,52 +88,3 @@ class FileChecksumCache(MultiProcessCache):
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]
def get_checksums(self, filelist, pn):
"""Get checksums for a list of files"""
def checksum_file(f):
try:
checksum = self.get_checksum(f)
except OSError as e:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
def checksum_dir(pth):
# Handle directories recursively
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue
pth = pth.split(":")[0]
if '*' in pth:
# Handle globs
for f in glob.glob(pth):
if os.path.isdir(f):
if not os.path.islink(f):
checksums.extend(checksum_dir(f))
else:
checksum = checksum_file(f)
if checksum:
checksums.append((f, checksum))
elif os.path.isdir(pth):
if not os.path.islink(pth):
checksums.extend(checksum_dir(pth))
else:
checksum = checksum_file(pth)
if checksum:
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums
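# A hedged re-sketch of the "path:exists" list format consumed above: entries
# are space-separated, ":False" entries are skipped, globs are expanded, and
# results are sorted by checksum. sketch_checksum_file() is a hypothetical
# stand-in for the cache's real per-file checksum helper.
import glob
import hashlib
import os
def sketch_checksum_file(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()
def sketch_get_checksums(filelist):
    checksums = []
    for entry in filelist.split():
        pth, exist = entry.rsplit(":", 1)
        if exist == "False":
            continue
        candidates = glob.glob(pth) if "*" in pth else [pth]
        for f in candidates:
            if os.path.isfile(f):
                checksums.append((f, sketch_checksum_file(f)))
    checksums.sort(key=lambda item: item[1])
    return checksums
# e.g. sketch_get_checksums("/tmp/a.patch:True /tmp/gone.patch:False")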


@@ -1,39 +1,21 @@
"""
BitBake code parser
Parses actual code (i.e. python and shell) for functions and in-line
expressions. Used mainly to determine dependencies on other functions
and variables within the BitBake metadata. Also provides a cache for
this information in order to speed up processing.
(Not to be confused with the code that parses the metadata itself,
see lib/bb/parse/ for that).
NOTE: if you change how the parsers gather information you will almost
certainly need to increment CodeParserCache.CACHE_VERSION below so that
any existing codeparser cache gets invalidated. Additionally you'll need
to increment __cache_version__ in cache.py in order to ensure that old
recipe caches don't trigger "Taskhash mismatch" errors.
"""
import ast
import sys
import codegen
import logging
import pickle
import bb.pysh as pysh
import os.path
import bb.utils, bb.data
import hashlib
from itertools import chain
from bb.pysh import pyshyacc, pyshlex, sherrors
from pysh import pyshyacc, pyshlex, sherrors
from bb.cache import MultiProcessCache
logger = logging.getLogger('BitBake.CodeParser')
def bbhash(s):
return hashlib.md5(s.encode("utf-8")).hexdigest()
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
@@ -46,101 +28,14 @@ def check_indent(codestr):
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
if codestr[0] == "\n":
# Since we're adding a line, we need to remove one line of any empty padding
# to ensure line numbers are correct
codestr = codestr[1:]
return "if 1:\n" + codestr
return codestr
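# A small sketch of the indentation fix-up described above: python code taken
# from an indented context (e.g. the body of a python() block in a recipe) is
# wrapped in "if 1:" so that compile() accepts it without re-flowing the code.
_indented = "    d = {}\n    d['FOO'] = 'bar'\n"
compile("if 1:\n" + _indented, "<string>", "exec")    # parses fine
try:
    compile(_indented, "<string>", "exec")            # bare indented code does not
except IndentationError:
    pass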
# Basically pickle, in python 2.7.3 at least, does badly with data duplication
# upon pickling and unpickling. Combine this with duplicate objects and things
# are a mess.
#
# When the sets are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file.
#
# By having shell and python cacheline objects with setstate/getstate, we force
# the object creation through our own routine where we can call intern (via internSet).
#
# We also use hashable frozensets and ensure we use references to these so that
# duplicates can be removed, both in memory and in the resulting pickled data.
#
# By playing these games, the size of the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
# memory. Smaller cache files, faster load times and lower memory usage is good.
#
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
class SetCache(object):
def __init__(self):
self.setcache = {}
def internSet(self, items):
new = []
for i in items:
new.append(sys.intern(i))
s = frozenset(new)
h = hash(s)
if h in self.setcache:
return self.setcache[h]
self.setcache[h] = s
return s
codecache = SetCache()
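# A minimal usage sketch for the interning cache above (assuming SetCache and
# codecache as defined here): equal collections of strings collapse to one
# shared frozenset object, so repeated cache lines reference, and later pickle,
# a single copy of the data instead of duplicating it.
_a = codecache.internSet(["do_compile", "do_install"])
_b = codecache.internSet(["do_install", "do_compile"])
assert _a is _b                                   # same object, not merely equal
assert _a == frozenset({"do_compile", "do_install"})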
class pythonCacheLine(object):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
def __getstate__(self):
return (self.refs, self.execs, self.contains)
def __setstate__(self, state):
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
def __repr__(self):
return " ".join([str(self.refs), str(self.execs), str(self.contains)])
class shellCacheLine(object):
def __init__(self, execs):
self.execs = codecache.internSet(execs)
def __getstate__(self):
return (self.execs)
def __setstate__(self, state):
(execs) = state
self.__init__(execs)
def __hash__(self):
return hash(self.execs)
def __repr__(self):
return str(self.execs)
class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
# NOTE: you must increment this if you change how the parsers gather information,
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 9
CACHE_VERSION = 3
def __init__(self):
MultiProcessCache.__init__(self)
@@ -149,38 +44,30 @@ class CodeParserCache(MultiProcessCache):
self.pythoncacheextras = self.cachedata_extras[0]
self.shellcacheextras = self.cachedata_extras[1]
# To avoid duplication in the codeparser cache, keep
# a lookup of hashes of objects we already have
self.pythoncachelines = {}
self.shellcachelines = {}
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
self.pythoncachelines[h] = cacheline
return cacheline
def newShellCacheLine(self, execs):
cacheline = shellCacheLine(execs)
h = hash(cacheline)
if h in self.shellcachelines:
return self.shellcachelines[h]
self.shellcachelines[h] = cacheline
return cacheline
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
self.pythoncache = self.cachedata[0]
self.shellcache = self.cachedata[1]
def compress_keys(self, data):
# When the dicts are originally created, python calls intern() on the set keys
# which significantly improves memory usage. Sadly the pickle/unpickle process
# doesn't call intern() on the keys and results in the same strings being duplicated
# in memory. This also means pickle will save the same string multiple times in
# the cache file. By interning the data here, the cache file shrinks dramatically
# meaning faster load times and the reloaded cache files also consume much less
# memory. This is worth any performance hit from this loops and the use of the
# intern() data storage.
# Python 3.x may behave better in this area
for h in data[0]:
data[0][h]["refs"] = self.internSet(data[0][h]["refs"])
data[0][h]["execs"] = self.internSet(data[0][h]["execs"])
for h in data[1]:
data[1][h]["execs"] = self.internSet(data[1][h]["execs"])
return
def create_cachedata(self):
data = [{}, {}]
return data
@@ -190,11 +77,11 @@ codeparsercache = CodeParserCache()
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
def parser_cache_save(d):
codeparsercache.save_extras(d)
def parser_cache_savemerge():
codeparsercache.save_merge()
def parser_cache_savemerge(d):
codeparsercache.save_merge(d)
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -209,15 +96,12 @@ class BufferedLogger(Logger):
def flush(self):
for record in self.buffer:
if self.target.isEnabledFor(record.levelno):
self.target.handle(record)
self.target.handle(record)
self.buffer = []
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
containsfuncs = ("bb.utils.contains", "base_contains")
containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
getvars = ("d.getVar", "bb.data.getVar", "data.getVar", "d.appendVar", "d.prependVar")
containsfuncs = ("bb.utils.contains", "base_contains", "oe.utils.contains")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
@@ -236,37 +120,11 @@ class PythonParser():
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if name in self.getvars or name in self.containsfuncs:
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
self.references.add(varname)
self.var_references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
self.execs |= parser.execs
for varname in parser.contains:
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
@@ -289,50 +147,42 @@ class PythonParser():
break
def __init__(self, name, log):
self.name = name
self.var_references = set()
self.var_execs = set()
self.contains = {}
self.execs = set()
self.references = set()
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
def parse_python(self, node, lineno=0, filename="<string>"):
if not node or not node.strip():
return
h = bbhash(str(node))
def parse_python(self, node):
h = hash(str(node))
if h in codeparsercache.pythoncache:
self.references = set(codeparsercache.pythoncache[h].refs)
self.execs = set(codeparsercache.pythoncache[h].execs)
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
self.references = codeparsercache.pythoncache[h]["refs"]
self.execs = codeparsercache.pythoncache[h]["execs"]
return
if h in codeparsercache.pythoncacheextras:
self.references = set(codeparsercache.pythoncacheextras[h].refs)
self.execs = set(codeparsercache.pythoncacheextras[h].execs)
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
self.references = codeparsercache.pythoncacheextras[h]["refs"]
self.execs = codeparsercache.pythoncacheextras[h]["execs"]
return
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
for n in ast.walk(code):
if n.__class__.__name__ == "Call":
self.visit_Call(n)
self.execs.update(self.var_execs)
self.references.update(self.var_references)
self.references.update(self.var_execs)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
codeparsercache.pythoncacheextras[h] = {}
codeparsercache.pythoncacheextras[h]["refs"] = self.references
codeparsercache.pythoncacheextras[h]["execs"] = self.execs
class ShellParser():
def __init__(self, name, log):
@@ -348,30 +198,29 @@ class ShellParser():
commands it executes.
"""
h = bbhash(str(value))
h = hash(str(value))
if h in codeparsercache.shellcache:
self.execs = set(codeparsercache.shellcache[h].execs)
self.execs = codeparsercache.shellcache[h]["execs"]
return self.execs
if h in codeparsercache.shellcacheextras:
self.execs = set(codeparsercache.shellcacheextras[h].execs)
self.execs = codeparsercache.shellcacheextras[h]["execs"]
return self.execs
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
codeparsercache.shellcacheextras[h] = codeparsercache.newShellCacheLine(self.execs)
return self.execs
def _parse_shell(self, value):
try:
tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
self.process_tokens(tokens)
for token in tokens:
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
codeparsercache.shellcacheextras[h] = {}
codeparsercache.shellcacheextras[h]["execs"] = self.execs
return self.execs
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
@@ -417,24 +266,18 @@ class ShellParser():
"case_clause": case_clause,
}
def process_token_list(tokens):
for token in tokens:
if isinstance(token, list):
process_token_list(token)
continue
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
process_token_list(tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes
@@ -451,7 +294,7 @@ class ShellParser():
if part[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(part[1:-1])
self._parse_shell(command)
self.parse_shell(command)
if word[0] in ("cmd_name", "cmd_word"):
if word in words:
@@ -470,7 +313,7 @@ class ShellParser():
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self._parse_shell(command)
self.parse_shell(command)
else:
self.allexecs.add(cmd)
break


@@ -28,15 +28,8 @@ and must not trigger events, directly or indirectly.
Commands are queued in a CommandQueue
"""
from collections import OrderedDict, defaultdict
import bb.event
import bb.cooker
import bb.remotedata
class DataStoreConnectionHandle(object):
def __init__(self, dsindex=0):
self.dsindex = dsindex
class CommandCompleted(bb.event.Event):
pass
@@ -62,7 +55,6 @@ class Command:
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
self.remotedatastores = bb.remotedata.RemoteDatastores(cooker)
# FIXME Add lock for this
self.currentAsyncCommand = None
@@ -76,12 +68,10 @@ class Command:
if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
if getattr(command_method, 'needconfig', False):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except (Exception, SystemExit):
except Exception:
import traceback
return None, traceback.format_exc()
else:
@@ -96,10 +86,7 @@ class Command:
def runAsyncCommand(self):
try:
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
self.cooker.updateCache()
if self.cooker.state == bb.cooker.state.error:
return False
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
@@ -118,7 +105,7 @@ class Command:
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, str):
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
@@ -133,20 +120,14 @@ class Command:
def finishAsyncCommand(self, msg=None, code=None):
if msg or msg == "":
bb.event.fire(CommandFailed(msg), self.cooker.data)
bb.event.fire(CommandFailed(msg), self.cooker.event_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.data)
bb.event.fire(CommandExit(code), self.cooker.event_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.data)
bb.event.fire(CommandCompleted(), self.cooker.event_data)
self.currentAsyncCommand = None
self.cooker.finishcommand()
def split_mc_pn(pn):
if pn.startswith("multiconfig:"):
_, mc, pn = pn.split(":", 2)
return (mc, pn)
return ('', pn)
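# Usage sketch for split_mc_pn() above (assuming it is in scope): a
# "multiconfig:<mc>:<pn>" target is split into its multiconfig name and recipe
# name, while plain targets get an empty multiconfig.
assert split_mc_pn("multiconfig:musl:busybox") == ("musl", "busybox")
assert split_mc_pn("busybox") == ("", "busybox")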
class CommandsSync:
"""
A class of synchronous commands
@@ -193,19 +174,8 @@ class CommandsSync:
"""
varname = params[0]
value = str(params[1])
command.cooker.extraconfigdata[varname] = value
command.cooker.data.setVar(varname, value)
def getSetVariable(self, command, params):
"""
Read the value of a variable from data and set it into the datastore
which effectively expands and locks the value.
"""
varname = params[0]
result = self.getVariable(command, params)
command.cooker.data.setVar(varname, result)
return result
def setConfig(self, command, params):
"""
Set the value of variable in configuration
@@ -226,17 +196,66 @@ class CommandsSync:
"""
command.cooker.disableDataTracking()
def setPrePostConfFiles(self, command, params):
prefiles = params[0].split()
postfiles = params[1].split()
command.cooker.configuration.prefile = prefiles
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False
def initCooker(self, command, params):
"""
Init the cooker to initial state with nothing parsed
"""
command.cooker.initialize()
def resetCooker(self, command, params):
"""
Reset the cooker to its initial state, thus forcing a reparse for
any async command that has the needcache property set to True
"""
command.cooker.reset()
def getCpuCount(self, command, params):
"""
Get the CPU count on the bitbake server
"""
return bb.utils.cpu_count()
getCpuCount.readonly = True
def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False
def generateNewImage(self, command, params):
image = params[0]
base_image = params[1]
package_queue = params[2]
timestamp = params[3]
description = params[4]
return command.cooker.generateNewImage(image, base_image,
package_queue, timestamp, description)
def ensureDir(self, command, params):
directory = params[0]
bb.utils.mkdirhier(directory)
def setVarFile(self, command, params):
"""
Save a variable in a file; used for saving in a configuration file
"""
var = params[0]
val = params[1]
default_file = params[2]
op = params[3]
command.cooker.modifyConfigurationVar(var, val, default_file, op)
def removeVarFile(self, command, params):
"""
Remove a variable declaration from a file
"""
var = params[0]
command.cooker.removeConfigurationVar(var)
def createConfigFile(self, command, params):
"""
Create an extra configuration file
"""
name = params[0]
command.cooker.createConfigFile(name)
def setEventMask(self, command, params):
handlerNum = params[0]
@@ -244,290 +263,6 @@ class CommandsSync:
debug_domains = params[2]
mask = params[3]
return bb.event.set_UIHmask(handlerNum, llevel, debug_domains, mask)
setEventMask.needconfig = False
setEventMask.readonly = True
def setFeatures(self, command, params):
"""
Set the cooker features to include the passed list of features
"""
features = params[0]
command.cooker.setFeatures(features)
setFeatures.needconfig = False
# although we change the internal state of the cooker, this is transparent since
# we always take and leave the cooker in state.initial
setFeatures.readonly = True
def updateConfig(self, command, params):
options = params[0]
environment = params[1]
cmdline = params[2]
command.cooker.updateConfigOpts(options, environment, cmdline)
updateConfig.needconfig = False
def parseConfiguration(self, command, params):
"""Instruct bitbake to parse its configuration
NOTE: it is only necessary to call this if you aren't calling any normal action
(otherwise parsing is taken care of automatically)
"""
command.cooker.parseConfiguration()
parseConfiguration.needconfig = False
def getLayerPriorities(self, command, params):
ret = []
# regex objects cannot be marshalled by xmlrpc
for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
ret.append((collection, pattern, regex.pattern, pri))
return ret
getLayerPriorities.readonly = True
def getRecipes(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].pkg_pn.items())
getRecipes.readonly = True
def getRecipeDepends(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(command.cooker.recipecaches[mc].deps.items())
getRecipeDepends.readonly = True
def getRecipeVersions(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_pepvpr
getRecipeVersions.readonly = True
def getRuntimeDepends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
rundeps = command.cooker.recipecaches[mc].rundeps
for key, value in rundeps.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeDepends.readonly = True
def getRuntimeRecommends(self, command, params):
ret = []
try:
mc = params[0]
except IndexError:
mc = ''
runrecs = command.cooker.recipecaches[mc].runrecs
for key, value in runrecs.items():
if isinstance(value, defaultdict):
value = dict(value)
ret.append((key, value))
return ret
getRuntimeRecommends.readonly = True
def getRecipeInherits(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].inherits
getRecipeInherits.readonly = True
def getBbFilePriority(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].bbfile_priority
getBbFilePriority.readonly = True
def getDefaultPreference(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True
def getSkippedRecipes(self, command, params):
# Return list sorted by reverse priority order
import bb.cache
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(),
key=lambda x: (-command.cooker.collection.calc_bbfile_priority(bb.cache.virtualfn2realfn(x[0])[0]), x[0])))
return list(skipdict.items())
getSkippedRecipes.readonly = True
def getOverlayedRecipes(self, command, params):
return list(command.cooker.collection.overlayed.items())
getOverlayedRecipes.readonly = True
def getFileAppends(self, command, params):
fn = params[0]
return command.cooker.collection.get_file_appends(fn)
getFileAppends.readonly = True
def getAllAppends(self, command, params):
return command.cooker.collection.bbappends
getAllAppends.readonly = True
def findProviders(self, command, params):
return command.cooker.findProviders()
findProviders.readonly = True
def findBestProvider(self, command, params):
(mc, pn) = split_mc_pn(params[0])
return command.cooker.findBestProvider(pn, mc)
findBestProvider.readonly = True
def allProviders(self, command, params):
try:
mc = params[0]
except IndexError:
mc = ''
return list(bb.providers.allProviders(command.cooker.recipecaches[mc]).items())
allProviders.readonly = True
def getRuntimeProviders(self, command, params):
rprovide = params[0]
try:
mc = params[1]
except IndexError:
mc = ''
all_p = bb.providers.getRuntimeProviders(command.cooker.recipecaches[mc], rprovide)
if all_p:
best = bb.providers.filterProvidersRunTime(all_p, rprovide,
command.cooker.data,
command.cooker.recipecaches[mc])[0][0]
else:
best = None
return all_p, best
getRuntimeProviders.readonly = True
def dataStoreConnectorFindVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
value, overridedata = datastore._findVar(name)
if value:
content = value.get('_content', None)
if isinstance(content, bb.data_smart.DataSmart):
# Value is a datastore (e.g. BB_ORIGENV) - need to handle this carefully
idx = command.remotedatastores.check_store(content, True)
return {'_content': DataStoreConnectionHandle(idx),
'_connector_origtype': 'DataStoreConnectionHandle',
'_connector_overrides': overridedata}
elif isinstance(content, set):
return {'_content': list(content),
'_connector_origtype': 'set',
'_connector_overrides': overridedata}
else:
value['_connector_overrides'] = overridedata
else:
value = {}
value['_connector_overrides'] = overridedata
return value
dataStoreConnectorFindVar.readonly = True
def dataStoreConnectorGetKeys(self, command, params):
dsindex = params[0]
datastore = command.remotedatastores[dsindex]
return list(datastore.keys())
dataStoreConnectorGetKeys.readonly = True
def dataStoreConnectorGetVarHistory(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
return datastore.varhistory.variable(name)
dataStoreConnectorGetVarHistory.readonly = True
def dataStoreConnectorExpandPythonRef(self, command, params):
config_data_dict = params[0]
varname = params[1]
expr = params[2]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
varparse = bb.data_smart.VariableParse(varname, config_data)
return varparse.python_sub(expr)
def dataStoreConnectorRelease(self, command, params):
dsindex = params[0]
if dsindex <= 0:
raise CommandError('dataStoreConnectorRelease: invalid index %d' % dsindex)
command.remotedatastores.release(dsindex)
def dataStoreConnectorSetVarFlag(self, command, params):
dsindex = params[0]
name = params[1]
flag = params[2]
value = params[3]
datastore = command.remotedatastores[dsindex]
datastore.setVarFlag(name, flag, value)
def dataStoreConnectorDelVar(self, command, params):
dsindex = params[0]
name = params[1]
datastore = command.remotedatastores[dsindex]
if len(params) > 2:
flag = params[2]
datastore.delVarFlag(name, flag)
else:
datastore.delVar(name)
def dataStoreConnectorRenameVar(self, command, params):
dsindex = params[0]
name = params[1]
newname = params[2]
datastore = command.remotedatastores[dsindex]
datastore.renameVar(name, newname)
def parseRecipeFile(self, command, params):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
"""
fn = params[0]
appends = params[1]
appendlist = params[2]
if len(params) > 3:
config_data_dict = params[3]
config_data = command.remotedatastores.receive_datastore(config_data_dict)
else:
config_data = None
if appends:
if appendlist is not None:
appendfiles = appendlist
else:
appendfiles = command.cooker.collection.get_file_appends(fn)
else:
appendfiles = []
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
# version of)
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = bb.cache.parse_recipe(config_data, fn, appendfiles)['']
else:
# Use the standard path
parser = bb.cache.NoCache(command.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
class CommandsAsync:
"""
@@ -542,12 +277,8 @@ class CommandsAsync:
"""
bfile = params[0]
task = params[1]
if len(params) > 2:
hidewarning = params[2]
else:
hidewarning = False
command.cooker.buildFile(bfile, task, hidewarning)
command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
@@ -597,6 +328,17 @@ class CommandsAsync:
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findCoreBaseFiles(self, command, params):
"""
Find certain files in COREBASE directory. i.e. Layers
"""
subdir = params[0]
filename = params[1]
command.cooker.findCoreBaseFiles(subdir, filename)
command.finishAsyncCommand()
findCoreBaseFiles.needcache = False
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
@@ -678,6 +420,18 @@ class CommandsAsync:
command.finishAsyncCommand()
compareRevisions.needcache = True
def parseConfigurationFiles(self, command, params):
"""
Parse the configuration files
"""
prefiles = params[0].split()
postfiles = params[1].split()
command.cooker.configuration.prefile = prefiles
command.cooker.configuration.postfile = postfiles
command.cooker.loadConfigurationData()
command.finishAsyncCommand()
parseConfigurationFiles.needcache = False
def triggerEvent(self, command, params):
"""
Trigger a certain event
@@ -687,20 +441,3 @@ class CommandsAsync:
command.currentAsyncCommand = None
triggerEvent.needcache = False
def resetCooker(self, command, params):
"""
Reset the cooker to its initial state, thus forcing a reparse for
any async command that has the needcache property set to True
"""
command.cooker.reset()
command.finishAsyncCommand()
resetCooker.needcache = False
def clientComplete(self, command, params):
"""
Do the right thing when the controlling client exits
"""
command.cooker.clientComplete()
command.finishAsyncCommand()
clientComplete.needcache = False

File diff suppressed because it is too large


@@ -22,11 +22,9 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import os
import re
import sys
import os, sys
from functools import wraps
import logging
import bb
from bb import data
import bb.parse
@@ -35,8 +33,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")
class ConfigParameters(object):
def __init__(self, argv=sys.argv):
self.options, targets = self.parseCommandLine(argv)
def __init__(self):
self.options, targets = self.parseCommandLine()
self.environment = self.parseEnvironment()
self.options.pkgs_to_build = targets or []
@@ -48,7 +46,7 @@ class ConfigParameters(object):
for key, val in self.options.__dict__.items():
setattr(self, key, val)
def parseCommandLine(self, argv=sys.argv):
def parseCommandLine(self):
raise Exception("Caller must implement commandline option parsing")
def parseEnvironment(self):
@@ -65,24 +63,12 @@ class ConfigParameters(object):
raise Exception("Unable to set configuration option 'cmd' on the server: %s" % error)
if not self.options.pkgs_to_build:
bbpkgs, error = server.runCommand(["getVariable", "BBTARGETS"])
bbpkgs, error = server.runCommand(["getVariable", "BBPKGS"])
if error:
raise Exception("Unable to get the value of BBTARGETS from the server: %s" % error)
raise Exception("Unable to get the value of BBPKGS from the server: %s" % error)
if bbpkgs:
self.options.pkgs_to_build.extend(bbpkgs.split())
def updateToServer(self, server, environment):
options = {}
for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp",
"verbose", "debug", "dry_run", "dump_signatures",
"debug_domains", "extra_assume_provided", "profile",
"prefile", "postfile", "tracking"]:
options[o] = getattr(self.options, o)
ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
if error:
raise Exception("Unable to update the server configuration with local parameters: %s" % error)
def parseActions(self):
# Parse any commandline into actions
action = {'action':None, 'msg':None}
@@ -131,24 +117,16 @@ class CookerConfiguration(object):
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.prefile_server = []
self.postfile_server = []
self.debug = 0
self.cmd = None
self.abort = True
self.force = False
self.profile = False
self.nosetscene = False
self.setsceneonly = False
self.invalidate_stamp = False
self.dump_signatures = []
self.dump_signatures = False
self.dry_run = False
self.tracking = False
self.interface = []
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
self.runall = None
self.env = {}
@@ -182,26 +160,11 @@ def catch_parse_error(func):
def wrapped(fn, *args):
try:
return func(fn, *args)
except IOError as exc:
except (IOError, bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
import traceback
parselog.critical(traceback.format_exc())
parselog.critical( traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
sys.exit(1)
except bb.data_smart.ExpansionError as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
exc_class, exc, tb = sys.exc_info()
for tb in iter(lambda: tb.tb_next, None):
# Skip frames in bitbake itself, we only want the metadata
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
sys.exit(1)
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
sys.exit(1)
return wrapped
@catch_parse_error
@@ -215,7 +178,7 @@ def _inherit(bbclass, data):
def findConfigFile(configfile, data):
search = []
bbpath = data.getVar("BBPATH")
bbpath = data.getVar("BBPATH", True)
if bbpath:
for i in bbpath.split(":"):
search.append(os.path.join(i, "conf", configfile))
@@ -240,9 +203,9 @@ class CookerDataBuilder(object):
bb.utils.set_context(bb.utils.clean_context())
bb.event.set_class_handlers(bb.event.clean_class_handlers())
self.basedata = bb.data.init()
self.data = bb.data.init()
if self.tracking:
self.basedata.enableTracking()
self.data.enableTracking()
# Keep a datastore of the initial environment variables and their
# values from when BitBake was launched to enable child processes
@@ -253,63 +216,27 @@ class CookerDataBuilder(object):
self.savedenv.setVar(k, cookercfg.env[k])
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
bb.data.inheritFromOS(self.data, self.savedenv, filtered_keys)
self.data.setVar("BB_ORIGENV", self.savedenv)
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
self.data = self.basedata
self.mcdata = {}
self.data.setVar("BB_WORKERCONTEXT", "1")
def parseBaseConfiguration(self):
try:
bb.parse.init_parser(self.basedata)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
reparse_cnt = 0
while self.data.getVar("BB_INVALIDCONF", False) is True:
if reparse_cnt > 20:
logger.error("Configuration has been re-parsed over 20 times, "
"breaking out of the loop...")
raise Exception("Too deep config re-parse loop. Check locations where "
"BB_INVALIDCONF is being set (ConfigParsed event handlers)")
self.data.setVar("BB_INVALIDCONF", False)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
reparse_cnt += 1
bb.event.fire(bb.event.ConfigParsed(), self.data)
bb.parse.init_parser(self.data)
self.data_hash = self.data.get_hash()
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
for config in multiconfig:
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException
except bb.data_smart.ExpansionError as e:
logger.error(str(e))
raise bb.BBHandledException
self.parseConfigurationFiles(self.prefiles, self.postfiles)
except SyntaxError:
sys.exit(1)
except Exception:
logger.exception("Error parsing configuration files")
raise bb.BBHandledException
sys.exit(1)
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
def parseConfigurationFiles(self, prefiles, postfiles, mc = "default"):
data = bb.data.createCopy(self.basedata)
data.setVar("BB_CURRENT_MC", mc)
def parseConfigurationFiles(self, prefiles, postfiles):
data = self.data
bb.parse.init_parser(data)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
@@ -323,51 +250,18 @@ class CookerDataBuilder(object):
data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
data = parse_config_file(layerconf, data)
layers = (data.getVar('BBLAYERS') or "").split()
layers = (data.getVar('BBLAYERS', True) or "").split()
data = bb.data.createCopy(data)
approved = bb.utils.approved_variables()
for layer in layers:
if not os.path.isdir(layer):
parselog.critical("Layer directory '%s' does not exist! "
"Please check BBLAYERS in %s" % (layer, layerconf))
sys.exit(1)
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
layer = layer.rstrip('/')
data.setVar('LAYERDIR', layer)
data.setVar('LAYERDIR_RE', re.escape(layer))
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
data.expandVarref('LAYERDIR')
data.expandVarref('LAYERDIR_RE')
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
invalid = []
for entry in bbfiles_dynamic:
parts = entry.split(":", 1)
if len(parts) != 2:
invalid.append(entry)
continue
l, f = parts
if l in collections:
data.appendVar("BBFILES", " " + f)
if invalid:
bb.fatal("BBFILES_DYNAMIC entries must be of the form <collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
for c in collections:
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
if not data.getVar("BBPATH"):
if not data.getVar("BBPATH", True):
msg = "The BBPATH variable is not set"
if not layerconf:
msg += (" and bitbake did not find a conf/bblayers.conf file in"
@@ -382,21 +276,29 @@ class CookerDataBuilder(object):
data = parse_config_file(p, data)
# Handle any INHERITs and inherit the base class
bbclasses = ["base"] + (data.getVar('INHERIT') or "").split()
bbclasses = ["base"] + (data.getVar('INHERIT', True) or "").split()
for bbclass in bbclasses:
data = _inherit(bbclass, data)
# Nomally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
sys.exit(1)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
for var in data.getVar('__BBHANDLERS') or []:
bb.event.register(var, data.getVar(var), (data.getVarFlag(var, "eventmask", True) or "").split())
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
bb.event.fire(bb.event.ConfigParsed(), data)
if data.getVar("BB_INVALIDCONF") is True:
data.setVar("BB_INVALIDCONF", False)
self.parseConfigurationFiles(self.prefiles, self.postfiles)
return
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.data = data
self.data_hash = data.get_hash()
return data
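The most visible API difference running through this file is the datastore getVar signature: the newer code calls d.getVar("BBPATH") and expands by default, while the dora-era code being restored passes the expand flag explicitly, d.getVar("BBPATH", True). A small hedged sketch of what that expand flag controls, using a hypothetical stand-in datastore rather than the real bb.data_smart.DataSmart:

# Sketch of the getVar(..., expand) flag seen in the hunks above.
# MiniData is a hypothetical stand-in, not the real bb.data_smart.DataSmart.
import re

class MiniData:
    def __init__(self):
        self._vars = {}
    def setVar(self, k, v):
        self._vars[k] = v
    def getVar(self, k, expand=True):
        val = self._vars.get(k)
        if val is None or not expand:
            return val
        # expand ${NAME} references recursively, as DataSmart.expand() would
        return re.sub(r"\$\{([^}]+)\}",
                      lambda m: self.getVar(m.group(1)) or "", val)

d = MiniData()
d.setVar("TOPDIR", "/srv/build")
d.setVar("BBPATH", "${TOPDIR}/meta:${TOPDIR}/meta-local")
print(d.getVar("BBPATH", False))  # raw:      ${TOPDIR}/meta:${TOPDIR}/meta-local
print(d.getVar("BBPATH"))         # expanded: /srv/build/meta:/srv/build/meta-local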


@@ -1,5 +1,5 @@
"""
Python Daemonizing helper
Python Deamonizing helper
Configurable daemon behaviors:
@@ -12,11 +12,8 @@ A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
http://www.apuebook.com/apue3e.html
2) The Linux Programming Interface: Michael Kerrisk
http://man7.org/tlpi/index.html
3) Unix Programming Frequently Asked Questions:
http://www.faqs.org/faqs/unix-faq/programmer/faq/
2) Unix Programming Frequently Asked Questions:
http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
@@ -28,7 +25,7 @@ __version__ = "0.2"
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
@@ -131,7 +128,7 @@ def createDaemon(function, logfile):
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exist, use
# number of open file descriptors to close. If it doesn't exists, use
# the default value (configurable).
#
# try:
@@ -149,7 +146,7 @@ def createDaemon(function, logfile):
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is no limit on the
# that can be opened by this process. If there is not limit on the
# resource, use the default value.
#
import resource # Resource usage information.
@@ -178,8 +175,8 @@ def createDaemon(function, logfile):
# os.dup2(0, 2) # standard error (2)
si = open('/dev/null', 'r')
so = open(logfile, 'w')
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
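The hunks above mostly revert comment wording and the Python 3 open() calls back to file(); the underlying createDaemon() approach of double-forking, detaching from the controlling terminal and redirecting stdio is unchanged. A minimal sketch of that classic double fork, assuming a POSIX os.fork; this is an illustration, not the bb.daemonize implementation.

# Minimal double-fork daemonization sketch (POSIX only); illustrates the
# approach used by createDaemon() above, it is not the bb.daemonize code.
import os
import sys

def daemonize(function, logfile):
    if os.fork() != 0:          # first fork: the parent returns to the caller
        return
    os.setsid()                 # become session leader, drop controlling tty
    if os.fork() != 0:          # second fork: the session leader exits
        os._exit(0)
    si = open("/dev/null", "r")
    so = open(logfile, "w")
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(so.fileno(), sys.stderr.fileno())
    try:
        function()
    finally:
        os._exit(0)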


@@ -6,7 +6,7 @@ BitBake 'Data' implementations
Functions for interacting with the data structure used by the
BitBake build tools.
The expandKeys and update_data are the most expensive
The expandData and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
@@ -15,7 +15,7 @@ Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.
This is a trade-off between speed and memory again but
This is a treade-off between speed and memory again but
the speed is more critical here.
"""
@@ -35,7 +35,7 @@ the speed is more critical here.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import sys, os, re
if sys.argv[0][-5:] == "pydoc":
@@ -78,6 +78,59 @@ def initVar(var, d):
"""Non-destructive var init for data structure"""
d.initVar(var)
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
def getVar(var, d, exp = 0):
"""Gets the value of a variable"""
return d.getVar(var, exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
d.delVar(var)
def appendVar(var, value, d):
"""Append additional value to a variable"""
d.appendVar(var, value)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
Note:
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
"""
d.setVarFlags(var, flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
return d.keys()
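The block above reintroduces the module-level accessor wrappers (bb.data.setVar, getVar, setVarFlag and friends) that simply forward to the datastore object; newer code calls the methods on d directly. A short usage sketch of that forwarding style, against a hypothetical dict-backed datastore rather than a real DataSmart:

# Sketch of the thin module-level wrappers re-added above; they only forward
# to the datastore object.  MiniData is hypothetical, not bb.data_smart.
class MiniData:
    def __init__(self):
        self._vars = {}
    def setVar(self, var, value):
        self._vars[var] = value
    def getVar(self, var, exp=0):
        return self._vars.get(var)

def setVar(var, value, d):
    """Module-level form, as in bb.data: forwards to d.setVar()."""
    d.setVar(var, value)

def getVar(var, d, exp=0):
    return d.getVar(var, exp)

d = MiniData()
setVar("PN", "busybox", d)   # old-style call through the bb.data wrappers
print(getVar("PN", d))       # -> busybox
print(d.getVar("PN"))        # same result, calling the datastore directly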
@@ -106,12 +159,13 @@ def expandKeys(alterdata, readdata = None):
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for key in sorted(todolist):
for key in todolist:
ekey = todolist[key]
newval = alterdata.getVar(ekey, False)
if newval is not None:
val = alterdata.getVar(key, False)
if val is not None:
newval = alterdata.getVar(ekey, 0)
if newval:
val = alterdata.getVar(key, 0)
if val is not None and newval is not None:
bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
alterdata.renameVar(key, ekey)
@@ -121,7 +175,7 @@ def inheritFromOS(d, savedenv, permitted):
for s in savedenv.keys():
if s in permitted:
try:
d.setVar(s, savedenv.getVar(s), op = 'from env')
d.setVar(s, getVar(s, savedenv, True), op = 'from env')
if s in exportlist:
d.setVarFlag(s, "export", True, op = 'auto env export')
except TypeError:
@@ -129,52 +183,44 @@ def inheritFromOS(d, savedenv, permitted):
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
func = d.getVarFlag(var, "func", False)
if d.getVarFlag(var, 'python', False) and func:
return False
if getVarFlag(var, "python", d):
return 0
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
export = getVarFlag(var, "export", d)
unexport = getVarFlag(var, "unexport", d)
func = getVarFlag(var, "func", d)
if not all and not export and not unexport and not func:
return False
return 0
try:
if all:
oval = d.getVar(var, False)
val = d.getVar(var)
oval = getVar(var, d, 0)
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
return False
return 0
if all:
d.varhistory.emit(var, oval, val, o, d)
d.varhistory.emit(var, oval, val, o)
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return False
return 0
varExpanded = d.expand(var)
varExpanded = expand(var, d)
if unexport:
o.write('unset %s\n' % varExpanded)
return False
return 0
if val is None:
return False
return 0
val = str(val)
if varExpanded.startswith("BASH_FUNC_"):
varExpanded = varExpanded[10:-2]
val = val[3:] # Strip off "() "
o.write("%s() %s\n" % (varExpanded, val))
o.write("export -f %s\n" % (varExpanded))
return True
if func:
# NOTE: should probably check for unbalanced {} within the var
val = val.rstrip('\n')
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
return 1
@@ -185,33 +231,30 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val)
alter = re.sub('\n', ' \\\n', alter)
alter = re.sub('\\$', '\\\\$', alter)
o.write('%s="%s"\n' % (varExpanded, alter))
return False
return 0
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func", False))
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in sorted(keys):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
def exported_vars(d):
k = list(exported_keys(d))
for key in k:
for key in exported_keys(d):
try:
value = d.getVar(key)
except Exception as err:
bb.warn("%s: Unable to export ${%s}: %s" % (d.getVar("FILE"), key, err))
continue
value = d.getVar(key, True)
except Exception:
pass
if value is not None:
yield key, str(value)
@@ -219,59 +262,23 @@ def exported_vars(d):
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func", False))
for key in sorted(keys):
emit_var(key, o, d, False)
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func))
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
if d.getVarFlag(dep, "func"):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps -= seen
_functionfmt = """
def {function}(d):
{body}"""
def emit_func_python(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
def write_func(func, o, call = False):
body = d.getVar(func, False)
if not body.startswith("def"):
body = _functionfmt.format(function=func, body=body)
o.write(body.strip() + "\n\n")
if call:
o.write(func + "(d)" + "\n\n")
write_func(func, o, True)
pp = bb.codeparser.PythonParser(func, logger)
pp.parse_python(d.getVar(func, False))
newdeps = pp.execs
newdeps |= set((d.getVarFlag(func, "vardeps") or "").split())
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and d.getVarFlag(dep, "python", False):
write_func(dep, o)
pp = bb.codeparser.PythonParser(dep, logger)
pp.parse_python(d.getVar(dep, False))
newdeps |= pp.execs
newdeps |= set((d.getVarFlag(dep, "vardeps") or "").split())
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
def update_data(d):
@@ -288,65 +295,33 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude"]) or {}
vardeps = varflags.get("vardeps")
value = d.getVar(key, False)
def handle_contains(value, contains, d):
newvalue = ""
for k in sorted(contains):
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
newvalue += "\n%s{%s} = Unset" % (k, item)
break
else:
newvalue += "\n%s{%s} = Set" % (k, item)
if not newvalue:
return value
if not value:
return newvalue
return value + newvalue
if "vardepvalue" in varflags:
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
if value and "\t" in value:
logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE")))
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
if parsedvar.value and "\t" in parsedvar.value:
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
else:
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
if vardeps is None:
parser.log.flush()
if "prefuncs" in varflags:
deps = deps | set(varflags["prefuncs"].split())
if "postfuncs" in varflags:
deps = deps | set(varflags["postfuncs"].split())
if "exports" in varflags:
deps = deps | set(varflags["exports"].split())
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
else:
parser = d.expandWithRefs(value, key)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
if "vardepvalueexclude" in varflags:
exclude = varflags.get("vardepvalueexclude")
for excl in exclude.split('|'):
if excl:
value = value.replace(excl, '')
# Add varflags, assuming an exclusion list is set
if varflagsexcl:
@@ -359,11 +334,8 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= set((vardeps or "").split())
deps -= set(varflags.get("vardepsexclude", "").split())
except bb.parse.SkipRecipe:
raise
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
raise bb.data_smart.ExpansionError(key, None, e)
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
@@ -371,13 +343,13 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
def generate_dependencies(d):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS')
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
deps = {}
values = {}
tasklist = d.getVar('__BBTASKS', False) or []
tasklist = d.getVar('__BBTASKS') or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
newdeps = deps[task]
@@ -395,7 +367,7 @@ def generate_dependencies(d):
return tasklist, deps, values
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
val = getVar('__inherit_cache', d) or []
needle = os.path.join('classes', '%s.bbclass' % klass)
for v in val:
if v.endswith(needle):


@@ -39,8 +39,8 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t ]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def infer_caller_details(loginfo, parent = False, varval = True):
@@ -54,36 +54,27 @@ def infer_caller_details(loginfo, parent = False, varval = True):
return
# Infer caller's likely values for variable (var) and value (value),
# to reduce clutter in the rest of the code.
above = None
def set_above():
if varval and ('variable' not in loginfo or 'detail' not in loginfo):
try:
raise Exception
except Exception:
tb = sys.exc_info()[2]
if parent:
return tb.tb_frame.f_back.f_back.f_back
above = tb.tb_frame.f_back.f_back
else:
return tb.tb_frame.f_back.f_back
if varval and ('variable' not in loginfo or 'detail' not in loginfo):
if not above:
above = set_above()
lcls = above.f_locals.items()
above = tb.tb_frame.f_back
lcls = above.f_locals.items()
for k, v in lcls:
if k == 'value' and 'detail' not in loginfo:
loginfo['detail'] = v
if k == 'var' and 'variable' not in loginfo:
loginfo['variable'] = v
# Infer file/line/function from traceback
# Don't use traceback.extract_stack() since it fills the line contents which
# we don't need and that hits stat syscalls
if 'file' not in loginfo:
if not above:
above = set_above()
f = above.f_back
line = f.f_lineno
file = f.f_code.co_filename
func = f.f_code.co_name
depth = 3
if parent:
depth = 4
file, line, func, text = traceback.extract_stack(limit = depth)[0]
loginfo['file'] = file
loginfo['line'] = line
if func not in loginfo:
@@ -97,7 +88,6 @@ class VariableParse:
self.references = set()
self.execs = set()
self.contains = {}
def var_sub(self, match):
key = match.group()[2:-1]
@@ -108,7 +98,7 @@ class VariableParse:
varparse = self.d.expand_cache[key]
var = varparse.value
else:
var = self.d.getVarFlag(key, "_content")
var = self.d.getVar(key, True)
self.references.add(key)
if var is not None:
return var
@@ -116,21 +106,13 @@ class VariableParse:
return match.group()
def python_sub(self, match):
if isinstance(match, str):
code = match
else:
code = match.group()[3:-1]
if "_remote_data" in self.d:
connector = self.d["_remote_data"]
return connector.expandPythonRef(self.varname, code, self.d)
code = match.group()[3:-1]
codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")
parser = bb.codeparser.PythonParser(self.varname, logger)
parser.parse_python(code)
if self.varname:
vardeps = self.d.getVarFlag(self.varname, "vardeps")
vardeps = self.d.getVarFlag(self.varname, "vardeps", True)
if vardeps is None:
parser.log.flush()
else:
@@ -138,12 +120,7 @@ class VariableParse:
self.references |= parser.references
self.execs |= parser.execs
for k in parser.contains:
if k not in self.contains:
self.contains[k] = parser.contains[k].copy()
else:
self.contains[k].update(parser.contains[k])
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
value = utils.better_eval(codeobj, DataContext(self.d))
return str(value)
@@ -154,8 +131,8 @@ class DataContext(dict):
self['d'] = metadata
def __missing__(self, key):
value = self.metadata.getVar(key)
if value is None or self.metadata.getVarFlag(key, 'func', False):
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func'):
raise KeyError(key)
else:
return value
@@ -230,19 +207,6 @@ class VariableHistory(object):
new.variables = self.variables.copy()
return new
def __getstate__(self):
vardict = {}
for k, v in self.variables.iteritems():
vardict[k] = v
return {'dataroot': self.dataroot,
'variables': vardict}
def __setstate__(self, state):
self.dataroot = state['dataroot']
self.variables = COWDictBase.copy()
for k, v in state['variables'].items():
self.variables[k] = v
def record(self, *kwonly, **loginfo):
if not self.dataroot._tracking:
return
@@ -261,37 +225,16 @@ class VariableHistory(object):
if var not in self.variables:
self.variables[var] = []
if not isinstance(self.variables[var], list):
return
if 'nodups' in loginfo and loginfo in self.variables[var]:
return
self.variables[var].append(loginfo.copy())
def variable(self, var):
remote_connector = self.dataroot.getVar('_remote_data', False)
if remote_connector:
varhistory = remote_connector.getVarHistory(var)
else:
varhistory = []
if var in self.variables:
varhistory.extend(self.variables[var])
return varhistory
return self.variables[var]
else:
return []
def emit(self, var, oval, val, o, d):
def emit(self, var, oval, val, o):
history = self.variable(var)
# Append override history
if var in d.overridedata:
for (r, override) in d.overridedata[var]:
for event in self.variable(r):
loginfo = event.copy()
if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
continue
loginfo['variable'] = var
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
history.append(loginfo)
commentVal = re.sub('\n', '\n#', str(oval))
if history:
if len(history) == 1:
@@ -314,7 +257,7 @@ class VariableHistory(object):
flag = ''
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
if len(history) > 1:
o.write("# pre-expansion value:\n")
o.write("# computed:\n")
o.write('# "%s"\n' % (commentVal))
else:
o.write("#\n# $%s\n# [no history recorded]\n#\n" % var)
@@ -338,31 +281,6 @@ class VariableHistory(object):
lines.append(line)
return lines
def get_variable_items_files(self, var, d):
"""
Use variable history to map items added to a list variable and
the files in which they were added.
"""
history = self.variable(var)
finalitems = (d.getVar(var) or '').split()
filemap = {}
isset = False
for event in history:
if 'flag' in event:
continue
if event['op'] == '_remove':
continue
if isset and event['op'] == 'set?':
continue
isset = True
items = d.expand(event['detail']).split()
for item in items:
# This is a little crude but is belt-and-braces to avoid us
# having to handle every possible operation type specifically
if item in finalitems and not item in filemap:
filemap[item] = event['file']
return filemap
def del_var_history(self, var, f=None, line=None):
"""If file f and line are not given, the entire history of var is deleted"""
if var in self.variables:
@@ -372,23 +290,18 @@ class VariableHistory(object):
self.variables[var] = []
class DataSmart(MutableMapping):
def __init__(self):
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
self.dict = {}
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
self.expand_cache = {}
# cookie monster tribute
# Need to be careful about writes to overridedata as
# its only a shallow copy, could influence other data store
# copies!
self.overridedata = {}
self.overrides = None
self.overridevars = set(["OVERRIDES", "FILE"])
self.inoverride = False
self._special_values = special
self._seen_overrides = seen
self.expand_cache = {}
def enableTracking(self):
self._tracking = True
@@ -398,7 +311,7 @@ class DataSmart(MutableMapping):
def expandWithRefs(self, s, varname):
if not isinstance(s, str): # sanity check
if not isinstance(s, basestring): # sanity check
return VariableParse(varname, self, s)
if varname and varname in self.expand_cache:
@@ -410,20 +323,15 @@ class DataSmart(MutableMapping):
olds = s
try:
s = __expand_var_regexp__.sub(varparse.var_sub, s)
try:
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal":
raise
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
except ExpansionError:
raise
except bb.parse.SkipRecipe:
except bb.parse.SkipPackage:
raise
except Exception as exc:
raise ExpansionError(varname, s, exc) from exc
raise ExpansionError(varname, s, exc)
varparse.value = s
@@ -435,34 +343,97 @@ class DataSmart(MutableMapping):
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
def finalize(self, parent = False):
return
def internal_finalize(self, parent = False):
"""Performs final steps upon the datastore, including application of overrides"""
self.overrides = None
def need_overrides(self):
if self.overrides is not None:
return
if self.inoverride:
return
for count in range(5):
self.inoverride = True
# Can end up here recursively so setup dummy values
self.overrides = []
self.overridesset = set()
self.overrides = (self.getVar("OVERRIDES") or "").split(":") or []
self.overridesset = set(self.overrides)
self.inoverride = False
self.expand_cache = {}
newoverrides = (self.getVar("OVERRIDES") or "").split(":") or []
if newoverrides == self.overrides:
break
self.overrides = newoverrides
self.overridesset = set(self.overrides)
else:
bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
finalize_caller = {
'op': 'finalize',
}
infer_caller_details(finalize_caller, parent = parent, varval = False)
#
# Well let us see what breaks here. We used to iterate
# over each variable and apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# where in some order that is so important for this
# method which we don't have anymore.
# Anyway we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend and store the _remove
# information for later.
#
# We only want to report finalization once per variable overridden.
finalizes_reported = {}
for o in overrides:
# calculate '_'+override
l = len(o) + 1
# see if one should even try
if o not in self._seen_overrides:
continue
vars = self._seen_overrides[o].copy()
for var in vars:
name = var[:-l]
try:
# Report only once, even if multiple changes.
if name not in finalizes_reported:
finalizes_reported[name] = True
finalize_caller['variable'] = name
finalize_caller['detail'] = 'was: ' + str(self.getVar(name, False))
self.varhistory.record(**finalize_caller)
# Copy history of the override over.
for event in self.varhistory.variable(var):
loginfo = event.copy()
loginfo['variable'] = name
loginfo['op'] = 'override[%s]:%s' % (o, loginfo['op'])
self.varhistory.record(**loginfo)
self.setVar(name, self.getVar(var, False), op = 'finalize', file = 'override[%s]' % o, line = '')
self.delVar(var)
except Exception:
logger.info("Untracked delVar")
# now on to the appends and prepends, and stashing the removes
for op in __setvar_keyword__:
if op in self._special_values:
appends = self._special_values[op] or []
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
match = True
if o:
for o2 in o.split("_"):
if not o2 in overrides:
match = False
if not match:
keep.append((a ,o))
continue
if op == "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op == "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)
elif op == "_remove":
removes = self.getVarFlag(append, "_removeactive", False) or []
removes.extend(a.split())
self.setVarFlag(append, "_removeactive", removes, ignore=True)
# We save overrides that may be applied at some later stage
if keep:
self.setVarFlag(append, op, keep, ignore=True)
else:
self.delVarFlag(append, op, ignore=True)
def initVar(self, var):
self.expand_cache = {}
@@ -473,22 +444,17 @@ class DataSmart(MutableMapping):
dest = self.dict
while dest:
if var in dest:
return dest[var], self.overridedata.get(var, None)
if "_remote_data" in dest:
connector = dest["_remote_data"]["_content"]
return connector.getVar(var)
return dest[var]
if "_data" not in dest:
break
dest = dest["_data"]
return None, self.overridedata.get(var, None)
def _makeShadowCopy(self, var):
if var in self.dict:
return
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if local_var:
self.dict[var] = copy.copy(local_var)
@@ -498,16 +464,6 @@ class DataSmart(MutableMapping):
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
parsing=False
if 'parsing' in loginfo:
parsing=True
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVar(var, value)
if not res:
return
if 'op' not in loginfo:
loginfo['op'] = "set"
self.expand_cache = {}
@@ -516,7 +472,7 @@ class DataSmart(MutableMapping):
base = match.group('base')
keyword = match.group("keyword")
override = match.group('add')
l = self.getVarFlag(base, keyword, False) or []
l = self.getVarFlag(base, keyword) or []
l.append([value, override])
self.setVarFlag(base, keyword, l, ignore=True)
# And cause that to be recorded:
@@ -529,119 +485,60 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
# todo make sure keyword is not __doc__ or __module__
# pay the cookie monster
try:
self._special_values[keyword].add(base)
except KeyError:
self._special_values[keyword] = set()
self._special_values[keyword].add(base)
# more cookies for the cookie monster
if '_' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
self._setvar_update_overridevars(var, value)
return
if not var in self.dict:
self._makeShadowCopy(var)
if not parsing:
if "_append" in self.dict[var]:
del self.dict[var]["_append"]
if "_prepend" in self.dict[var]:
del self.dict[var]["_prepend"]
if "_remove" in self.dict[var]:
del self.dict[var]["_remove"]
if var in self.overridedata:
active = []
self.need_overrides()
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
active.append(r)
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active.append(r)
for a in active:
self.delVar(a)
del self.overridedata[var]
# more cookies for the cookie monster
if '_' in var:
self._setvar_update_overrides(var, **loginfo)
self._setvar_update_overrides(var)
# setting var
self.dict[var]["_content"] = value
self.varhistory.record(**loginfo)
if var in self.overridevars:
self._setvar_update_overridevars(var, value)
def _setvar_update_overridevars(self, var, value):
vardata = self.expandWithRefs(value, var)
new = vardata.references
new.update(vardata.contains.keys())
while not new.issubset(self.overridevars):
nextnew = set()
self.overridevars.update(new)
for i in new:
vardata = self.expandWithRefs(self.getVar(i), i)
nextnew.update(vardata.references)
nextnew.update(vardata.contains.keys())
new = nextnew
self.internal_finalize(True)
def _setvar_update_overrides(self, var, **loginfo):
def _setvar_update_overrides(self, var):
# aka pay the cookie monster
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
if [var, override] not in self.overridedata[shortvar]:
# Force CoW by recreating the list first
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].append([var, override])
override = None
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
if len(override) > 0:
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
def getVar(self, var, expand=True, noweakdefault=False, parsing=False):
return self.getVarFlag(var, "_content", expand, noweakdefault, parsing)
def getVar(self, var, expand=False, noweakdefault=False):
return self.getVarFlag(var, "_content", expand, noweakdefault)
def renameVar(self, key, newkey, **loginfo):
"""
Rename the variable key to newkey
"""
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.renameVar(key, newkey)
if not res:
return
val = self.getVar(key, 0, parsing=True)
val = self.getVar(key, 0)
if val is not None:
loginfo['variable'] = newkey
loginfo['op'] = 'rename from %s' % key
loginfo['detail'] = val
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
self.setVar(newkey, val, ignore=True)
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
src = self.getVarFlag(key, i)
if src is None:
continue
dest = self.getVarFlag(newkey, i, False) or []
dest = self.getVarFlag(newkey, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest, ignore=True)
if key in self.overridedata:
self.overridedata[newkey] = []
for (v, o) in self.overridedata[key]:
self.overridedata[newkey].append([v.replace(key, newkey), o])
self.renameVar(v, v.replace(key, newkey))
if '_' in newkey and val is None:
self._setvar_update_overrides(newkey, **loginfo)
if i in self._special_values and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
loginfo['variable'] = key
loginfo['op'] = 'rename (to)'
@@ -652,53 +549,27 @@ class DataSmart(MutableMapping):
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
self.setVar(var + "_append", value, ignore=True, parsing=True)
newvalue = (self.getVar(var, False) or "") + value
self.setVar(var, newvalue, ignore=True)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
newvalue = value + (self.getVar(var, False) or "")
self.setVar(var, newvalue, ignore=True)
def delVar(self, var, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVar(var)
if not res:
return
loginfo['detail'] = ""
loginfo['op'] = 'del'
self.varhistory.record(**loginfo)
self.expand_cache = {}
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
if '_' in var:
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].remove([var, override])
except ValueError as e:
pass
override = None
if "_" in shortvar:
override = var[shortvar.rfind('_')+1:]
shortvar = var[:shortvar.rfind('_')]
if len(shortvar) == 0:
override = None
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)
def setVarFlag(self, var, flag, value, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVarFlag(var, flag, value)
if not res:
return
self.expand_cache = {}
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -707,10 +578,8 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = value
if flag == "_defaultval" and '_' in var:
self._setvar_update_overrides(var, **loginfo)
if flag == "_defaultval" and var in self.overridevars:
self._setvar_update_overridevars(var, value)
if flag == "defaultval" and '_' in var:
self._setvar_update_overrides(var)
if flag == "unexport" or flag == "export":
if not "__exportlist" in self.dict:
@@ -719,71 +588,14 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"] = set()
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False):
local_var, overridedata = self._findVar(var)
def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
local_var = self._findVar(var)
value = None
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
self.need_overrides()
for (r, o) in overridedata:
# What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
active[o] = r
mod = True
while mod:
mod = False
for o in self.overrides:
for a in active.copy():
if a.endswith("_" + o):
t = active[a]
del active[a]
active[a.replace("_" + o, "")] = t
mod = True
elif a == o:
match = active[a]
del active[a]
if match:
value = self.getVar(match, False)
if local_var is not None and value is None:
if local_var is not None:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "_content" and "_defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["_defaultval"])
if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var["_append"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
value = value + r
if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
if not value:
value = ""
self.need_overrides()
for (r, o) in local_var["_prepend"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
value = r + value
elif flag == "_content" and "defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["defaultval"])
if expand and value:
# Only getvar (flag == _content) hits the expand cache
cachename = None
@@ -792,38 +604,14 @@ class DataSmart(MutableMapping):
else:
cachename = var + "[" + flag + "]"
value = self.expand(value, cachename)
if value and flag == "_content" and local_var is not None and "_remove" in local_var:
removes = []
self.need_overrides()
for (r, o) in local_var["_remove"]:
match = True
if o:
for o2 in o.split("_"):
if not o2 in self.overrides:
match = False
if match:
removes.extend(self.expand(r).split())
if removes:
filtered = filter(lambda v: v not in removes,
value.split())
value = " ".join(filtered)
if expand and var in self.expand_cache:
# We need to ensure the expand cache has the correct value
# flag == "_content" here
self.expand_cache[var].value = value
if value is not None and flag == "_content" and local_var is not None and "_removeactive" in local_var:
filtered = filter(lambda v: v not in local_var["_removeactive"],
value.split(" "))
value = " ".join(filtered)
return value
def delVarFlag(self, var, flag, **loginfo):
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVarFlag(var, flag)
if not res:
return
self.expand_cache = {}
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
if not local_var:
return
if not var in self.dict:
@@ -852,7 +640,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def setVarFlags(self, var, flags, **loginfo):
self.expand_cache = {}
infer_caller_details(loginfo)
if not var in self.dict:
self._makeShadowCopy(var)
@@ -866,7 +653,7 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
local_var, _ = self._findVar(var)
local_var = self._findVar(var)
flags = {}
if local_var:
@@ -882,7 +669,6 @@ class DataSmart(MutableMapping):
def delVarFlags(self, var, **loginfo):
self.expand_cache = {}
if not var in self.dict:
self._makeShadowCopy(var)
@@ -900,25 +686,20 @@ class DataSmart(MutableMapping):
else:
del self.dict[var]
def createCopy(self):
"""
Create a copy of self by setting _data to self
"""
# we really want this to be a DataSmart...
data = DataSmart()
data = DataSmart(seen=self._seen_overrides.copy(), special=self._special_values.copy())
data.dict["_data"] = self.dict
data.varhistory = self.varhistory.copy()
data.varhistory.dataroot = data
data.varhistory.datasmart = data
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
# Should really be a deepcopy but has heavy overhead.
# Instead, we're careful with writes.
data.overridedata = copy.copy(self.overridedata)
return data
def expandVarref(self, variable, parents=False):
@@ -939,55 +720,29 @@ class DataSmart(MutableMapping):
def localkeys(self):
for key in self.dict:
if key not in ['_data', '_remote_data']:
if key != '_data':
yield key
def __iter__(self):
deleted = set()
overrides = set()
def keylist(d):
klist = set()
for key in d:
if key in ["_data", "_remote_data"]:
continue
if key in deleted:
continue
if key in overrides:
if key == "_data":
continue
if not d[key]:
deleted.add(key)
continue
klist.add(key)
if "_data" in d:
klist |= keylist(d["_data"])
if "_remote_data" in d:
connector = d["_remote_data"]["_content"]
for key in connector.getKeys():
if key in deleted:
continue
klist.add(key)
return klist
self.need_overrides()
for var in self.overridedata:
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
overrides.add(var)
elif "_" in o:
if set(o.split("_")).issubset(self.overridesset):
overrides.add(var)
for k in keylist(self.dict):
yield k
for k in overrides:
yield k
def __len__(self):
return len(frozenset(iter(self)))
return len(frozenset(self))
def __getitem__(self, item):
value = self.getVar(item, False)
@@ -1006,8 +761,9 @@ class DataSmart(MutableMapping):
data = {}
d = self.createCopy()
bb.data.expandKeys(d)
bb.data.update_data(d)
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
if key in config_whitelist:
@@ -1026,12 +782,13 @@ class DataSmart(MutableMapping):
for key in ["__BBTASKS", "__BBANONFUNCS", "__BBHANDLERS"]:
bb_list = d.getVar(key, False) or []
bb_list.sort()
data.update({key:str(bb_list)})
if key == "__BBANONFUNCS":
for i in bb_list:
value = d.getVar(i, False) or ""
value = d.getVar(i, True) or ""
data.update({i:value})
data_str = str([(k, data[k]) for k in sorted(data.keys())])
return hashlib.md5(data_str.encode("utf-8")).hexdigest()
return hashlib.md5(data_str).hexdigest()
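A large part of this file's diff is where the OVERRIDES, _append, _prepend and _remove handling lives: the dora code applies them at finalize time via _seen_overrides and _special_values, while the newer code resolves them lazily inside getVarFlag. A self-contained sketch of the basic idea follows; it is a deliberate simplification for illustration, not the DataSmart code, and ignores override ordering and removal handling.

# Simplified sketch of OVERRIDES plus conditional appends, as discussed above.
OVERRIDES = ["arm", "mydistro"]

store = {
    "CFLAGS": "-O2",
    "CFLAGS_arm": "-O2 -mthumb",        # conditional override: wins on arm
    "CFLAGS_append_mydistro": " -g",    # conditional append
}

def resolve(var):
    value = store.get(var)
    # apply a matching conditional override, if present
    for o in OVERRIDES:
        if var + "_" + o in store:
            value = store[var + "_" + o]
    # then apply any matching conditional appends
    for o in OVERRIDES:
        key = var + "_append_" + o
        if key in store:
            value = (value or "") + store[key]
    return value

print(resolve("CFLAGS"))   # -> "-O2 -mthumb -g"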


@@ -24,13 +24,13 @@ BitBake build tools.
import os, sys
import warnings
import pickle
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import atexit
import traceback
import ast
import threading
import bb.utils
import bb.compat
import bb.exceptions
@@ -48,16 +48,6 @@ class Event(object):
def __init__(self):
self.pid = worker_pid
class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occassionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
Registered = 10
AlreadyRegistered = 14
@@ -65,7 +55,6 @@ def get_class_handlers():
return _handlers
def set_class_handlers(h):
global _handlers
_handlers = h
def clean_class_handlers():
@@ -78,33 +67,12 @@ _ui_logfilters = {}
_ui_handler_seq = 0
_event_handler_map = {}
_catchall_handlers = {}
_eventfilter = None
_uiready = False
_thread_lock = threading.Lock()
_thread_lock_enabled = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def enable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = True
def disable_threadlock():
global _thread_lock_enabled
_thread_lock_enabled = False
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
addedd = True
try:
ret = handler(event)
except (bb.parse.SkipRecipe, bb.BBHandledException):
except bb.parse.SkipPackage:
raise
except Exception:
etype, value, tb = sys.exc_info()
@@ -117,8 +85,6 @@ def execute_handler(name, handler, event, d):
raise
finally:
del event.data
if addedd:
del builtins['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -126,12 +92,12 @@ def fire_class_handlers(event, d):
eid = str(event.__class__)[8:-2]
evt_hmap = _event_handler_map.get(eid, {})
for name, handler in list(_handlers.items()):
for name, handler in _handlers.iteritems():
if name in _catchall_handlers or name in evt_hmap:
if _eventfilter:
if not _eventfilter(name, handler, event, d):
continue
execute_handler(name, handler, event, d)
try:
execute_handler(name, handler, event, d)
except Exception:
continue
ui_queue = []
@atexit.register
@@ -139,46 +105,33 @@ def print_ui_queue():
"""If we're exiting before a UI has been spawned, display any queued
LogRecords to the console."""
logger = logging.getLogger("BitBake")
if not _uiready:
if not _ui_handlers:
from bb.msg import BBLogFormatter
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
stdout.setFormatter(formatter)
stderr.setFormatter(formatter)
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
# First check to see if we have any proper messages
msgprint = False
for event in ui_queue[:]:
for event in ui_queue:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
if event.levelno >= logging.WARNING:
logger.addHandler(stderr)
else:
logger.addHandler(stdout)
logger.handle(event)
msgprint = True
if msgprint:
return
# Nope, so just print all of the messages we have (including debug messages)
logger.addHandler(stdout)
for event in ui_queue[:]:
for event in ui_queue:
if isinstance(event, logging.LogRecord):
logger.handle(event)
def fire_ui_handlers(event, d):
global _thread_lock
global _thread_lock_enabled
if not _uiready:
if not _ui_handlers:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
if _thread_lock_enabled:
_thread_lock.acquire()
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -197,9 +150,6 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]
if _thread_lock_enabled:
_thread_lock.release()
def fire(event, d):
"""Fire off an Event"""
@@ -218,7 +168,7 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None):
def register(name, handler, mask=[]):
"""Register an Event handler"""
# already registered
@@ -227,18 +177,10 @@ def register(name, handler, mask=None, filename=None, lineno=None):
if handler is not None:
# handle string containing python code
if isinstance(handler, str):
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
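The register() hunk above shows how a handler supplied as a string is wrapped in "def <name>(e): ..." and compiled into a callable (the newer code additionally caches the compiled code and records file and line information). A hypothetical stand-alone sketch of that string-to-function step; the real bb.event.register also handles event masks and duplicate registration.

# Sketch of turning a string handler body into a callable, as register() does
# above.  Stand-alone illustration only; not the bb.event implementation.
_handlers = {}

def register(name, handler):
    if isinstance(handler, str):
        body = "".join("    %s\n" % l for l in handler.splitlines())
        tmp = "def %s(e):\n%s" % (name, body)
        code = compile(tmp, "%s(e)" % name, "exec")
        env = {}
        exec(code, env)
        handler = env[name]
    _handlers[name] = handler

register("my_handler", "print('saw event', e)")
_handlers["my_handler"]("ConfigParsed")   # -> saw event ConfigParsed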
@@ -265,21 +207,7 @@ def remove(name, handler):
"""Remove an Event handler"""
_handlers.pop(name)
def get_handlers():
return _handlers
def set_handlers(handlers):
global _handlers
_handlers = handlers
def set_eventfilter(func):
global _eventfilter
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
if mainui:
global _uiready
_uiready = True
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
@@ -361,17 +289,6 @@ class RecipeEvent(Event):
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finialised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
Recipe Tasks about to be finalised
The list of tasks should be final at this point and handlers
are only able to change interdependencies
"""
def __init__(self, fn, tasklist):
self.fn = fn
self.tasklist = tasklist
Event.__init__(self)
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
@@ -393,7 +310,7 @@ class StampUpdate(Event):
targets = property(getTargets)
class BuildBase(Event):
"""Base class for bitbake build events"""
"""Base class for bbmake run events"""
def __init__(self, n, p, failures = 0):
self._name = n
@@ -431,26 +348,21 @@ class BuildBase(Event):
class BuildInit(BuildBase):
"""buildFile or buildTargets was invoked"""
def __init__(self, p=[]):
name = None
BuildBase.__init__(self, name, p)
class BuildStarted(BuildBase, OperationStarted):
"""Event when builds start"""
"""bbmake build run started"""
def __init__(self, n, p, failures = 0):
OperationStarted.__init__(self, "Building Started")
BuildBase.__init__(self, n, p, failures)
class BuildCompleted(BuildBase, OperationCompleted):
"""Event when builds have completed"""
def __init__(self, total, n, p, failures=0, interrupted=0):
"""bbmake build run completed"""
def __init__(self, total, n, p, failures = 0):
if not failures:
OperationCompleted.__init__(self, total, "Building Succeeded")
else:
OperationCompleted.__init__(self, total, "Building Failed")
self._interrupted = interrupted
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
@@ -462,27 +374,10 @@ class DiskFull(Event):
self._free = freespace
self._mountpoint = mountpoint
class DiskUsageSample:
def __init__(self, available_bytes, free_bytes, total_bytes):
# Number of bytes available to non-root processes.
self.available_bytes = available_bytes
# Number of bytes available to root processes.
self.free_bytes = free_bytes
# Total capacity of the volume.
self.total_bytes = total_bytes
class MonitorDiskEvent(Event):
"""If BB_DISKMON_DIRS is set, then this event gets triggered each time disk space is checked.
Provides information about devices that are getting monitored."""
def __init__(self, disk_usage):
Event.__init__(self)
# hash of device root path -> DiskUsageSample
self.disk_usage = disk_usage
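Because MonitorDiskEvent.disk_usage is a plain dict mapping each monitored root path to a DiskUsageSample, a handler can act on the samples directly. A hedged sketch, with the path, threshold and function name purely illustrative:

class DiskUsageSample:                # mirrors the class introduced above
    def __init__(self, available_bytes, free_bytes, total_bytes):
        self.available_bytes = available_bytes
        self.free_bytes = free_bytes
        self.total_bytes = total_bytes

def warn_on_low_space(disk_usage, threshold=1024 ** 3):
    # disk_usage: what a MonitorDiskEvent would carry in its .disk_usage attribute
    for path, sample in disk_usage.items():
        if sample.available_bytes < threshold:
            print("warning: %s has %d of %d bytes available"
                  % (path, sample.available_bytes, sample.total_bytes))

warn_on_low_space({"/srv/build": DiskUsageSample(512 * 1024 ** 2,
                                                 600 * 1024 ** 2,
                                                 100 * 1024 ** 3)})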
class NoProvider(Event):
"""No Provider for an Event"""
def __init__(self, item, runtime=False, dependees=None, reasons=None, close_matches=None):
def __init__(self, item, runtime=False, dependees=None, reasons=[], close_matches=[]):
Event.__init__(self)
self._item = item
self._runtime = runtime
@@ -596,16 +491,6 @@ class TargetsTreeGenerated(Event):
Event.__init__(self)
self._model = model
class ReachableStamps(Event):
"""
An event listing all stamps reachable after parsing
which the metadata may use to clean up stale data
"""
def __init__(self, stamps):
Event.__init__(self)
self.stamps = stamps
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
@@ -683,10 +568,7 @@ class LogHandler(logging.Handler):
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.exc_info = None
fire(record, None)
@@ -715,46 +597,16 @@ class MetadataEvent(Event):
def __init__(self, eventtype, eventdata):
Event.__init__(self)
self.type = eventtype
self._localdata = eventdata
class ProcessStarted(Event):
"""
Generic process started event (usually part of the initial startup)
where further progress events will be delivered
"""
def __init__(self, processname, total):
Event.__init__(self)
self.processname = processname
self.total = total
class ProcessProgress(Event):
"""
Generic process progress event (usually part of the initial startup)
"""
def __init__(self, processname, progress):
Event.__init__(self)
self.processname = processname
self.progress = progress
class ProcessFinished(Event):
"""
Generic process finished event (usually part of the initial startup)
"""
def __init__(self, processname):
Event.__init__(self)
self.processname = processname
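The ProcessStarted/ProcessProgress/ProcessFinished trio added here is the generic channel a long-running setup step uses to report progress to whichever UI is attached. A sketch of the expected firing order, assuming the usual datastore d is available and with the step name purely illustrative:

import bb.event

def scan_with_progress(items, d):
    bb.event.fire(bb.event.ProcessStarted("Example scan", len(items)), d)
    for done, item in enumerate(items, start=1):
        # ... per-item work would go here ...
        bb.event.fire(bb.event.ProcessProgress("Example scan", done), d)
    bb.event.fire(bb.event.ProcessFinished("Example scan"), d)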
self.data = eventdata
class SanityCheck(Event):
"""
Event to run sanity checks, either raise errors or generate events as return status.
Event to issue sanity check
"""
def __init__(self, generateevents = True):
Event.__init__(self)
self.generateevents = generateevents
class SanityCheckPassed(Event):
"""
Event to indicate sanity check has passed
Event to indicate sanity check is passed
"""
class SanityCheckFailed(Event):
@@ -768,11 +620,8 @@ class SanityCheckFailed(Event):
class NetworkTest(Event):
"""
Event to run network connectivity tests, either raise errors or generate events as return status.
Event to start network test
"""
def __init__(self, generateevents = True):
Event.__init__(self)
self.generateevents = generateevents
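Both SanityCheck and NetworkTest now carry a generateevents flag: when it is set, the receiving handler is expected to answer with the corresponding passed/failed events rather than raising. A minimal sketch of requesting the network test, assuming the usual datastore d:

import bb.event

def request_network_test(d):
    # Handlers should reply with NetworkTestPassed (or its failure counterpart)
    # instead of raising, because generateevents is set.
    bb.event.fire(bb.event.NetworkTest(generateevents=True), d)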
class NetworkTestPassed(Event):
"""

View File

@@ -1,4 +1,4 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
@@ -86,6 +86,6 @@ def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, str):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)

File diff suppressed because it is too large

View File

@@ -27,13 +27,14 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Bzr(FetchMethod):
def supports(self, ud, d):
def supports(self, url, ud, d):
return ud.type in ['bzr']
def urldata_init(self, ud, d):
@@ -42,14 +43,14 @@ class Bzr(FetchMethod):
"""
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.expand('${BZRDIR}'), ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
ud.setup_revisions(d)
ud.setup_revisons(d)
if not ud.revision:
ud.revision = self.latest_revision(ud, d)
ud.revision = self.latest_revision(ud.url, ud, d)
ud.localfile = d.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildbzrcommand(self, ud, d, command):
"""
@@ -57,7 +58,7 @@ class Bzr(FetchMethod):
command is "fetch", "update", "revno"
"""
basecmd = d.expand('${FETCHCMD_bzr}')
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('protocol', 'http')
@@ -80,47 +81,50 @@ class Bzr(FetchMethod):
return bzrcmd
def download(self, ud, d):
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", ud.url)
logger.debug(1, "BZR Update %s", loc)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.bzr' --exclude='.bzrtags'"
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)),
d, cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _revision_key(self, ud, d, name):
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "bzr:" + ud.pkgdir
def _latest_revision(self, ud, d, name):
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
logger.debug(2, "BZR fetcher hitting network for %s", url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)
@@ -128,12 +132,12 @@ class Bzr(FetchMethod):
return output.strip()
def sortable_revision(self, ud, d, name):
def sortable_revision(self, url, ud, d, name):
"""
Return a sortable revision number which in our case is the revision number
"""
return False, self._build_revision(ud, d)
return False, self._build_revision(url, ud, d)
def _build_revision(self, ud, d):
def _build_revision(self, url, ud, d):
return ud.revision

View File

@@ -1,260 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' clearcase implementation
The clearcase fetcher is used to retrieve files from a ClearCase repository.
Usage in the recipe:
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
The fetcher uses the rcleartool or cleartool remote client, depending on which one is available.
Supported SRC_URI options are:
- vob
(required) The name of the clearcase VOB (with prepending "/")
- module
The module in the selected VOB (with prepending "/")
The module and vob parameters are combined to create
the following load rule in the view config spec:
load <vob><module>
- proto
http or https
Related variables:
CCASE_CUSTOM_CONFIG_SPEC
Write a config spec to this variable in your recipe to use it instead
of the default config spec generated by this fetcher.
Please note that the SRCREV loses its functionality if you specify
this variable. SRCREV is still used to label the archive after a fetch,
but it doesn't define what's fetched.
User credentials:
cleartool:
The login of cleartool is handled by the system. No special steps needed.
rcleartool:
In order to use rcleartool with authenticated users an `rcleartool login` is
necessary before using the fetcher.
"""
# Copyright (C) 2014 Siemens AG
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import os
import sys
import shutil
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from distutils import spawn
class ClearCase(FetchMethod):
"""Class to fetch urls via 'clearcase'"""
def init(self, d):
pass
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with Clearcase.
"""
return ud.type in ['ccrc']
def debug(self, msg):
logger.debug(1, "ClearCase: %s", msg)
def urldata_init(self, ud, d):
"""
init ClearCase specific variable within url data
"""
ud.proto = "https"
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
if not ud.proto in ('http', 'https'):
raise fetch2.ParameterError("Invalid protocol type", ud.url)
ud.vob = ''
if 'vob' in ud.parm:
ud.vob = ud.parm['vob']
else:
msg = ud.url+": vob must be defined so the fetcher knows what to get."
raise MissingParameterError('vob', msg)
if 'module' in ud.parm:
ud.module = ud.parm['module']
else:
ud.module = ""
ud.basecmd = d.getVar("FETCHCMD_ccrc") or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
if d.getVar("SRCREV") == "INVALID":
raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")
ud.label = d.getVar("SRCREV", False)
ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC")
ud.server = "%s://%s%s" % (ud.proto, ud.host, ud.path)
ud.identifier = "clearcase-%s%s-%s" % ( ud.vob.replace("/", ""),
ud.module.replace("/", "."),
ud.label.replace("/", "."))
ud.viewname = "%s-view%s" % (ud.identifier, d.getVar("DATETIME", d, True))
ud.csname = "%s-config-spec" % (ud.identifier)
ud.ccasedir = os.path.join(d.getVar("DL_DIR"), ud.type)
ud.viewdir = os.path.join(ud.ccasedir, ud.viewname)
ud.configspecfile = os.path.join(ud.ccasedir, ud.csname)
ud.localfile = "%s.tar.gz" % (ud.identifier)
self.debug("host = %s" % ud.host)
self.debug("path = %s" % ud.path)
self.debug("server = %s" % ud.server)
self.debug("proto = %s" % ud.proto)
self.debug("type = %s" % ud.type)
self.debug("vob = %s" % ud.vob)
self.debug("module = %s" % ud.module)
self.debug("basecmd = %s" % ud.basecmd)
self.debug("label = %s" % ud.label)
self.debug("ccasedir = %s" % ud.ccasedir)
self.debug("viewdir = %s" % ud.viewdir)
self.debug("viewname = %s" % ud.viewname)
self.debug("configspecfile = %s" % ud.configspecfile)
self.debug("localfile = %s" % ud.localfile)
ud.localfile = os.path.join(d.getVar("DL_DIR"), ud.localfile)
def _build_ccase_command(self, ud, command):
"""
Build up a commandline based on ud
command is: mkview, setcs, rmview
"""
options = []
if "rcleartool" in ud.basecmd:
options.append("-server %s" % ud.server)
basecmd = "%s %s" % (ud.basecmd, command)
if command is 'mkview':
if not "rcleartool" in ud.basecmd:
# Cleartool needs a -snapshot view
options.append("-snapshot")
options.append("-tag %s" % ud.viewname)
options.append(ud.viewdir)
elif command is 'rmview':
options.append("-force")
options.append("%s" % ud.viewdir)
elif command is 'setcs':
options.append("-overwrite")
options.append(ud.configspecfile)
else:
raise FetchError("Invalid ccase command %s" % command)
ccasecmd = "%s %s" % (basecmd, " ".join(options))
self.debug("ccasecmd = %s" % ccasecmd)
return ccasecmd
def _write_configspec(self, ud, d):
"""
Create config spec file (ud.configspecfile) for ccase view
"""
config_spec = ""
custom_config_spec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC", d)
if custom_config_spec is not None:
for line in custom_config_spec.split("\\n"):
config_spec += line+"\n"
bb.warn("A custom config spec has been set, SRCREV is only relevant for the tarball name.")
else:
config_spec += "element * CHECKEDOUT\n"
config_spec += "element * %s\n" % ud.label
config_spec += "load %s%s\n" % (ud.vob, ud.module)
logger.info("Using config spec: \n%s" % config_spec)
with open(ud.configspecfile, 'w') as f:
f.write(config_spec)
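As a worked example of the spec generated above: for the docstring's sample URL (vob=/example_vob, module=/example_module) with SRCREV = "EXAMPLE_CLEARCASE_TAG" and no CCASE_CUSTOM_CONFIG_SPEC set, the file written to ud.configspecfile would contain:

element * CHECKEDOUT
element * EXAMPLE_CLEARCASE_TAG
load /example_vob/example_module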
def _remove_view(self, ud, d):
if os.path.exists(ud.viewdir):
cmd = self._build_ccase_command(ud, 'rmview');
logger.info("cleaning up [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.ccasedir)
logger.info("rmview output: %s", output)
def need_update(self, ud, d):
if ("LATEST" in ud.label) or (ud.customspec and "LATEST" in ud.customspec):
ud.identifier += "-%s" % d.getVar("DATETIME",d, True)
return True
if os.path.exists(ud.localpath):
return False
return True
def supports_srcrev(self):
return True
def sortable_revision(self, ud, d, name):
return False, ud.identifier
def download(self, ud, d):
"""Fetch url"""
# Make a fresh view
bb.utils.mkdirhier(ud.ccasedir)
self._write_configspec(ud, d)
cmd = self._build_ccase_command(ud, 'mkview')
logger.info("creating view [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
try:
runfetchcmd(cmd, d)
except FetchError as e:
if "CRCLI2008E" in e.msg:
raise FetchError("%s\n%s\n" % (e.msg, "Call `rcleartool login` in your console to authenticate to the clearcase server before running bitbake."))
else:
raise e
# Set configspec: Setting the configspec effectively fetches the files as defined in the configspec
cmd = self._build_ccase_command(ud, 'setcs');
logger.info("fetching data [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d, workdir=ud.viewdir)
logger.info("%s", output)
# Copy the configspec to the viewdir so we have it in our source tarball later
shutil.copyfile(ud.configspecfile, os.path.join(ud.viewdir, ud.csname))
# Clean clearcase meta-data before tar
runfetchcmd('tar -czf "%s" .' % (ud.localpath), d, cleanup = [ud.localpath])
# Clean up so we can create a new view next time
self.clean(ud, d);
def clean(self, ud, d):
self._remove_view(ud, d)
bb.utils.remove(ud.configspecfile)

View File

@@ -36,7 +36,7 @@ class Cvs(FetchMethod):
"""
Class to fetch a module or modules from cvs repositories
"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with cvs.
"""
@@ -63,16 +63,16 @@ class Cvs(FetchMethod):
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = d.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath))
ud.localfile = bb.data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, ud, d):
def need_update(self, url, ud, d):
if (ud.date == "now"):
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, ud, d):
def download(self, loc, ud, d):
method = ud.parm.get('method', 'pserver')
localdir = ud.parm.get('localdir', ud.module)
@@ -87,10 +87,10 @@ class Cvs(FetchMethod):
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = d.getVar('CVS_PROXY_HOST')
cvsproxyhost = d.getVar('CVS_PROXY_HOST', True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = d.getVar('CVS_PROXY_PORT')
cvsproxyport = d.getVar('CVS_PROXY_PORT', True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
@@ -110,7 +110,7 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
cvsbasecmd = d.getVar("FETCHCMD_cvs")
cvsbasecmd = d.getVar("FETCHCMD_cvs", True)
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
@@ -120,26 +120,25 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar('CVSDIR'), pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + ud.url)
logger.info("Update " + loc)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
workdir = moddir
os.chdir(moddir)
cmd = cvsupdatecmd
else:
logger.info("Fetch " + ud.url)
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
workdir = pkgdir
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup=[moddir], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [moddir])
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite sucessful fetch?!" % moddir, ud.url)
@@ -148,24 +147,24 @@ class Cvs(FetchMethod):
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='CVS'"
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
workdir = None
if 'fullpath' in ud.parm:
workdir = pkgdir
os.chdir(pkgdir)
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
workdir = os.path.dirname(os.path.realpath(moddir))
os.chdir(moddir)
os.chdir('..')
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup=[ud.localpath], workdir=workdir)
runfetchcmd(cmd, d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = d.getVar('PN')
pkgdir = os.path.join(d.getVar("CVSDIR"), pkg)
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar("CVSDIR", True), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)

View File

@@ -44,15 +44,6 @@ Supported SRC_URI options are:
checkout code and tracking branch requirements.
The default is "0", set bareclone=1 if needed.
- nobranch
Don't check the SHA validation for branch. set this option for the recipe
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
"""
#Copyright (C) 2005 Richard Purdie
@@ -70,66 +61,19 @@ Supported SRC_URI options are:
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import collections
import errno
import fnmatch
import os
import re
import subprocess
import tempfile
import bb
import bb.progress
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
self._buffer = ''
self._count = 0
super(GitProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(-1)
def write(self, string):
self._buffer += string
stages = ['Counting objects', 'Compressing objects', 'Receiving objects', 'Resolving deltas']
stage_weights = [0.2, 0.05, 0.5, 0.25]
stagenum = 0
for i, stage in reversed(list(enumerate(stages))):
if stage in self._buffer:
stagenum = i
self._buffer = ''
break
self._status = stages[stagenum]
percs = re.findall(r'(\d+)%', string)
if percs:
progress = int(round((int(percs[-1]) * stage_weights[stagenum]) + (sum(stage_weights[:stagenum]) * 100)))
rates = re.findall(r'([\d.]+ [a-zA-Z]*/s+)', string)
if rates:
rate = rates[-1]
else:
rate = None
self.update(progress, rate)
else:
if stagenum == 0:
percs = re.findall(r': (\d+)', string)
if percs:
count = int(percs[-1])
if count > self._count:
self._count = count
self._fire_progress(-count)
super(GitProgressHandler, self).write(string)
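To make the weighting concrete: the four stages carry weights 0.2, 0.05, 0.5 and 0.25, so a "Receiving objects: 50%" line from git maps to an overall value of 50, and that stage tops out at 75 before "Resolving deltas" fills the rest. The same arithmetic as a small standalone sketch:

stage_weights = [0.2, 0.05, 0.5, 0.25]   # Counting, Compressing, Receiving, Resolving

def overall_progress(stagenum, stage_percent):
    # Same formula as GitProgressHandler.write(): weight the current stage's
    # percentage and add the full weight of every earlier stage.
    return int(round(stage_percent * stage_weights[stagenum]
                     + sum(stage_weights[:stagenum]) * 100))

print(overall_progress(2, 50))    # 50  (halfway through "Receiving objects")
print(overall_progress(2, 100))   # 75
print(overall_progress(3, 100))   # 100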
class Git(FetchMethod):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
pass
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
@@ -157,98 +101,33 @@ class Git(FetchMethod):
ud.rebaseable = ud.parm.get("rebaseable","0") == "1"
ud.nobranch = ud.parm.get("nobranch","0") == "1"
# usehead implies nobranch
ud.usehead = ud.parm.get("usehead","0") == "1"
if ud.usehead:
if ud.proto != "file":
raise bb.fetch2.ParameterError("The usehead option is only for use with local ('protocol=file') git repositories", ud.url)
ud.nobranch = 1
# bareclone implies nocheckout
ud.bareclone = ud.parm.get("bareclone","0") == "1"
if ud.bareclone:
ud.nocheckout = 1
ud.unresolvedrev = {}
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.cloneflags = "-s -n"
if ud.bareclone:
ud.cloneflags += " --mirror"
ud.shallow = d.getVar("BB_GIT_SHALLOW") == "1"
ud.shallow_extra_refs = (d.getVar("BB_GIT_SHALLOW_EXTRA_REFS") or "").split()
depth_default = d.getVar("BB_GIT_SHALLOW_DEPTH")
if depth_default is not None:
try:
depth_default = int(depth_default or 0)
except ValueError:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
else:
if depth_default < 0:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
else:
depth_default = 1
ud.shallow_depths = collections.defaultdict(lambda: depth_default)
revs_default = d.getVar("BB_GIT_SHALLOW_REVS", True)
ud.shallow_revs = []
ud.branches = {}
for pos, name in enumerate(ud.names):
branch = branches[pos]
for name in ud.names:
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
ud.unresolvedrev[name] = branch
shallow_depth = d.getVar("BB_GIT_SHALLOW_DEPTH_%s" % name)
if shallow_depth is not None:
try:
shallow_depth = int(shallow_depth or 0)
except ValueError:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
else:
if shallow_depth < 0:
raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
ud.shallow_depths[name] = shallow_depth
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
revs = d.getVar("BB_GIT_SHALLOW_REVS_%s" % name)
if revs is not None:
ud.shallow_revs.extend(revs.split())
elif revs_default is not None:
ud.shallow_revs.extend(revs_default.split())
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
if (ud.shallow and
not ud.shallow_revs and
all(ud.shallow_depths[n] == 0 for n in ud.names)):
# Shallow disabled for this URL
ud.shallow = False
if ud.usehead:
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
ud.write_shallow_tarballs = (d.getVar("BB_GENERATE_SHALLOW_TARBALLS") or write_tarballs) != "0"
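The shallow support introduced above is driven purely by datastore variables: BB_GIT_SHALLOW enables it, BB_GIT_SHALLOW_DEPTH (or a per-name BB_GIT_SHALLOW_DEPTH_<name>) bounds the history, BB_GIT_SHALLOW_REVS lists cut-off revisions, and BB_GENERATE_SHALLOW_TARBALLS decides whether the shallow mirror tarball is written. A hedged sketch of setting them programmatically on a datastore d (the same assignments would normally live in a configuration file); the name "meta" and the values are illustrative:

def enable_shallow_mirrors(d):
    d.setVar("BB_GIT_SHALLOW", "1")                 # turn shallow handling on
    d.setVar("BB_GIT_SHALLOW_DEPTH", "1")           # default: one commit of history
    d.setVar("BB_GENERATE_SHALLOW_TARBALLS", "1")   # write gitshallow_*.tar.gz mirrors
    d.setVar("BB_GIT_SHALLOW_DEPTH_meta", "10")     # deeper history for the name "meta"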
ud.setup_revisions(d)
ud.setup_revisons(d)
for name in ud.names:
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.').replace('*', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
@@ -256,209 +135,104 @@ class Git(FetchMethod):
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
dl_dir = d.getVar("DL_DIR")
gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/")
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
gitdir = d.getVar("GITDIR", True) or (d.getVar("DL_DIR", True) + "/git2/")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.localfile = ud.clonedir
mirrortarball = 'git2_%s.tar.gz' % gitsrcname
ud.fullmirror = os.path.join(dl_dir, mirrortarball)
ud.mirrortarballs = [mirrortarball]
if ud.shallow:
tarballname = gitsrcname
if ud.bareclone:
tarballname = "%s_bare" % tarballname
if ud.shallow_revs:
tarballname = "%s_%s" % (tarballname, "_".join(sorted(ud.shallow_revs)))
for name, revision in sorted(ud.revisions.items()):
tarballname = "%s_%s" % (tarballname, ud.revisions[name][:7])
depth = ud.shallow_depths[name]
if depth:
tarballname = "%s-%s" % (tarballname, depth)
shallow_refs = []
if not ud.nobranch:
shallow_refs.extend(ud.branches.values())
if ud.shallow_extra_refs:
shallow_refs.extend(r.replace('refs/heads/', '').replace('*', 'ALL') for r in ud.shallow_extra_refs)
if shallow_refs:
tarballname = "%s_%s" % (tarballname, "_".join(sorted(shallow_refs)).replace('/', '.'))
fetcher = self.__class__.__name__.lower()
ud.shallowtarball = '%sshallow_%s.tar.gz' % (fetcher, tarballname)
ud.fullshallow = os.path.join(dl_dir, ud.shallowtarball)
ud.mirrortarballs.insert(0, ud.shallowtarball)
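The shallow tarball name assembled above encodes everything that affects its contents: the gitsrcname, each revision's first seven characters, any per-name depth, and the refs that were kept. A worked sketch with illustrative host, path, revision, depth 1 and branch master:

host, path = "git.example.com", "/repo.git"          # illustrative values
gitsrcname = host.replace(':', '.') + path.replace('/', '.')
tarballname = gitsrcname
tarballname += "_%s" % "1a2b3c41deadbeef"[:7]        # first 7 chars of the revision
tarballname += "-%s" % 1                             # shallow depth for that name
tarballname += "_%s" % "master"                      # branch ref kept in the tarball
print("gitshallow_%s.tar.gz" % tarballname)
# gitshallow_git.example.com.repo.git_1a2b3c4-1_master.tar.gz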
def localpath(self, ud, d):
def localpath(self, url, ud, d):
return ud.clonedir
def need_update(self, ud, d):
def need_update(self, u, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud.revisions[name], d):
return True
if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
return False
def try_premirror(self, ud, d):
def try_premirror(self, u, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.clonedir):
return False
return True
def download(self, ud, d):
def download(self, loc, ud, d):
"""Fetch url"""
no_clone = not os.path.exists(ud.clonedir)
need_update = no_clone or self.need_update(ud, d)
if ud.user:
username = ud.user + '@'
else:
username = ""
# A current clone is preferred to either tarball, a shallow tarball is
# preferred to an out of date clone, and a missing clone will use
# either tarball.
if ud.shallow and os.path.exists(ud.fullshallow) and need_update:
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and no_clone:
ud.repochanged = not os.path.exists(ud.fullmirror)
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
repourl = self._get_repo_url(ud)
repourl = "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, repourl, ud.clonedir)
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(clone_cmd, d, log=progresshandler)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
if not self._contains_ref(ud.revisions[name], d):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(fetch_cmd, d, log=progresshandler, workdir=ud.clonedir)
runfetchcmd("%s prune-packed" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d, workdir=ud.clonedir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
def build_mirror_data(self, ud, d):
if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
os.unlink(ud.fullshallow)
tempdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
shallowclone = os.path.join(tempdir, 'git')
try:
self.clone_shallow_local(ud, shallowclone, d)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
elif ud.write_tarballs and not os.path.exists(ud.fullmirror):
def build_mirror_data(self, url, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
"""Clone the repo and make it shallow.
The upstream url of the new clone isn't set at this time, as it'll be
set correctly when unpacked."""
runfetchcmd("%s clone %s %s %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, dest), d)
to_parse, shallow_branches = [], []
for name in ud.names:
revision = ud.revisions[name]
depth = ud.shallow_depths[name]
if depth:
to_parse.append('%s~%d^{}' % (revision, depth - 1))
# For nobranch, we need a ref, otherwise the commits will be
# removed, and for non-nobranch, we truncate the branch to our
# srcrev, to avoid keeping unnecessary history beyond that.
branch = ud.branches[name]
if ud.nobranch:
ref = "refs/shallow/%s" % name
elif ud.bareclone:
ref = "refs/heads/%s" % branch
else:
ref = "refs/remotes/origin/%s" % branch
shallow_branches.append(ref)
runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
# Map srcrev+depths to revisions
parsed_depths = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join(to_parse)), d, workdir=dest)
# Resolve specified revisions
parsed_revs = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join('"%s^{}"' % r for r in ud.shallow_revs)), d, workdir=dest)
shallow_revisions = parsed_depths.splitlines() + parsed_revs.splitlines()
# Apply extra ref wildcards
all_refs = runfetchcmd('%s for-each-ref "--format=%%(refname)"' % ud.basecmd,
d, workdir=dest).splitlines()
for r in ud.shallow_extra_refs:
if not ud.bareclone:
r = r.replace('refs/heads/', 'refs/remotes/origin/')
if '*' in r:
matches = filter(lambda a: fnmatch.fnmatchcase(a, r), all_refs)
shallow_branches.extend(matches)
else:
shallow_branches.append(r)
# Make the repository shallow
shallow_cmd = ['git', 'make-shallow', '-s']
for b in shallow_branches:
shallow_cmd.append('-r')
shallow_cmd.append(b)
shallow_cmd.extend(shallow_revisions)
runfetchcmd(subprocess.list2cmdline(shallow_cmd), d, workdir=dest)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
readpathspec = ":%s" % (subdir)
def_destsuffix = "%s/" % os.path.basename(subdir)
else:
readpathspec = ""
def_destsuffix = "git/"
@@ -468,28 +242,34 @@ class Git(FetchMethod):
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)):
bb.utils.mkdirhier(destdir)
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
else:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
cloneflags = "-s -n"
if ud.bareclone:
cloneflags += " --mirror"
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
# Versions of git prior to 1.7.9.2 have issues where foo.git and foo get confused
# and you end up with some horrible union of the two when you attempt to clone it
# The least invasive workaround seems to be a symlink to the real directory to
# fool git into ignoring any .git version that may also be present.
#
# The issue is fixed in more recent versions of git so we can drop this hack in future
# when that version becomes common enough.
clonedir = ud.clonedir
if not ud.path.endswith(".git"):
indirectiondir = destdir[:-1] + ".indirectionsymlink"
if os.path.exists(indirectiondir):
os.remove(indirectiondir)
bb.utils.mkdirhier(os.path.dirname(indirectiondir))
os.symlink(ud.clonedir, indirectiondir)
clonedir = indirectiondir
runfetchcmd("git clone %s %s/ %s" % (cloneflags, clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
elif not ud.nobranch:
branchname = ud.branches[ud.names[0]]
runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
ud.revisions[ud.names[0]]), d, workdir=destdir)
runfetchcmd("%s branch --set-upstream %s origin/%s" % (ud.basecmd, branchname, \
branchname), d, workdir=destdir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=destdir)
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
return True
def clean(self, ud, d):
@@ -497,163 +277,50 @@ class Git(FetchMethod):
bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
bb.utils.remove(ud.fullmirror + ".done")
def supports_srcrev(self):
return True
def _contains_ref(self, ud, d, name, wd):
cmd = ""
if ud.nobranch:
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (
ud.basecmd, ud.revisions[name])
else:
cmd = "%s branch --contains %s --list %s 2> /dev/null | wc -l" % (
ud.basecmd, ud.revisions[name], ud.branches[name])
try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
except bb.fetch2.FetchError:
return False
def _contains_ref(self, tag, d):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag)
output = runfetchcmd(cmd, d, quiet=True)
if len(output.split()) > 1:
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _get_repo_url(self, ud):
def _revision_key(self, url, ud, d, name):
"""
Return the repository URL
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.') + ud.branches[name]
def _latest_revision(self, url, ud, d, name):
"""
Compute the HEAD revision for the url
"""
if ud.user:
username = ud.user + '@'
else:
username = ""
return "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)
def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.') + ud.unresolvedrev[name]
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s refs/heads/%s refs/tags/%s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name], ud.branches[name])
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
return output.split()[0]
def _lsremote(self, ud, d, search):
"""
Run git ls-remote with the specified search string
"""
# Prevent recursion e.g. in OE if SRCPV is in PV, PV is in WORKDIR,
# and WORKDIR is in PATH (as a result of RSS), our call to
# runfetchcmd() exports PATH so this function will get called again (!)
# In this scenario the return call of the function isn't actually
# important - WORKDIR isn't needed in PATH to call git ls-remote
# anyway.
if d.getVar('_BB_GIT_IN_LSREMOTE', False):
return ''
d.setVar('_BB_GIT_IN_LSREMOTE', '1')
try:
repourl = self._get_repo_url(ud)
cmd = "%s ls-remote %s %s" % \
(ud.basecmd, repourl, search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd, repourl)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, ud.url)
finally:
d.delVar('_BB_GIT_IN_LSREMOTE')
return output
def _latest_revision(self, ud, d, name):
"""
Compute the HEAD revision for the url
"""
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
head = ud.unresolvedrev[name]
tag = ud.unresolvedrev[name]
else:
head = "refs/heads/%s" % ud.unresolvedrev[name]
tag = "refs/tags/%s" % ud.unresolvedrev[name]
for s in [head, tag + "^{}", tag]:
for l in output.strip().split('\n'):
sha1, ref = l.split()
if s == ref:
return sha1
raise bb.fetch2.FetchError("Unable to resolve '%s' in upstream git repository in git ls-remote output for %s" % \
(ud.unresolvedrev[name], ud.host+ud.path))
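The resolution above walks the ls-remote output looking for, in order, the branch head refs/heads/<rev>, the peeled tag refs/tags/<rev>^{}, then the plain tag (or the literal ref when the rev already starts with refs/ or usehead is set). A worked sketch against a hypothetical two-line ls-remote output:

def resolve(output, unresolved):
    # Same search order as _latest_revision(): branch head, peeled tag, plain tag
    head = "refs/heads/%s" % unresolved
    tag = "refs/tags/%s" % unresolved
    for want in [head, tag + "^{}", tag]:
        for line in output.strip().split('\n'):
            sha1, ref = line.split()
            if ref == want:
                return sha1
    raise ValueError("unable to resolve %s" % unresolved)

output = ("1111111111111111111111111111111111111111\trefs/heads/master\n"
          "2222222222222222222222222222222222222222\trefs/tags/v1.0^{}\n")
print(resolve(output, "v1.0"))   # 2222... -> the peeled tag wins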
def latest_versionstring(self, ud, d):
"""
Compute the latest release name like "x.y.x" in "x.y.x+gitHASH"
by searching through the tags output of ls-remote, comparing
versions and returning the highest match.
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
return pupver
verstring = ""
revision = ""
for line in output.split("\n"):
if not line:
break
tag_head = line.split("/")[-1]
# Ignore non-released branches
m = re.search("(alpha|beta|rc|final)+", tag_head)
if m:
continue
# search for version in the line
tag = tagregex.search(tag_head)
if tag == None:
continue
tag = tag.group('pver')
tag = tag.replace("_", ".")
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
continue
verstring = tag
revision = line.split()[0]
pupver = (verstring, revision)
return pupver
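With the default UPSTREAM_CHECK_GITTAGREGEX of (?P<pver>([0-9][\.|_]?)+), an ls-remote tag line ending in refs/tags/v1_2_3 yields pver "1_2_3", which is normalised to "1.2.3" before bb.utils.vercmp() decides whether it beats the best version seen so far. A quick sketch of that extraction:

import re

tagregex = re.compile(r"(?P<pver>([0-9][\.|_]?)+)")   # the default shown above
line = "2222222222222222222222222222222222222222\trefs/tags/v1_2_3"
tag_head = line.split("/")[-1]                        # "v1_2_3"
m = tagregex.search(tag_head)
if m:
    print(m.group('pver').replace("_", "."))          # "1.2.3"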
def _build_revision(self, ud, d, name):
def _build_revision(self, url, ud, d, name):
return ud.revisions[name]
def gitpkgv_revision(self, ud, d, name):
"""
Return a sortable revision number by counting commits in the history
Based on gitpkgv.bbclass in meta-openembedded
"""
rev = self._build_revision(ud, d, name)
localpath = ud.localpath
rev_file = os.path.join(localpath, "oe-gitpkgv_" + rev)
if not os.path.exists(localpath):
commits = None
else:
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
from pipes import quote
commits = bb.fetch2.runfetchcmd(
"git rev-list %s -- | wc -l" % quote(rev),
d, quiet=True).strip().lstrip('0')
if commits:
open(rev_file, "w").write("%d\n" % int(commits))
else:
commits = open(rev_file, "r").readline(128).strip()
if commits:
return False, "%s+%s" % (commits, rev[:7])
else:
return True, str(rev)
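So for a clone whose history up to revision 1a2b3c4... counts, say, 147 commits, gitpkgv_revision() hands back the sortable string "147+1a2b3c4", caching the count in an oe-gitpkgv_<rev> file beside the clone so the rev-list walk only runs once. The formatting step in isolation (values illustrative):

commits, rev = "147", "1a2b3c41deadbeef0000000000000000deadbeef"
print("%s+%s" % (commits, rev[:7]))   # 147+1a2b3c4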
def checkstatus(self, fetch, ud, d):
def checkstatus(self, uri, ud, d):
fetchcmd = "%s ls-remote %s" % (ud.basecmd, uri)
try:
self._lsremote(ud, d, "")
runfetchcmd(fetchcmd, d, quiet=True)
return True
except bb.fetch2.FetchError:
except FetchError:
return False

View File

@@ -1,91 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git annex implementation
"""
# Copyright (C) 2014 Otavio Salvador
# Copyright (C) 2014 O.S. Systems Software LTDA.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitANNEX(Git):
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['gitannex']
def urldata_init(self, ud, d):
super(GitANNEX, self).urldata_init(ud, d)
if ud.shallow:
ud.shallow_extra_refs += ['refs/heads/git-annex', 'refs/heads/synced/*']
def uses_annex(self, ud, d, wd):
for name in ud.names:
try:
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True, workdir=wd)
return True
except bb.fetch.FetchError:
pass
return False
def update_annex(self, ud, d, wd):
try:
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True, workdir=wd)
except bb.fetch.FetchError:
return False
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True, workdir=wd)
return True
def download(self, ud, d):
Git.download(self, ud, d)
if not ud.shallow or ud.localpath != ud.fullshallow:
if self.uses_annex(ud, d, ud.clonedir):
self.update_annex(ud, d, ud.clonedir)
def clone_shallow_local(self, ud, dest, d):
super(GitANNEX, self).clone_shallow_local(ud, dest, d)
try:
runfetchcmd("%s annex init" % ud.basecmd, d, workdir=dest)
except bb.fetch.FetchError:
pass
if self.uses_annex(ud, d, dest):
runfetchcmd("%s annex get" % ud.basecmd, d, workdir=dest)
runfetchcmd("chmod u+w -R %s/.git/annex" % (dest), d, quiet=True, workdir=dest)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
try:
runfetchcmd("%s annex init" % (ud.basecmd), d, workdir=ud.destdir)
except bb.fetch.FetchError:
pass
annex = self.uses_annex(ud, d, ud.destdir)
if annex:
runfetchcmd("%s annex get" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True, workdir=ud.destdir)

View File

@@ -2,16 +2,6 @@
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git submodules implementation
Inherits from and extends the Git fetcher to retrieve submodules of a git repository
after cloning.
SRC_URI = "gitsm://<see Git fetcher for syntax>"
See the Git fetcher, git://, for usage documentation.
NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your recipe.
"""
# Copyright (C) 2013 Richard Purdie
@@ -31,74 +21,28 @@ NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your r
import os
import bb
from bb import data
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class GitSM(Git):
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['gitsm']
def uses_submodules(self, ud, d, wd):
def uses_submodules(self, ud, d):
for name in ud.names:
try:
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd)
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
return True
except bb.fetch.FetchError:
pass
return False
def _set_relative_paths(self, repopath):
"""
Fix submodule paths to be relative instead of absolute,
so that when we move the repo it doesn't break
(In Git 1.7.10+ this is done automatically)
"""
submodules = []
with open(os.path.join(repopath, '.gitmodules'), 'r') as f:
for line in f.readlines():
if line.startswith('[submodule'):
submodules.append(line.split('"')[1])
for module in submodules:
repo_conf = os.path.join(repopath, module, '.git')
if os.path.exists(repo_conf):
with open(repo_conf, 'r') as f:
lines = f.readlines()
newpath = ''
for i, line in enumerate(lines):
if line.startswith('gitdir:'):
oldpath = line.split(': ')[-1].rstrip()
if oldpath.startswith('/'):
newpath = '../' * (module.count('/') + 1) + '.git/modules/' + module
lines[i] = 'gitdir: %s\n' % newpath
break
if newpath:
with open(repo_conf, 'w') as f:
for line in lines:
f.write(line)
repo_conf2 = os.path.join(repopath, '.git', 'modules', module, 'config')
if os.path.exists(repo_conf2):
with open(repo_conf2, 'r') as f:
lines = f.readlines()
newpath = ''
for i, line in enumerate(lines):
if line.lstrip().startswith('worktree = '):
oldpath = line.split(' = ')[-1].rstrip()
if oldpath.startswith('/'):
newpath = '../' * (module.count('/') + 3) + module
lines[i] = '\tworktree = %s\n' % newpath
break
if newpath:
with open(repo_conf2, 'w') as f:
for line in lines:
f.write(line)
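The relocation above rewrites absolute gitdir/worktree paths into relative ones so the temporary full clone still works after it is renamed back over the bare clone. For a hypothetical submodule registered as ext/lib, module.count('/') is 1, which yields the two relative paths computed below:

module = "ext/lib"                                           # hypothetical submodule path
gitdir_rel = '../' * (module.count('/') + 1) + '.git/modules/' + module
worktree_rel = '../' * (module.count('/') + 3) + module
print(gitdir_rel)     # ../../.git/modules/ext/lib
print(worktree_rel)   # ../../../../ext/lib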
def update_submodules(self, ud, d):
def update_submodules(self, u, ud, d):
# We have to convert bare -> full repo, do the submodule bit, then convert back
tmpclonedir = ud.clonedir + ".tmp"
gitdir = tmpclonedir + os.sep + ".git"
@@ -106,30 +50,29 @@ class GitSM(Git):
os.mkdir(tmpclonedir)
os.rename(ud.clonedir, gitdir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " checkout -f " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=tmpclonedir)
self._set_relative_paths(tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
os.chdir(tmpclonedir)
runfetchcmd("git reset --hard", d)
runfetchcmd("git submodule init", d)
runfetchcmd("git submodule update", d)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
def download(self, ud, d):
Git.download(self, ud, d)
def download(self, loc, ud, d):
Git.download(self, loc, ud, d)
if not ud.shallow or ud.localpath != ud.fullshallow:
submodules = self.uses_submodules(ud, d, ud.clonedir)
if submodules:
self.update_submodules(ud, d)
def clone_shallow_local(self, ud, dest, d):
super(GitSM, self).clone_shallow_local(ud, dest, d)
runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d)
os.chdir(ud.clonedir)
submodules = self.uses_submodules(ud, d)
if submodules:
self.update_submodules(loc, ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
submodules = self.uses_submodules(ud, d)
if submodules:
runfetchcmd("cp -r " + ud.clonedir + "/modules " + ud.destdir + "/.git/", d)
runfetchcmd("git submodule init", d)
runfetchcmd("git submodule update", d)
if self.uses_submodules(ud, d, ud.destdir):
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)

View File

@@ -28,7 +28,7 @@ import os
import sys
import logging
import bb
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -37,19 +37,12 @@ from bb.fetch2 import logger
class Hg(FetchMethod):
"""Class to fetch from mercurial repositories"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def supports_checksum(self, urldata):
"""
Don't require checksums for local archives created from
repository checkouts.
"""
return False
def urldata_init(self, ud, d):
"""
init hg specific variable within url data
@@ -59,36 +52,21 @@ class Hg(FetchMethod):
ud.module = ud.parm["module"]
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "hg"
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
elif not ud.revision:
ud.revision = self.latest_revision(ud, d)
ud.revision = self.latest_revision(ud.url, ud, d)
# Create paths to mercurial checkouts
hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
ud.host, ud.path.replace('/', '.'))
mirrortarball = 'hg_%s.tar.gz' % hgsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
ud.mirrortarballs = [mirrortarball]
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.localfile = ud.moddir
ud.basecmd = d.getVar("FETCHCMD_hg") or "/usr/bin/env hg"
ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS")
def need_update(self, ud, d):
def need_update(self, url, ud, d):
revTag = ud.parm.get('rev', 'tip')
if revTag == "tip":
return True
@@ -96,21 +74,14 @@ class Hg(FetchMethod):
return True
return False
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY") is not None:
return True
if os.path.exists(ud.moddir):
return False
return True
def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('protocol', 'http')
host = ud.host
@@ -127,7 +98,7 @@ class Hg(FetchMethod):
hgroot = ud.user + "@" + host + ud.path
if command == "info":
return "%s identify -i %s://%s/%s" % (ud.basecmd, proto, hgroot, ud.module)
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = [];
@@ -139,132 +110,78 @@ class Hg(FetchMethod):
options.append("-r %s" % ud.revision)
if command == "fetch":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" clone %s %s://%s/%s %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options), proto, hgroot, ud.module, ud.module)
else:
cmd = "%s clone %s %s://%s/%s %s" % (ud.basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (ud.basecmd, ud.user, ud.pswd, proto)
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (basecmd, ud.user, ud.pswd, proto)
else:
cmd = "%s pull" % (ud.basecmd)
cmd = "%s pull" % (basecmd)
elif command == "update":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" update -C %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options))
else:
cmd = "%s update -C %s" % (ud.basecmd, " ".join(options))
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)
return cmd
def download(self, ud, d):
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d, workdir=ud.pkgdir)
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
# Running pull in the repo
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
bb.fetch2.check_network_access(d, updatecmd, ud.url)
runfetchcmd(updatecmd, d)
# No source found, clone it.
if not os.path.exists(ud.moddir):
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
runfetchcmd(updatecmd, d)
def clean(self, ud, d):
""" Clean the hg dir """
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
bb.utils.remove(ud.fullmirror + ".done")
os.chdir(ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _latest_revision(self, ud, d, name):
def _latest_revision(self, url, ud, d, name):
"""
Compute tip revision for the url
"""
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"), ud.url)
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
def _build_revision(self, ud, d, name):
def _build_revision(self, url, ud, d, name):
return ud.revision
def _revision_key(self, ud, d, name):
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "hg:" + ud.moddir
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs == "1" and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
logger.info("Creating tarball of hg repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d, workdir=ud.pkgdir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=ud.pkgdir)
def localpath(self, ud, d):
return ud.pkgdir
def unpack(self, ud, destdir, d):
"""
Make a local clone or export for the url
"""
revflag = "-r %s" % ud.revision
subdir = ud.parm.get("destsuffix", ud.module)
codir = "%s/%s" % (destdir, subdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata != "nokeep":
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d, workdir=codir)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)


@@ -26,14 +26,15 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import urllib.request, urllib.parse, urllib.error
import urllib
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod, FetchError
from bb.fetch2 import logger
class Local(FetchMethod):
def supports(self, urldata, d):
def supports(self, url, urldata, d):
"""
Check to see if a given url represents a local fetch.
"""
@@ -41,74 +42,70 @@ class Local(FetchMethod):
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.decodedurl = urllib.parse.unquote(ud.url.split("://")[1].split(";")[0])
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.needdonestamp = False
return
def localpath(self, urldata, d):
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
return self.localpaths(urldata, d)[-1]
def localpaths(self, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
searched = []
path = urldata.decodedurl
newpath = path
if path[0] == "/":
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug(2, "Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath, hist = bb.utils.which(filespath, ".", history=True)
searched.extend(hist)
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return searched
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
logger.debug(2, "Searching for %s in paths: \n%s" % (path, "\n ".join(filespath.split(":"))))
newpath = bb.utils.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
logger.debug(2, "Searching for %s in path: %s" % (path, filesdir))
newpath = os.path.join(filesdir, path)
if (not newpath or not os.path.exists(newpath)) and path.find("*") != -1:
# For expressions using '*', best we can do is take the first directory in FILESPATH that exists
newpath = bb.utils.which(filespath, ".")
logger.debug(2, "Searching for %s in path: %s" % (path, newpath))
return newpath
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR", True), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
return dldirfile
return newpath
def need_update(self, ud, d):
if ud.url.find("*") != -1:
def need_update(self, url, ud, d):
if url.find("*") != -1:
return False
if os.path.exists(ud.localpath):
return False
return True
def download(self, urldata, d):
def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
if self.supports_checksum(urldata) and not os.path.exists(urldata.localpath):
locations = []
filespath = d.getVar('FILESPATH')
filespath = data.getVar('FILESPATH', d, True)
if filespath:
locations = filespath.split(":")
locations.append(d.getVar("DL_DIR"))
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
locations.append(filesdir)
locations.append(d.getVar("DL_DIR", True))
msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
msg = "Unable to find file " + url + " anywhere. The paths that were searched were:\n " + "\n ".join(locations)
raise FetchError(msg)
return True
def checkstatus(self, fetch, urldata, d):
def checkstatus(self, url, urldata, d):
"""
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", urldata.url)
logger.info("URL %s looks like a glob and was therefore not checked.", url)
return True
if os.path.exists(urldata.localpath):
return True
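The localpaths() logic above resolves a relative file:// path by walking the colon-separated FILESPATH list and falling back to a DL_DIR location when nothing matches. A standalone approximation of that search order; the function name and sample paths are invented, and the glob and history handling are left out:

import os

def resolve_local(path, filespath, dl_dir):
    # Absolute paths win outright; otherwise try each FILESPATH entry in
    # order, with DL_DIR as the last resort.
    if path.startswith("/"):
        return path
    for entry in filespath.split(":"):
        candidate = os.path.join(entry, path)
        if os.path.exists(candidate):
            return candidate
    return os.path.join(dl_dir, path)

print(resolve_local("defconfig", "/tmp/recipe/files:/tmp/recipe", "/tmp/downloads"))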


@@ -1,306 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' NPM implementation
The NPM fetcher is used to retrieve files from the npmjs repository
Usage in the recipe:
SRC_URI = "npm://registry.npmjs.org/;name=${PN};version=${PV}"
Supported SRC_URI options are:
- name
- version
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};version=${PV}
The fetcher triggers off the existence of ud.localpath. If that exists and has the ".done" stamp, it's assumed the fetch is good/done
"""
import os
import sys
import urllib.request, urllib.parse, urllib.error
import json
import subprocess
import signal
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import ChecksumError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import UnpackError
from bb.fetch2 import ParameterError
from distutils import spawn
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
class Npm(FetchMethod):
"""Class to fetch urls via 'npm'"""
def init(self, d):
pass
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with npm
"""
return ud.type in ['npm']
def debug(self, msg):
logger.debug(1, "NpmFetch: %s", msg)
def clean(self, ud, d):
logger.debug(2, "Calling cleanup %s" % ud.pkgname)
bb.utils.remove(ud.localpath, False)
bb.utils.remove(ud.pkgdatadir, True)
bb.utils.remove(ud.fullmirror, False)
def urldata_init(self, ud, d):
"""
init NPM specific variable within url data
"""
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
# can't call it ud.name otherwise the fetcher base class will start doing sha1 stuff
# TODO: find a way to get an sha1/sha256 manifest of pkg & all deps
ud.pkgname = ud.parm.get("name", None)
if not ud.pkgname:
raise ParameterError("NPM fetcher requires a name parameter", ud.url)
ud.version = ud.parm.get("version", None)
if not ud.version:
raise ParameterError("NPM fetcher requires a version parameter", ud.url)
ud.bbnpmmanifest = "%s-%s.deps.json" % (ud.pkgname, ud.version)
ud.bbnpmmanifest = ud.bbnpmmanifest.replace('/', '-')
ud.registry = "http://%s" % (ud.url.replace('npm://', '', 1).split(';'))[0]
prefixdir = "npm/%s" % ud.pkgname
ud.pkgdatadir = d.expand("${DL_DIR}/%s" % prefixdir)
if not os.path.exists(ud.pkgdatadir):
bb.utils.mkdirhier(ud.pkgdatadir)
ud.localpath = d.expand("${DL_DIR}/npm/%s" % ud.bbnpmmanifest)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
ud.prefixdir = prefixdir
ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0")
mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
mirrortarball = mirrortarball.replace('/', '-')
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
ud.mirrortarballs = [mirrortarball]
def need_update(self, ud, d):
if os.path.exists(ud.localpath):
return False
return True
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
dldir = d.getVar("DL_DIR")
runfetchcmd(command, d, quiet, workdir=dldir)
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
logger.debug(2, "file to extract is %s" % file)
if file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = 'tar xz --strip 1 --no-same-owner --warning=no-unknown-keyword -f %s/%s' % (dldir, file)
else:
bb.fatal("NPM package %s downloaded not a tarball!" % file)
# Change to subdir before executing command
if not os.path.exists(destdir):
os.makedirs(destdir)
path = d.getVar('PATH')
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, destdir))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True, cwd=destdir)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
if 'deps' not in data[pkg]:
return
for dep in data[pkg]['deps']:
self._unpackdep(ud, dep, data[pkg]['deps'], "%s/node_modules/%s" % (destdir, dep), dldir, d)
def unpack(self, ud, destdir, d):
dldir = d.getVar("DL_DIR")
with open("%s/npm/%s" % (dldir, ud.bbnpmmanifest)) as datafile:
workobj = json.load(datafile)
dldir = "%s/%s" % (os.path.dirname(ud.localpath), ud.pkgname)
if 'subdir' in ud.parm:
unpackdir = '%s/%s' % (destdir, ud.parm.get('subdir'))
else:
unpackdir = '%s/npmpkg' % destdir
self._unpackdep(ud, ud.pkgname, workobj, unpackdir, dldir, d)
def _parse_view(self, output):
'''
Parse the output of npm view --json; the last JSON result
is assumed to be the one that we're interested in.
'''
pdata = None
outdeps = {}
datalines = []
bracelevel = 0
for line in output.splitlines():
if bracelevel:
datalines.append(line)
elif '{' in line:
datalines = []
datalines.append(line)
bracelevel = bracelevel + line.count('{') - line.count('}')
if datalines:
pdata = json.loads('\n'.join(datalines))
return pdata
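_parse_view() above keeps only the last brace-balanced JSON document found in the output of 'npm view --json'. The same brace-counting idea as a self-contained function, exercised on an invented sample of such output:

import json

def parse_last_json(output):
    datalines = []
    bracelevel = 0
    for line in output.splitlines():
        # Collect lines while inside an object; restart collection whenever a
        # new top-level '{' appears, so only the final document survives.
        if bracelevel:
            datalines.append(line)
        elif '{' in line:
            datalines = [line]
        bracelevel += line.count('{') - line.count('}')
    return json.loads('\n'.join(datalines)) if datalines else None

sample = 'npm WARN deprecated something\n{\n  "name": "left-pad",\n  "version": "1.3.0"\n}'
print(parse_last_json(sample))   # -> {'name': 'left-pad', 'version': '1.3.0'}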
def _getdependencies(self, pkg, data, version, d, ud, optional=False, fetchedlist=None):
if fetchedlist is None:
fetchedlist = []
pkgfullname = pkg
if version != '*' and not '/' in version:
pkgfullname += "@'%s'" % version
logger.debug(2, "Calling getdeps on %s" % pkg)
fetchcmd = "npm view %s --json --registry %s" % (pkgfullname, ud.registry)
output = runfetchcmd(fetchcmd, d, True)
pdata = self._parse_view(output)
if not pdata:
raise FetchError("The command '%s' returned no output" % fetchcmd)
if optional:
pkg_os = pdata.get('os', None)
if pkg_os:
if not isinstance(pkg_os, list):
pkg_os = [pkg_os]
blacklist = False
for item in pkg_os:
if item.startswith('!'):
blacklist = True
break
if (not blacklist and 'linux' not in pkg_os) or '!linux' in pkg_os:
logger.debug(2, "Skipping %s since it's incompatible with Linux" % pkg)
return
#logger.debug(2, "Output URL is %s - %s - %s" % (ud.basepath, ud.basename, ud.localfile))
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
if not outputurl in fetchedlist:
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
fetchedlist.append(outputurl)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
dependencies.update(optionalDependencies)
depsfound = {}
optdepsfound = {}
data[pkg]['deps'] = {}
for dep in dependencies:
if dep in optionalDependencies:
optdepsfound[dep] = dependencies[dep]
else:
depsfound[dep] = dependencies[dep]
for dep, version in optdepsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, optional=True, fetchedlist=fetchedlist)
for dep, version in depsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, fetchedlist=fetchedlist)
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest, toplevel=True):
logger.debug(2, "NPM shrinkwrap file is %s" % data)
if toplevel:
name = data.get('name', None)
if name and name != pkg:
for obj in data.get('dependencies', []):
if obj == pkg:
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest, False)
return
outputurl = "invalid"
if ('resolved' not in data) or (not data['resolved'].startswith('http')):
# will be the case for ${PN}
fetchcmd = "npm view %s@%s dist.tarball --registry %s" % (pkg, version, ud.registry)
logger.debug(2, "Found this matching URL: %s" % str(fetchcmd))
outputurl = runfetchcmd(fetchcmd, d, True)
else:
outputurl = data['resolved']
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
manifest[pkg] = {}
manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
manifest[pkg]['deps'] = {}
if pkg in lockdown:
sha1_expected = lockdown[pkg][version]
sha1_data = bb.utils.sha1_file("npm/%s/%s" % (ud.pkgname, manifest[pkg]['tgz']))
if sha1_expected != sha1_data:
msg = "\nFile: '%s' has %s checksum %s when %s was expected" % (manifest[pkg]['tgz'], 'sha1', sha1_data, sha1_expected)
raise ChecksumError('Checksum mismatch!%s' % msg)
else:
logger.debug(2, "No lockdown data for %s@%s" % (pkg, version))
if 'dependencies' in data:
for obj in data['dependencies']:
logger.debug(2, "Found dep is %s" % str(obj))
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'], False)
def download(self, ud, d):
"""Fetch url"""
jsondepobj = {}
shrinkobj = {}
lockdown = {}
if not os.listdir(ud.pkgdatadir) and os.path.exists(ud.fullmirror):
dest = d.getVar("DL_DIR")
bb.utils.mkdirhier(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
return
shwrf = d.getVar('NPM_SHRINKWRAP')
logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
if shwrf:
try:
with open(shwrf) as datafile:
shrinkobj = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
lckdf = d.getVar('NPM_LOCKDOWN')
logger.debug(2, "NPM lockdown file is %s" % lckdf)
if lckdf:
try:
with open(lckdf) as datafile:
lockdown = json.load(datafile)
except Exception as e:
raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
elif not ud.ignore_checksums:
logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
if ('name' not in shrinkobj):
self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
else:
self._getshrinkeddependencies(ud.pkgname, shrinkobj, ud.version, d, ud, lockdown, jsondepobj)
with open(ud.localpath, 'w') as outfile:
json.dump(jsondepobj, outfile)
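The manifest written to ud.localpath above is a nested dictionary of {package: {"tgz": ..., "deps": {...}}} entries built by _getdependencies()/_getshrinkeddependencies(). A sketch of the resulting JSON shape; the package names and versions are invented:

import json

# Hypothetical contents of a <name>-<version>.deps.json manifest as produced
# by the download() shown above (names and versions are examples only).
manifest = {
    "example-app": {
        "tgz": "example-app-1.0.0.tgz",
        "deps": {
            "left-pad": {"tgz": "left-pad-1.3.0.tgz", "deps": {}},
        },
    },
}
print(json.dumps(manifest, indent=2))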
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
dldir = d.getVar("DL_DIR")
logger.info("Creating tarball of npm data")
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d,
workdir=dldir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=dldir)


@@ -10,6 +10,7 @@ import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -19,7 +20,7 @@ class Osc(FetchMethod):
"""Class to fetch a module or modules from Opensuse build server
repositories."""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with osc.
"""
@@ -33,20 +34,20 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.getVar('OSCDIR'), ud.host)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
pv = data.getVar("PV", d, 0)
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
def _buildosccommand(self, ud, d, command):
"""
@@ -54,7 +55,7 @@ class Osc(FetchMethod):
command is "fetch", "update", "info"
"""
basecmd = d.expand('${FETCHCMD_osc}')
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('protocol', 'ocs')
@@ -76,32 +77,34 @@ class Osc(FetchMethod):
return osccmd
def download(self, ud, d):
def download(self, loc, ud, d):
"""
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
logger.info("Update "+ loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d,
cleanup=[ud.localpath], workdir=os.path.join(ud.pkgdir + ud.path))
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return False
@@ -111,7 +114,7 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
@@ -120,8 +123,8 @@ class Osc(FetchMethod):
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)


@@ -1,12 +1,14 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for perforce
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2016 Kodak Alaris, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,187 +25,174 @@ BitBake 'Fetch' implementation for perforce
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import subprocess
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Perforce(FetchMethod):
""" Class to fetch from perforce repositories """
def supports(self, ud, d):
""" Check to see if a given url can be fetched with perforce. """
def supports(self, url, ud, d):
return ud.type in ['p4']
def urldata_init(self, ud, d):
"""
Initialize perforce specific variables within url data. If P4CONFIG is
provided by the env, use it. If P4PORT is specified by the recipe, use
its values, which may override the settings in P4CONFIG.
"""
ud.basecmd = d.getVar('FETCHCMD_p4')
if not ud.basecmd:
ud.basecmd = "/usr/bin/env p4"
ud.dldir = d.getVar('P4DIR')
if not ud.dldir:
ud.dldir = '%s/%s' % (d.getVar('DL_DIR'), 'p4')
path = ud.url.split('://')[1]
path = path.split(';')[0]
delim = path.find('@');
def doparse(url, d):
parm = {}
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
(ud.user, ud.pswd) = path.split('@')[0].split(':')
ud.path = path.split('@')[1]
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
ud.path = path
(host, port) = data.getVar('P4PORT', d).split(':')
user = ""
pswd = ""
ud.usingp4config = False
p4port = d.getVar('P4PORT')
if path.find(";") != -1:
keys=[]
values=[]
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
keys.append(key)
values.append(value)
if p4port:
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')
if ud.path.find('/...') >= 0:
ud.pathisdir = True
else:
ud.pathisdir = False
parm = dict(zip(keys, values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')
ud.pkgdir = os.path.join(ud.dldir, cleanedhost, cleanedpath)
return host, path, user, pswd, parm
doparse = staticmethod(doparse)
ud.setup_revisions(d)
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))
def _buildp4command(self, ud, d, command, depot_filename=None):
"""
Build a p4 commandline. Valid commands are "changes", "print", and
"files". depot_filename is the full path to the file in the depot
including the trailing '#rev' value.
"""
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
if "cset" in parm:
return parm["cset"];
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
if ud.user:
p4opt += ' -u "%s"' % (ud.user)
p4date = data.getVar("P4DATE", d, True)
if "revision" in parm:
depot += "#%s" % (parm["revision"])
elif "label" in parm:
depot += "@%s" % (parm["label"])
elif p4date:
depot += "@%s" % (p4date)
if ud.pswd:
p4opt += ' -P "%s"' % (ud.pswd)
p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
if ud.host and not ud.usingp4config:
p4opt += ' -p %s' % (ud.host)
return cset.split(' ')[1]
getcset = staticmethod(getcset)
if hasattr(ud, 'revision') and ud.revision:
pathnrev = '%s@%s' % (ud.path, ud.revision)
else:
pathnrev = '%s' % (ud.path)
def urldata_init(self, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)
if depot_filename:
if ud.pathisdir: # Remove leading path to obtain filename
filename = depot_filename[len(ud.path)-1:]
else:
filename = depot_filename[depot_filename.rfind('/'):]
filename = filename[:filename.find('#')] # Remove trailing '#rev'
# If a label is specified, we use that as our filename
if command == 'changes':
p4cmd = '%s%s changes -m 1 //%s' % (ud.basecmd, p4opt, pathnrev)
elif command == 'print':
if depot_filename != None:
p4cmd = '%s%s print -o "p4/%s" "%s"' % (ud.basecmd, p4opt, filename, depot_filename)
else:
raise FetchError('No depot file name provided to p4 %s' % command, ud.url)
elif command == 'files':
p4cmd = '%s%s files //%s' % (ud.basecmd, p4opt, pathnrev)
else:
raise FetchError('Invalid p4 command %s' % command, ud.url)
if "label" in parm:
ud.localfile = "%s.tar.gz" % (parm["label"])
return
return p4cmd
base = path
which = path.find('/...')
if which != -1:
base = path[:which-1]
def _p4listfiles(self, ud, d):
base = self._strip_leading_slashes(base)
cset = Perforce.getcset(d, path, host, user, pswd, parm)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)
def download(self, loc, ud, d):
"""
Return a list of the file names which are present in the depot using the
'p4 files' command, including trailing '#rev' file revision indicator
Fetch urls
"""
p4cmd = self._buildp4command(ud, d, 'files')
bb.fetch2.check_network_access(d, p4cmd, ud.url)
p4fileslist = runfetchcmd(p4cmd, d, True)
p4fileslist = [f.rstrip() for f in p4fileslist.splitlines()]
if not p4fileslist:
raise FetchError('Unable to fetch listing of p4 files from %s@%s' % (ud.host, ud.path))
(host, depot, user, pswd, parm) = Perforce.doparse(loc, d)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot
module = parm.get('module', os.path.basename(path))
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
p4cmd = data.getVar('FETCHCOMMAND', localdata, True)
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
p4file = [f.rstrip() for f in p4file.splitlines()]
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
count = 0
filelist = []
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
for file in p4file:
list = file.split()
if list[2] == "delete":
continue
filelist.append(item[0])
return filelist
dest = list[0][len(path)+1:]
where = dest.find("#")
def download(self, ud, d):
""" Get the list of files, fetch each one """
filelist = self._p4listfiles(ud, d)
if not filelist:
raise FetchError('No files found in depot %s@%s' % (ud.host, ud.path))
subprocess.call("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]), shell=True)
count = count + 1
bb.utils.remove(ud.pkgdir, True)
bb.utils.mkdirhier(ud.pkgdir)
for afile in filelist:
p4fetchcmd = self._buildp4command(ud, d, 'print', afile)
bb.fetch2.check_network_access(d, p4fetchcmd, ud.url)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir)
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup=[ud.localpath], workdir=ud.pkgdir)
def clean(self, ud, d):
""" Cleanup p4 specific files and dirs"""
bb.utils.remove(ud.localpath)
bb.utils.remove(ud.pkgdir, True)
def supports_srcrev(self):
return True
def _revision_key(self, ud, d, name):
""" Return a unique key for the url """
return 'p4:%s' % ud.pkgdir
def _latest_revision(self, ud, d, name):
""" Return the latest upstream scm revision number """
p4cmd = self._buildp4command(ud, d, "changes")
bb.fetch2.check_network_access(d, p4cmd, ud.url)
tip = runfetchcmd(p4cmd, d, True)
if not tip:
raise FetchError('Could not determine the latest perforce changelist')
tipcset = tip.split(' ')[1]
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset
def sortable_revision(self, ud, d, name):
""" Return a sortable revision number """
return False, self._build_revision(ud, d)
def _build_revision(self, ud, d):
return ud.revision
if count == 0:
logger.error()
raise FetchError("Fetch: No files gathered from the P4 fetch", loc)
runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)
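The dora-side doparse() above splits a p4 URL of the form p4://user:pswd:host:port@depot/path;key=value. A standalone approximation with invented values; the P4PORT fallback and the changeset lookup via 'p4 changes' are omitted here:

def parse_p4(url):
    # Everything before '@' is user:pswd:host:port, everything after is the
    # depot path plus optional ;key=value parameters (see doparse() above).
    parm = {}
    path = url.split("://", 1)[1]
    if "@" in path:
        user, pswd, host, port = path.split("@")[0].split(":")
        path = path.split("@", 1)[1]
    else:
        user = pswd = host = port = ""
    for item in path.split(";")[1:]:
        if "=" in item:
            key, value = item.split("=", 1)
            parm[key] = value
    path = "//" + path.split(";")[0]
    host += ":%s" % port
    return host, path, user, pswd, parm

print(parse_p4("p4://jdoe:secret:perforce.example.com:1666@depot/project/...;label=rel_1"))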


@@ -25,12 +25,13 @@ BitBake "Fetch" repo (git) implementation
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
class Repo(FetchMethod):
"""Class to fetch a module or modules from repo (git) repositories"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with repo.
"""
@@ -50,17 +51,17 @@ class Repo(FetchMethod):
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = d.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch))
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
def download(self, ud, d):
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = d.getVar("REPODIR") or os.path.join(d.getVar("DL_DIR"), "repo")
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
@@ -68,29 +69,30 @@ class Repo(FetchMethod):
else:
username = ""
repodir = os.path.join(codir, "repo")
bb.utils.mkdirhier(repodir)
if not os.path.exists(os.path.join(repodir, ".repo")):
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d, workdir=repodir)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.repo' --exclude='.git'"
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d, workdir=codir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
def _build_revision(self, ud, d):
def _build_revision(self, url, ud, d):
return ud.manifest
def _want_sortable_revision(self, ud, d):
def _want_sortable_revision(self, url, ud, d):
return False
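The repo fetcher shells out to 'repo init' with the manifest, branch and manifest URL, followed by 'repo sync'. A sketch of those two command strings with placeholder values; user handling is left out because that branch falls outside the hunk shown:

def repo_commands(manifest, branch, proto, host, path):
    # Mirrors the command strings run above for the anonymous case.
    init = "repo init -m %s -b %s -u %s://%s%s" % (manifest, branch, proto, host, path)
    return init, "repo sync"

init_cmd, sync_cmd = repo_commands("default.xml", "master", "https",
                                   "git.example.com", "/platform/manifest")
print(init_cmd)
print(sync_cmd)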


@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for Amazon AWS S3.
Class for fetching files from Amazon S3 using the AWS Command Line Interface.
The aws tool must be correctly installed and configured prior to use.
"""
# Copyright (C) 2017, Andre McCurdy <armccurdy@gmail.com>
#
# Based in part on bb.fetch2.wget:
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import urllib.request, urllib.parse, urllib.error
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with s3.
"""
return ud.type in ['s3']
def recommends_checksum(self, urldata):
return True
def urldata_init(self, ud, d):
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.basecmd = d.getVar("FETCHCMD_s3") or "/usr/bin/env aws s3"
def download(self, ud, d):
"""
Fetch urls
Assumes localpath was called first
"""
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
runfetchcmd(cmd, d)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli
# tool with a little healthy suspicion).
if not os.path.exists(ud.localpath):
raise FetchError("The aws cp command returned success for s3://%s%s but %s doesn't exist?!" % (ud.host, ud.path, ud.localpath))
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The aws cp command for s3://%s%s resulted in a zero size file?! Deleting and failing since this isn't right." % (ud.host, ud.path))
return True
def checkstatus(self, fetch, ud, d):
"""
Check the status of a URL
"""
cmd = '%s ls s3://%s%s' % (ud.basecmd, ud.host, ud.path)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d)
# "aws s3 ls s3://mybucket/foo" will exit with success even if the file
# is not found, so check output of the command to confirm success.
if not output:
raise FetchError("The aws ls command for s3://%s%s gave empty output" % (ud.host, ud.path))
return True
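After 'aws s3 cp' returns, the S3 fetcher above still verifies that the file actually exists and is non-empty. The same two checks as a standalone helper; the function name and error type are chosen here for illustration:

import os

def check_download(localpath):
    # Mirrors the post-download sanity checks above: the file must exist and
    # must not be zero bytes.
    if not os.path.exists(localpath):
        raise RuntimeError("fetch reported success but %s does not exist" % localpath)
    if os.path.getsize(localpath) == 0:
        os.remove(localpath)
        raise RuntimeError("fetch of %s produced a zero size file" % localpath)
    return True

# Example: check_download("/tmp/downloads/foo.tar.gz")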


@@ -61,15 +61,18 @@ SRC_URI = "sftp://user@host.example.com/dir/path.file.txt"
import os
import bb
import urllib.request, urllib.parse, urllib.error
import urllib
import commands
from bb import data
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
class SFTP(FetchMethod):
"""Class to fetch urls via 'sftp'"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with sftp.
"""
@@ -90,19 +93,19 @@ class SFTP(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, ud, d):
def download(self, uri, ud, d):
"""Fetch urls"""
urlo = URI(ud.url)
basecmd = 'sftp -oBatchMode=yes'
urlo = URI(uri)
basecmd = 'sftp -oPasswordAuthentication=no'
port = ''
if urlo.port:
port = '-P %d' % urlo.port
urlo.port = None
dldir = d.getVar('DL_DIR')
dldir = data.getVar('DL_DIR', d, True)
lpath = os.path.join(dldir, ud.localfile)
user = ''
@@ -118,8 +121,9 @@ class SFTP(FetchMethod):
remote = '%s%s:%s' % (user, urlo.hostname, path)
cmd = '%s %s %s %s' % (basecmd, port, remote, lpath)
cmd = '%s %s %s %s' % (basecmd, port, commands.mkarg(remote),
commands.mkarg(lpath))
bb.fetch2.check_network_access(d, cmd, ud.url)
bb.fetch2.check_network_access(d, cmd, uri)
runfetchcmd(cmd, d)
return True
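Both sides of the SFTP diff assemble a command of the shape 'sftp [-P port] [user@]host:path localpath'. A sketch of the newer-style assembly; the 'user@' prefix and the sample values are assumptions, since the user handling sits outside the hunks shown:

def sftp_command(host, path, lpath, user="", port=None):
    basecmd = "sftp -oBatchMode=yes"
    portarg = "-P %d" % port if port else ""
    # Assumption: a non-empty user becomes a "user@" prefix on the remote spec.
    remote = "%s%s:%s" % ("%s@" % user if user else "", host, path)
    return "%s %s %s %s" % (basecmd, portarg, remote, lpath)

print(sftp_command("host.example.com", "/dir/path.file.txt",
                   "/tmp/downloads/path.file.txt", user="user", port=2222))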


@@ -43,6 +43,7 @@ IETF secsh internet draft:
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
@@ -71,8 +72,8 @@ __pattern__ = re.compile(r'''
class SSH(FetchMethod):
'''Class to fetch a module or modules via Secure Shell'''
def supports(self, urldata, d):
return __pattern__.match(urldata.url) != None
def supports(self, url, urldata, d):
return __pattern__.match(url) != None
def supports_checksum(self, urldata):
return False
@@ -86,13 +87,12 @@ class SSH(FetchMethod):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
urldata.localpath = os.path.join(d.getVar('DL_DIR', True), os.path.basename(path))
def download(self, urldata, d):
dldir = d.getVar('DL_DIR')
def download(self, url, urldata, d):
dldir = d.getVar('DL_DIR', True)
m = __pattern__.match(urldata.url)
m = __pattern__.match(url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
@@ -113,10 +113,12 @@ class SSH(FetchMethod):
fr = host
fr += ':%s' % path
import commands
cmd = 'scp -B -r %s %s %s/' % (
portarg,
fr,
dldir
commands.mkarg(fr),
commands.mkarg(dldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)
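The SSH fetcher ends up running 'scp -B -r <portarg> host:path dldir/'. A sketch of that command line; the '-P <port>' form of portarg is an assumption because its construction lies outside the hunk shown, and the sample values are placeholders:

def scp_command(host, path, dldir, port=None):
    # Assumption: portarg is "-P <port>" when a port was given, empty otherwise.
    portarg = "-P %s" % port if port else ""
    fr = "%s:%s" % (host, path)
    return "scp -B -r %s %s %s/" % (portarg, fr, dldir)

print(scp_command("host.example.com", "/srv/project.tar.gz", "/tmp/downloads", 2222))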


@@ -0,0 +1,97 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
This implementation is for svk. It is based on the svn implementation
"""
# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Svk(FetchMethod):
"""Class to fetch a module or modules from svk repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svk.
"""
return ud.type in ['svk']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
else:
ud.module = ud.parm["module"]
ud.revision = ud.parm.get('rev', "")
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
def need_update(self, url, ud, d):
if ud.date == "now":
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, loc, ud, d):
"""Fetch urls"""
svkroot = ud.host + ud.path
svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)
if ud.revision:
svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)
# create temp directory
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmpfile, errors = bb.process.run(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmpfile.strip()
if not tmpfile:
logger.error()
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
# check out sources there
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.debug(1, "Running %s", svkcmd)
runfetchcmd(svkcmd, d, cleanup = [tmpfile])
os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)


@@ -28,6 +28,7 @@ import sys
import logging
import bb
import re
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
@@ -36,7 +37,7 @@ from bb.fetch2 import logger
class Svn(FetchMethod):
"""Class to fetch a module or modules from svn repositories"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svn.
"""
@@ -49,26 +50,21 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.basecmd = d.getVar('FETCHCMD_svn')
ud.basecmd = d.getVar('FETCHCMD_svn', True)
ud.module = ud.parm["module"]
if not "path_spec" in ud.parm:
ud.path_spec = ud.module
else:
ud.path_spec = ud.parm["path_spec"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.expand('${SVNDIR}'), ud.host, relpath)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildsvncommand(self, ud, d, command):
"""
@@ -78,9 +74,9 @@ class Svn(FetchMethod):
proto = ud.parm.get('protocol', 'svn')
svn_ssh = None
if proto == "svn+ssh" and "ssh" in ud.parm:
svn_ssh = ud.parm["ssh"]
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
svn_rsh = ud.parm["rsh"]
svnroot = ud.host + ud.path
@@ -105,52 +101,54 @@ class Svn(FetchMethod):
suffix = "@%s" % (ud.revision)
if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
svncmd = "%s co %s %s://%s%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, transportuser, svnroot, ud.module, suffix, ud.path_spec)
svncmd = "%s co %s %s://%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (ud.basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
if svn_ssh:
svncmd = "SVN_SSH=\"%s\" %s" % (svn_ssh, svncmd)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
return svncmd
def download(self, ud, d):
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + ud.url)
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case its an older working format
try:
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
runfetchcmd(ud.basecmd + " upgrade", d)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d, workdir=ud.moddir)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d, workdir=ud.pkgdir)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude='.svn'"
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d,
cleanup=[ud.localpath], workdir=ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean SVN specific files and dirs """
@@ -162,17 +160,17 @@ class Svn(FetchMethod):
def supports_srcrev(self):
return True
def _revision_key(self, ud, d, name):
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "svn:" + ud.moddir
def _latest_revision(self, ud, d, name):
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"), ud.url)
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "log1"), d, True)
@@ -182,12 +180,12 @@ class Svn(FetchMethod):
return revision
def sortable_revision(self, ud, d, name):
def sortable_revision(self, url, ud, d, name):
"""
Return a sortable revision number which in our case is the revision number
"""
return False, self._build_revision(ud, d)
return False, self._build_revision(url, ud, d)
def _build_revision(self, ud, d):
def _build_revision(self, url, ud, d):
return ud.revision
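_buildsvncommand() above assembles 'svn co'/'svn update' lines and optionally prefixes an SVN_SSH environment setting. A sketch of the "fetch" branch on the new-style side; the option list and base command are passed in as placeholders because their construction is outside the hunks shown:

def svn_fetch_cmd(basecmd, proto, host, path, module, path_spec,
                  options=None, revision=None, transportuser="", ssh=None):
    options = options or []
    suffix = "@%s" % revision if revision else ""
    svnroot = host + path
    cmd = "%s co %s %s://%s%s/%s%s %s" % (basecmd, " ".join(options), proto,
                                          transportuser, svnroot, module,
                                          suffix, path_spec)
    # SVN_SSH is only injected for svn+ssh URLs carrying an ssh parameter.
    if ssh:
        cmd = 'SVN_SSH="%s" %s' % (ssh, cmd)
    return cmd

print(svn_fetch_cmd("/usr/bin/env svn", "svn", "svn.example.org", "/repos",
                    "trunk", "trunk", revision="1234"))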


@@ -25,46 +25,19 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import re
import tempfile
import subprocess
import os
import logging
import bb
import bb.progress
import urllib.request, urllib.parse, urllib.error
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from wget output.
Note: relies on --progress=dot (with -v or without -q/-nv) being
specified on the wget command line.
"""
def __init__(self, d):
super(WgetProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def writeline(self, line):
percs = re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line)
if percs:
progress = int(percs[-1][0])
rate = percs[-1][1] + '/s'
self.update(progress, rate)
return False
return True
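WgetProgressHandler.writeline() above scrapes percentages and transfer rates out of wget's --progress=dot output with a single regex. A small standalone check of that regex against an invented progress line:

import re

line = "  3050K .......... .......... .......... .......... ..........  52% 1.39M 2s"
percs = re.findall(r'(\d+)%\s+([\d.]+[A-Z])', line)
if percs:
    # The last match on the line carries the most recent percentage and rate.
    progress = int(percs[-1][0])
    rate = percs[-1][1] + '/s'
    print(progress, rate)   # -> 52 1.39M/s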
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
def supports(self, ud, d):
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
@@ -83,532 +56,42 @@ class Wget(FetchMethod):
else:
ud.basename = os.path.basename(ud.path)
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
def _runwget(self, ud, d, command, quiet):
progresshandler = WgetProgressHandler(d)
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler)
def download(self, ud, d):
def download(self, uri, ud, d, checkonly = False):
"""Fetch urls"""
fetchcmd = self.basecmd
basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
if 'downloadfilename' in ud.parm:
dldir = d.getVar("DL_DIR")
if not checkonly and 'downloadfilename' in ud.parm:
dldir = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
fetchcmd += " -O " + dldir + os.sep + ud.localfile
basecmd += " -O " + dldir + os.sep + ud.localfile
if ud.user and ud.pswd:
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
if checkonly:
fetchcmd = d.getVar("CHECKCOMMAND_wget", True) or d.expand(basecmd + " --spider '${URI}'")
elif os.path.exists(ud.localpath):
# file exists, but we didn't complete it.. trying again..
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
fetchcmd = d.getVar("RESUMECOMMAND_wget", True) or d.expand(basecmd + " -c -P ${DL_DIR} '${URI}'")
else:
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
fetchcmd = d.getVar("FETCHCOMMAND_wget", True) or d.expand(basecmd + " -P ${DL_DIR} '${URI}'")
self._runwget(ud, d, fetchcmd, False)
uri = uri.split(";")[0]
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
return True
def checkstatus(self, fetch, ud, d, try_again=True):
import urllib.request, urllib.error, urllib.parse, socket, http.client
from urllib.response import addinfourl
from bb.fetch2 import FetchConnectionCache
class HTTPConnectionCache(http.client.HTTPConnection):
if fetch.connection_cache:
def connect(self):
"""Connect to the host and port specified in __init__."""
sock = fetch.connection_cache.get_connection(self.host, self.port)
if sock:
self.sock = sock
else:
self.sock = socket.create_connection((self.host, self.port),
self.timeout, self.source_address)
fetch.connection_cache.add_connection(self.host, self.port, self.sock)
if self._tunnel_host:
self._tunnel()
class CacheHTTPHandler(urllib.request.HTTPHandler):
def http_open(self, req):
return self.do_open(HTTPConnectionCache, req)
def do_open(self, http_class, req):
"""Return an addinfourl object for the request, using http_class.
http_class must implement the HTTPConnection API from httplib.
The addinfourl return value is a file-like object. It also
has methods and attributes including:
- info(): return a mimetools.Message object for the headers
- geturl(): return the original request URL
- code: HTTP status code
"""
host = req.host
if not host:
raise urllib.error.URLError('no host given')
h = http_class(host, timeout=req.timeout) # will parse host:port
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update(dict((k, v) for k, v in list(req.headers.items())
if k not in headers))
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
# Don't close connection when connection_cache is enabled,
if fetch.connection_cache is None:
headers["Connection"] = "close"
else:
headers["Connection"] = "Keep-Alive" # Works for HTTP/1.0
headers = dict(
(name.title(), val) for name, val in list(headers.items()))
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
h.request(req.get_method(), req.selector, req.data, headers)
except socket.error as err: # XXX what error?
# Don't close connection when cache is enabled.
if fetch.connection_cache is None:
h.close()
raise urllib.error.URLError(err)
else:
try:
r = h.getresponse(buffering=True)
except TypeError: # buffering kw not supported
r = h.getresponse()
# Pick apart the HTTPResponse object to get the addinfourl
# object initialized properly.
# Wrap the HTTPResponse object in socket's file object adapter
# for Windows. That adapter calls recv(), so delegate recv()
# to read(). This weird wrapping allows the returned object to
# have readline() and readlines() methods.
# XXX It might be better to extract the read buffering code
# out of socket._fileobject() and into a base class.
r.recv = r.read
# no data, just have to read
r.read()
class fp_dummy(object):
def read(self):
return ""
def readline(self):
return ""
def close(self):
pass
resp = addinfourl(fp_dummy(), r.msg, req.get_full_url())
resp.code = r.status
resp.msg = r.reason
# Close connection when server request it.
if fetch.connection_cache is not None:
if 'Connection' in r.msg and r.msg['Connection'] == 'close':
fetch.connection_cache.remove_connection(h.host, h.port)
return resp
class HTTPMethodFallback(urllib.request.BaseHandler):
"""
Fallback to GET if HEAD is not allowed (405 HTTP error)
"""
def http_error_405(self, req, fp, code, msg, headers):
fp.read()
fp.close()
newheaders = dict((k,v) for k,v in list(req.headers.items())
if k.lower() not in ("content-length", "content-type"))
return self.parent.open(urllib.request.Request(req.get_full_url(),
headers=newheaders,
origin_req_host=req.origin_req_host,
unverifiable=True))
"""
Some servers (e.g. GitHub archives, hosted on Amazon S3) return 403
Forbidden when they actually mean 405 Method Not Allowed.
"""
http_error_403 = http_error_405
"""
Some servers (e.g. FusionForge) returns 406 Not Acceptable when they
actually mean 405 Method Not Allowed.
"""
http_error_406 = http_error_405
class FixedHTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
"""
urllib2.HTTPRedirectHandler resets the method to GET on redirect,
when we want to follow redirects using the original method.
"""
def redirect_request(self, req, fp, code, msg, headers, newurl):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = lambda: req.get_method()
return newreq
exported_proxies = export_proxies(d)
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
if export_proxies:
handlers.append(urllib.request.ProxyHandler())
handlers.append(CacheHTTPHandler())
# XXX: Since Python 2.7.9 ssl cert validation is enabled by default
# see PEP-0476, this causes verification errors on some https servers
# so disable by default.
import ssl
if hasattr(ssl, '_create_unverified_context'):
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
opener = urllib.request.build_opener(*handlers)
try:
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
if ud.user:
add_basic_auth(ud.user, r)
try:
import netrc, urllib.parse
n = netrc.netrc()
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
add_basic_auth("%s:%s" % (login, password), r)
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
pass
opener.open(r)
except urllib.error.URLError as e:
if try_again:
logger.debug(2, "checkstatus: trying again")
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug(2, "checkstatus() urlopen failed: %s" % e)
return False
return True
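# A minimal standalone sketch of the HEAD-request-plus-Basic-auth pattern used
# in checkstatus() above, using only the standard library. The URL and the
# "user:secret" credentials are placeholders, not values from the original code.
import base64
import urllib.request

def build_head_request(uri, login=None):
    # urllib defaults to GET, so override get_method for a HEAD probe.
    req = urllib.request.Request(uri)
    req.get_method = lambda: "HEAD"
    if login:
        # login is a "user:password" string, encoded for HTTP Basic auth.
        token = base64.b64encode(login.encode("utf-8")).decode("utf-8")
        req.add_header("Authorization", "Basic %s" % token)
    return req

req = build_head_request("https://example.com/archive.tar.gz", "user:secret")
print(req.get_method(), req.get_full_url())
print(req.get_header("Authorization"))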
def _parse_path(self, regex, s):
"""
Find and group name, version and archive type in the given string s
"""
m = regex.search(s)
if m:
pname = ''
pver = ''
ptype = ''
mdict = m.groupdict()
if 'name' in mdict.keys():
pname = mdict['name']
if 'pver' in mdict.keys():
pver = mdict['pver']
if 'type' in mdict.keys():
ptype = mdict['type']
bb.debug(3, "_parse_path: %s, %s, %s" % (pname, pver, ptype))
return (pname, pver, ptype)
return None
def _modelate_version(self, version):
if version[0] in ['.', '-']:
if version[1].isdigit():
version = version[1] + version[0] + version[2:len(version)]
else:
version = version[1:len(version)]
version = re.sub('-', '.', version)
version = re.sub('_', '.', version)
version = re.sub('(rc)+', '.1000.', version)
version = re.sub('(beta)+', '.100.', version)
version = re.sub('(alpha)+', '.10.', version)
if version[0] == 'v':
version = version[1:len(version)]
return version
def _vercmp(self, old, new):
"""
Check whether 'new' is newer than the 'old' version. We use the existing vercmp() for this
purpose. PE is cleared in the comparison as it is not relevant to the build, and PR is
cleared too for simplicity, as it is difficult to obtain from the various upstream formats
"""
(oldpn, oldpv, oldsuffix) = old
(newpn, newpv, newsuffix) = new
"""
Check for a new suffix type that we have never heard of before
"""
if (newsuffix):
m = self.suffix_regex_comp.search(newsuffix)
if not m:
bb.warn("%s has a possible unknown suffix: %s" % (newpn, newsuffix))
return False
"""
Not our package so ignore it
"""
if oldpn != newpn:
return False
oldpv = self._modelate_version(oldpv)
newpv = self._modelate_version(newpv)
return bb.utils.vercmp(("0", oldpv, ""), ("0", newpv, ""))
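# A rough, self-contained sketch of the normalise-then-compare idea behind
# _modelate_version()/_vercmp() above. This is not bb.utils.vercmp(); it simply
# maps common suffixes to numbers and compares the numeric fields as lists.
import re

def normalise(version):
    version = version.lstrip("v")
    version = re.sub(r"[-_]", ".", version)
    version = re.sub(r"rc", ".1000.", version)
    version = re.sub(r"beta", ".100.", version)
    version = re.sub(r"alpha", ".10.", version)
    return [int(x) for x in version.split(".") if x.isdigit()]

def newer(old, new):
    # Plain list comparison of the numeric fields.
    return normalise(new) > normalise(old)

print(newer("1.2.9", "1.2.10"))      # True: numeric, not lexical, comparison
print(newer("1.0-rc1", "1.0-rc2"))   # True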
def _fetch_index(self, uri, ud, d):
"""
Run fetch checkstatus to get directory information
"""
f = tempfile.NamedTemporaryFile()
agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
fetchcmd = self.basecmd
fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
try:
self._runwget(ud, d, fetchcmd, True)
fetchresult = f.read()
except bb.fetch2.BBFetchException:
fetchresult = ""
f.close()
return fetchresult
def _check_latest_version(self, url, package, package_regex, current_version, ud, d):
"""
Return the latest version of a package inside a given directory path
If an error occurs or no version is found, return ""
"""
valid = 0
version = ['', '', '']
bb.debug(3, "VersionURL: %s" % (url))
soup = BeautifulSoup(self._fetch_index(url, ud, d), "html.parser", parse_only=SoupStrainer("a"))
if not soup:
bb.debug(3, "*** %s NO SOUP" % (url))
return ""
for line in soup.find_all('a', href=True):
bb.debug(3, "line['href'] = '%s'" % (line['href']))
bb.debug(3, "line = '%s'" % (str(line)))
newver = self._parse_path(package_regex, line['href'])
if not newver:
newver = self._parse_path(package_regex, str(line))
if newver:
bb.debug(3, "Upstream version found: %s" % newver[1])
if valid == 0:
version = newver
valid = 1
elif self._vercmp(version, newver) < 0:
version = newver
pupver = re.sub('_', '.', version[1])
bb.debug(3, "*** %s -> UpstreamVersion = %s (CurrentVersion = %s)" %
(package, pupver or "N/A", current_version[1]))
if valid:
return pupver
return ""
def _check_latest_version_by_dir(self, dirver, package, package_regex,
current_version, ud, d):
"""
Scan every directory in order to get upstream version.
"""
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile("(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
else:
version_dir[1] = dirver
dirs_uri = bb.fetch.encodeurl([ud.type, ud.host,
ud.path.split(dirver)[0], ud.user, ud.pswd, {}])
bb.debug(3, "DirURL: %s, %s" % (dirs_uri, package))
soup = BeautifulSoup(self._fetch_index(dirs_uri, ud, d), "html.parser", parse_only=SoupStrainer("a"))
if not soup:
return version[1]
for line in soup.find_all('a', href=True):
s = dirver_regex.search(line['href'].strip("/"))
if s:
sver = s.group('ver')
# When the prefix is part of the version directory, ensure that only
# the version directory is used, so remove any previous directories
# if they exist.
#
# Example: pfx = '/dir1/dir2/v' and version = '2.5' the expected
# result is v2.5.
spfx = s.group('pfx').split('/')[-1]
version_dir_new = ['', sver, '']
if self._vercmp(version_dir, version_dir_new) <= 0:
dirver_new = spfx + sver
path = ud.path.replace(dirver, dirver_new, True) \
.split(package)[0]
uri = bb.fetch.encodeurl([ud.type, ud.host, path,
ud.user, ud.pswd, {}])
pupver = self._check_latest_version(uri,
package, package_regex, current_version, ud, d)
if pupver:
version[1] = pupver
version_dir = version_dir_new
return version[1]
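# The dirver_regex trick above (splitting a path component into a non-digit
# prefix and a dotted version) can be exercised on its own with the same
# pattern:
import re

dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")

for dirver in ("2.5", "v2.5", "release-1.14"):
    m = dirver_regex.search(dirver)
    print(dirver, "->", repr(m.group("pfx")), m.group("ver"))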
def _init_regexes(self, package, ud, d):
"""
Match as many patterns as possible such as:
gnome-common-2.20.0.tar.gz (most common format)
gtk+-2.90.1.tar.gz
xf86-input-synaptics-12.6.9.tar.gz
dri2proto-2.3.tar.gz
blktool_4.orig.tar.gz
libid3tag-0.15.1b.tar.gz
unzip552.tar.gz
icu4c-3_6-src.tgz
genext2fs_1.3.orig.tar.gz
gst-fluendo-mp3
"""
# match most patterns, which use "-" as the separator before the version digits
pn_prefix1 = "[a-zA-Z][a-zA-Z0-9]*([-_][a-zA-Z]\w+)*\+?[-_]"
# a loose pattern such as for unzip552.tar.gz
pn_prefix2 = "[a-zA-Z]+"
# a loose pattern such as for 80325-quicky-0.4.tar.gz
pn_prefix3 = "[0-9]+[-]?[a-zA-Z]+"
# Save the Package Name (pn) Regex for use later
pn_regex = "(%s|%s|%s)" % (pn_prefix1, pn_prefix2, pn_prefix3)
# match version
pver_regex = "(([A-Z]*\d+[a-zA-Z]*[\.\-_]*)+)"
# match arch
parch_regex = "-source|_all_"
# The src.rpm extension was added only for rpm packages. It can be removed if rpm
# packages will always be considered as having to be manually upgraded
psuffix_regex = "(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile("(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
% (pn_regex, pver_regex, parch_regex, psuffix_regex))
self.suffix_regex_comp = re.compile(psuffix_regex)
# compile regex, can be specific by package or generic regex
pn_regex = d.getVar('UPSTREAM_CHECK_REGEX')
if pn_regex:
package_custom_regex_comp = re.compile(pn_regex)
else:
version = self._parse_path(package_regex_comp, package)
if version:
package_custom_regex_comp = re.compile(
"(?P<name>%s)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s)" %
(re.escape(version[0]), pver_regex, parch_regex, psuffix_regex))
else:
package_custom_regex_comp = None
return package_custom_regex_comp
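# A reduced sketch of the filename matching that _init_regexes() builds above.
# The pattern below is deliberately simpler than the real one and only covers
# the common "name-version.suffix" shape:
import re

pkg_re = re.compile(
    r"(?P<name>[a-zA-Z][\w+-]*?)[-_]"
    r"(?P<pver>\d[\w.]*)\."
    r"(?P<type>tar\.gz|tgz|tar\.bz2|tar\.xz|zip)$")

for fn in ("gnome-common-2.20.0.tar.gz",
           "gtk+-2.90.1.tar.gz",
           "xf86-input-synaptics-12.6.9.tar.gz"):
    m = pkg_re.match(fn)
    print(m.group("name"), m.group("pver"), m.group("type"))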
def latest_versionstring(self, ud, d):
"""
Manipulate the URL and try to obtain the latest package version.
Sanity check to ensure the same name and type.
"""
package = ud.path.split("/")[-1]
current_version = ['', d.getVar('PV'), '']
"""possible to have no version in pkg name, such as spectrum-fw"""
if not re.search("\d+", package):
current_version[1] = re.sub('_', '.', current_version[1])
current_version[1] = re.sub('-', '.', current_version[1])
return (current_version[1], '')
package_regex = self._init_regexes(package, ud, d)
if package_regex is None:
bb.warn("latest_versionstring: package %s don't match pattern" % (package))
return ('', '')
bb.debug(3, "latest_versionstring, regex: %s" % (package_regex.pattern))
uri = ""
regex_uri = d.getVar("UPSTREAM_CHECK_URI")
if not regex_uri:
path = ud.path.split(package)[0]
# search for version matches on folders inside the path, like:
# "5.7" in http://download.gnome.org/sources/${PN}/5.7/${PN}-${PV}.tar.gz
dirver_regex = re.compile("(?P<dirver>[^/]*(\d+\.)*\d+([-_]r\d+)*)/")
m = dirver_regex.search(path)
if m:
pn = d.getVar('PN')
dirver = m.group('dirver')
dirver_pn_regex = re.compile("%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):
return (self._check_latest_version_by_dir(dirver,
package, package_regex, current_version, ud, d), '')
uri = bb.fetch.encodeurl([ud.type, ud.host, path, ud.user, ud.pswd, {}])
else:
uri = regex_uri
return (self._check_latest_version(uri, package, package_regex,
current_version, ud, d), '')
def checkstatus(self, uri, ud, d):
return self.download(uri, ud, d, True)


@@ -1,552 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005 Holger Hans Peter Freyther
# Copyright (C) 2005 ROAD GmbH
# Copyright (C) 2006 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import optparse
import warnings
import fcntl
import bb
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb import cookerdata
logger = logging.getLogger("BitBake")
class BBMainException(Exception):
pass
def present_options(optionlist):
if len(optionlist) > 1:
return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
else:
return optionlist[0]
class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
elif option.dest == 'servertype':
valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
return optparse.IndentedHelpFormatter.format_option(self, option)
def list_extension_modules(pkg, checkattr):
"""
Lists extension modules in a specific Python package
(e.g. UIs, servers). NOTE: Calling this function will import all of the
submodules of the specified module in order to check for the specified
attribute; this can have unusual side-effects. As a result, this should
only be called when displaying help text or error messages.
Parameters:
pkg: previously imported Python package to list
checkattr: attribute to look for in module to determine if it's valid
as the type of extension you are looking for
"""
import pkgutil
pkgdir = os.path.dirname(pkg.__file__)
modules = []
for _, modulename, _ in pkgutil.iter_modules([pkgdir]):
if os.path.isdir(os.path.join(pkgdir, modulename)):
# ignore directories
continue
try:
module = __import__(pkg.__name__, fromlist=[modulename])
except:
# If we can't import it, it's not valid
continue
module_if = getattr(module, modulename)
if getattr(module_if, 'hidden_extension', False):
continue
if not checkattr or hasattr(module_if, checkattr):
modules.append(modulename)
return modules
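# The pkgutil-based discovery used by list_extension_modules() above can be
# tried against any package with submodules; a tiny sketch using the stdlib
# logging package:
import os
import pkgutil
import logging

pkgdir = os.path.dirname(logging.__file__)
print([name for _, name, _ in pkgutil.iter_modules([pkgdir])])
# typically ['config', 'handlers']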
def import_extension_module(pkg, modulename, checkattr):
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__(pkg.__name__, fromlist=[modulename])
return getattr(module, modulename)
except AttributeError:
modules = present_options(list_extension_modules(pkg, checkattr))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. '
'Valid extension modules: %s' % (modulename, pkg.__name__, modules))
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warning(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-a", "--tryaltconfigs", action="store_true",
dest="tryaltconfigs", default=False,
help="Continue with builds by trying to use alternative providers "
"where possible.")
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Enable tracing of shell tasks (with 'set -x'). "
"Also print bb.note(...) messages to stdout (in "
"addition to writing them to ${T}/log.do_<task>).")
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this "
"more than once. -D sets the debug level to 1, "
"where only bb.debug(1, ...) messages are printed "
"to stdout; -DD sets the debug level to 2, where "
"both bb.debug(1, ...) and bb.debug(2, ...) "
"messages are printed; etc. Without -D, no debug "
"messages are printed. Note that -D only affects "
"output to stdout. All debug messages are written "
"to ${T}/log.do_taskname, regardless of the debug "
"level.")
parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
help="Output less log message data to the terminal. You can specify this more than once.")
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-t", "--servertype", action="store", dest="servertype",
default=["process", "xmlrpc"]["BBSERVER" in os.environ],
help="Choose which server type to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("", "--foreground", action="store_true",
help="Run bitbake server in foreground.")
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake server to bind to.")
parser.add_option("-T", "--idle-timeout", type=int,
default=int(os.environ.get("BBTIMEOUT", "0")),
help="Set timeout to unload bitbake server due to inactivity")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate the remote server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("", "--runall", action="store", dest="runall",
help="Run the specified task for all build targets and their dependencies.")
options, targets = parser.parse_args(argv)
if options.quiet and options.verbose:
parser.error("options --quiet and --verbose are mutually exclusive")
if options.quiet and options.debug:
parser.error("options --quiet and --debug are mutually exclusive")
# use configuration files from environment variables
if "BBPRECONF" in os.environ:
options.prefile.append(os.environ["BBPRECONF"])
if "BBPOSTCONF" in os.environ:
options.postfile.append(os.environ["BBPOSTCONF"])
# fill in proper log name if not supplied
if options.writeeventlog is not None and len(options.writeeventlog) == 0:
from datetime import datetime
eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
options.writeeventlog = eventlog
# if BBSERVER says to autodetect, let's do that
if options.remote_server:
port = -1
if options.remote_server != 'autostart':
host, port = options.remote_server.split(":", 2)
port = int(port)
# use automatic port if port set to -1, means read it from
# the bitbake.lock file; this is a bit tricky, but we always expect
# to be in the base of the build directory if we need to have a
# chance to start the server later, anyway
if port == -1:
lock_location = "./bitbake.lock"
# we try to read the address at all times; if the server is not started,
# we'll try to start it after the first connect fails, below
try:
lf = open(lock_location, 'r')
remotedef = lf.readline()
[host, port] = remotedef.split(":")
port = int(port)
lf.close()
options.remote_server = remotedef
except Exception as e:
if options.remote_server != 'autostart':
raise BBMainException("Failed to read bitbake.lock (%s), invalid port" % str(e))
return options, targets[1:]
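# The remote-server autodetection above just reads a "host:port" value (from
# BBSERVER or bitbake.lock); a minimal sketch of that parsing, with a made-up
# address:
def parse_remote_server(value):
    # "autostart" means the address is resolved later from bitbake.lock.
    if value == "autostart":
        return None, -1
    host, port = value.split(":")
    return host, int(port)

print(parse_remote_server("localhost:45323"))
print(parse_remote_server("autostart"))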
def start_server(servermodule, configParams, configuration, features):
server = servermodule.BitBakeServer()
single_use = not configParams.server_only and os.getenv('BBSERVER') != 'autostart'
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)), single_use=single_use,
idle_timeout=configParams.idle_timeout)
configuration.interface = [server.serverImpl.host, server.serverImpl.port]
else:
server.initServer(single_use=single_use)
configuration.interface = []
try:
configuration.setServerRegIdleCallback(server.getServerIdleCB())
cooker = bb.cooker.BBCooker(configuration, features)
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
while hasattr(server, "event_queue"):
import queue
try:
event = server.event_queue.get(block=False)
except (queue.Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise
if not configParams.foreground:
server.detach()
cooker.shutdown()
cooker.lock.close()
return server
def bitbake_main(configParams, configuration):
# Python multiprocessing requires /dev/shm on Linux
if sys.platform.startswith('linux') and not os.access('/dev/shm', os.W_OK | os.X_OK):
raise BBMainException("FATAL: /dev/shm does not exist or is not writable")
# Unbuffer stdout to avoid log truncation in the event
# of a disorderly exit, as well as to provide timely
# updates to log files for use with tail
try:
if sys.stdout.name == '<stdout>':
# Set the O_SYNC flag on stdout (effectively unbuffered writes)
fl = fcntl.fcntl(sys.stdout.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC
fcntl.fcntl(sys.stdout.fileno(), fcntl.F_SETFL, fl)
except:
pass
configuration.setConfigParameters(configParams)
if configParams.server_only:
if configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--server-only' is defined, we must set the "
"servertype as 'xmlrpc'.\n")
if not configParams.bind:
raise BBMainException("FATAL: The '--server-only' option requires a name/address "
"to bind to with the -B option.\n")
else:
try:
#Checking that the port is a number
int(configParams.bind.split(":")[1])
except (ValueError,IndexError):
raise BBMainException(
"FATAL: Malformed host:port bind parameter")
if configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
else "the '--remote-server' option"))
elif configParams.foreground:
raise BBMainException("FATAL: The '--foreground' option can only be used "
"with --server-only.\n")
if configParams.bind and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.remote_server and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '--remote-server' is defined, we must "
"set the servertype as 'xmlrpc'.\n")
if configParams.observe_only and (not configParams.remote_server or configParams.bind):
raise BBMainException("FATAL: '--observe-only' can only be used by UI clients "
"connecting to a server.\n")
if configParams.kill_server and not configParams.remote_server:
raise BBMainException("FATAL: '--kill-server' can only be used to "
"terminate a remote server")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
server, server_connection, ui_module = setup_bitbake(configParams, configuration)
if server_connection is None and configParams.kill_server:
return 0
if not configParams.server_only:
if configParams.status_only:
server_connection.terminate()
return 0
try:
return ui_module.main(server_connection.connection, server_connection.events,
configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host,
server.serverImpl.port))
if configParams.foreground:
server.serverImpl.serve_forever()
return 0
return 1
def setup_bitbake(configParams, configuration, extrafeatures=None, setup_logging=True):
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
if setup_logging and not configParams.status_only:
# In status only mode there are no logs and no UI
logger.addHandler(handler)
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
if configParams.server_only:
featureset = []
ui_module = None
else:
ui_module = import_extension_module(bb.ui, configParams.ui, 'main')
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
if configParams.server_only:
for param in ('prefile', 'postfile'):
value = getattr(configParams, param)
if value:
setattr(configuration, "%s_server" % param, value)
param = "%s_server" % param
if extrafeatures:
for feature in extrafeatures:
if not feature in featureset:
featureset.append(feature)
servermodule = import_extension_module(bb.server,
configParams.servertype,
'BitBakeServer')
if configParams.remote_server:
if os.getenv('BBSERVER') == 'autostart':
if configParams.remote_server == 'autostart' or \
not servermodule.check_connection(configParams.remote_server, timeout=2):
configParams.bind = 'localhost:0'
srv = start_server(servermodule, configParams, configuration, featureset)
configParams.remote_server = '%s:%d' % tuple(configuration.interface)
bb.event.ui_queue = []
# we start a stub server that is actually a XMLRPClient that connects to a real server
from bb.server.xmlrpc import BitBakeXMLRPCClient
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only,
configParams.xmlrpctoken)
server.saveConnectionDetails(configParams.remote_server)
else:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration, featureset)
bb.event.ui_queue = []
if configParams.server_only:
server_connection = None
else:
try:
server_connection = server.establishConnection(featureset)
except Exception as e:
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
return None, None, None
server_connection.setupEventQueue()
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
logger.removeHandler(handler)
return server, server_connection, ui_module


@@ -19,22 +19,11 @@
from bb.utils import better_compile, better_exec
def insert_method(modulename, code, fn, lineno):
def insert_method(modulename, code, fn):
"""
Add the code of a module. The methods
will simply be added; no checking will be done
"""
comp = better_compile(code, modulename, fn, lineno=lineno)
comp = better_compile(code, modulename, fn )
better_exec(comp, None, code, fn)
compilecache = {}
def compile_cache(code):
h = hash(code)
if h in compilecache:
return compilecache[h]
return None
def compile_cache_add(code, compileobj):
h = hash(code)
compilecache[h] = compileobj
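# insert_method() above boils down to compile() plus exec() into a namespace;
# a standalone sketch without the bb.utils better_compile()/better_exec()
# wrappers (the generated function body here is made up):
code = "def greet(d):\n    return 'hello %s' % d\n"
namespace = {}
comp = compile(code, "<generated>", "exec")
exec(comp, namespace)
print(namespace["greet"]("world"))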


@@ -52,10 +52,10 @@ def getMountedDev(path):
parentDev = os.stat(path).st_dev
currentDev = parentDev
# When the current directory's device is different from the
# parent's, then the current directory is a mount point
# parrent's, then the current directory is a mount point
while parentDev == currentDev:
mountPoint = path
# Use dirname to get the parent's directory
# Use dirname to get the parrent's directory
path = os.path.dirname(path)
# Reach the "/"
if path == mountPoint:
@@ -77,7 +77,7 @@ def getDiskData(BBDirs, configuration):
"""Prepare disk data for disk space monitor"""
# Save the device IDs, need the ID to be unique (the dictionary's key is
# unique), so that when more than one directory is located on the same
# unique), so that when more than one directories are located in the same
# device, we just monitor it once
devDict = {}
for pathSpaceInode in BBDirs.split():
@@ -129,7 +129,7 @@ def getDiskData(BBDirs, configuration):
bb.utils.mkdirhier(path)
dev = getMountedDev(path)
# Use path/action as the key
devDict[(path, action)] = [dev, minSpace, minInode]
devDict[os.path.join(path, action)] = [dev, minSpace, minInode]
return devDict
@@ -141,7 +141,7 @@ def getInterval(configuration):
spaceDefault = 50 * 1024 * 1024
inodeDefault = 5 * 1024
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL")
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL", True)
if not interval:
return spaceDefault, inodeDefault
else:
@@ -179,7 +179,7 @@ class diskMonitor:
self.enableMonitor = False
self.configuration = configuration
BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
if BBDirs:
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
@@ -187,11 +187,11 @@ class diskMonitor:
if self.spaceInterval and self.inodeInterval:
self.enableMonitor = True
# These are for saving the previous disk free space and inode, we
# use them to avoid printing too many warning messages
# use them to avoid print too many warning messages
self.preFreeS = {}
self.preFreeI = {}
# This is for STOPTASKS and ABORT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
# This is for STOPTASKS and ABORT, to avoid print the message repeatly
# during waiting the tasks to finish
self.checked = {}
for k in self.devDict:
self.preFreeS[k] = 0
@@ -205,25 +205,22 @@ class diskMonitor:
""" Take action for the monitor """
if self.enableMonitor:
diskUsage = {}
for k, attributes in self.devDict.items():
path, action = k
dev, minSpace, minInode = attributes
for k in self.devDict:
path = os.path.dirname(k)
action = os.path.basename(k)
dev = self.devDict[k][0]
minSpace = self.devDict[k][1]
minInode = self.devDict[k][2]
st = os.statvfs(path)
# The available free space, integer number
# The free space, float point number
freeSpace = st.f_bavail * st.f_frsize
# Send all relevant information in the event.
freeSpaceRoot = st.f_bfree * st.f_frsize
totalSpace = st.f_blocks * st.f_frsize
diskUsage[dev] = bb.event.DiskUsageSample(freeSpace, freeSpaceRoot, totalSpace)
if minSpace and freeSpace < minSpace:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
logger.warning("The free space of %s (%s) is running low (%.3fGB left)" % \
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[k] = freeSpace
@@ -238,18 +235,20 @@ class diskMonitor:
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
# The free inodes, integer number
# The free inodes, float point number
freeInode = st.f_favail
if minInode and freeInode < minInode:
# Some filesystems use dynamic inodes so can't run out
# (e.g. btrfs). This is reported by the inode count being 0.
# Some fs formats' (e.g., btrfs) statvfs.f_files (inodes) is
# zero, this is a feature of the fs, we disable the inode
# checking for such a fs.
if st.f_files == 0:
logger.info("Inode check for %s is unavaliable, will remove it from disk monitor" % path)
self.devDict[k][2] = None
continue
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
logger.warning("The free inode of %s (%s) is running low (%.3fK left)" % \
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
(path, dev, freeInode / 1024.0))
self.preFreeI[k] = freeInode
@@ -263,6 +262,4 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
bb.event.fire(bb.event.MonitorDiskEvent(diskUsage), self.configuration)
return
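# The statvfs arithmetic used by the disk monitor above can be checked
# interactively on any POSIX system; the path below is a placeholder:
import os

st = os.statvfs("/tmp")
free_space = st.f_bavail * st.f_frsize    # bytes available to unprivileged users
free_space_root = st.f_bfree * st.f_frsize
total_space = st.f_blocks * st.f_frsize
free_inodes = st.f_favail
# st.f_files == 0 indicates dynamic inodes (e.g. btrfs), where the inode check is skipped.

print("%.3f GB free, %d inodes free" % (free_space / 1024 / 1024 / 1024.0, free_inodes))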


@@ -57,7 +57,7 @@ class BBLogFormatter(logging.Formatter):
}
color_enabled = False
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = list(range(29,38))
BASECOLOR, BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(29,38)
COLORS = {
DEBUG3 : CYAN,
@@ -90,9 +90,8 @@ class BBLogFormatter(logging.Formatter):
if self.color_enabled:
record = self.colorize(record)
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
@@ -127,21 +126,7 @@ class BBLogFilter(object):
return True
return False
class BBLogFilterStdErr(BBLogFilter):
def filter(self, record):
if not BBLogFilter.filter(self, record):
return False
if record.levelno >= logging.ERROR:
return True
return False
class BBLogFilterStdOut(BBLogFilter):
def filter(self, record):
if not BBLogFilter.filter(self, record):
return False
if record.levelno < logging.ERROR:
return True
return False
# Message control functions
#
@@ -151,7 +136,7 @@ loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = []
def init_msgconfig(verbose, debug, debug_domains=None):
def init_msgconfig(verbose, debug, debug_domains = []):
"""
Set default verbosity and debug levels and configure the logger
"""
@@ -159,10 +144,7 @@ def init_msgconfig(verbose, debug, debug_domains=None):
bb.msg.loggerDefaultVerbose = verbose
if verbose:
bb.msg.loggerVerboseLogs = True
if debug_domains:
bb.msg.loggerDefaultDomains = debug_domains
else:
bb.msg.loggerDefaultDomains = []
bb.msg.loggerDefaultDomains = debug_domains
def constructLogOptions():
debug = loggerDefaultDebugLevel
@@ -182,13 +164,10 @@ def constructLogOptions():
debug_domains["BitBake.%s" % domainarg] = logging.DEBUG - dlevel + 1
return level, debug_domains
def addDefaultlogFilter(handler, cls = BBLogFilter, forcelevel=None):
def addDefaultlogFilter(handler):
level, debug_domains = constructLogOptions()
if forcelevel is not None:
level = forcelevel
cls(handler, level, debug_domains)
BBLogFilter(handler, level, debug_domains)
#
# Message handling functions
@@ -201,25 +180,3 @@ def fatal(msgdomain, msg):
logger = logging.getLogger("BitBake")
logger.critical(msg)
sys.exit(1)
def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers=False, color='auto'):
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:
logger.addHandler(console)
else:
logger.handlers = [console]
logger.setLevel(level)
return logger
def has_console_handler(logger):
for handler in logger.handlers:
if isinstance(handler, logging.StreamHandler):
if handler.stream in [sys.stderr, sys.stdout]:
return True
return False
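# A stripped-down sketch of the logger_create() pattern above, using a plain
# logging.Formatter in place of BBLogFormatter:
import logging
import sys

def make_logger(name, output=sys.stderr, level=logging.INFO):
    logger = logging.getLogger(name)
    console = logging.StreamHandler(output)
    console.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    logger.handlers = [console]
    logger.setLevel(level)
    return logger

log = make_logger("example")
log.info("standalone logger is working")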


@@ -202,8 +202,8 @@ if __name__ == '__main__':
print(rec5._replace(k=222)._my_custom_method()) # MyMixIn's
print(rec5._replace(k=222).count(2)) # MyMixIn's
# Note that behavior: the standard namedtuple methods cannot be
# overridden by a foreign mix-in -- even if the mix-in is declared
# None that behavior: the standard namedtuple methods cannot be
# overriden by a foreign mix-in -- even if the mix-in is declared
# as the leftmost base class (but, obviously, you can override them
# in the defined class or its subclasses):


@@ -26,10 +26,9 @@ File parsers for the BitBake build tools.
handlers = []
import errno
import logging
import os
import stat
import logging
import bb
import bb.utils
import bb.siggen
@@ -50,11 +49,8 @@ class ParseError(Exception):
else:
return "ParseError in %s: %s" % (self.filename, self.msg)
class SkipRecipe(Exception):
"""Exception raised to skip this recipe"""
class SkipPackage(SkipRecipe):
"""Exception raised to skip this recipe (use SkipRecipe in new code)"""
class SkipPackage(Exception):
"""Exception raised to skip this package"""
__mtime_cache = {}
def cached_mtime(f):
@@ -71,23 +67,13 @@ def cached_mtime_noerror(f):
return __mtime_cache[f]
def update_mtime(f):
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
return 0
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]
def update_cache(f):
if f in __mtime_cache:
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
s = (f, cached_mtime_noerror(f))
if s not in deps:
deps.append(s)
@@ -95,7 +81,7 @@ def mark_dependency(d, f):
def check_dependency(d, f):
s = (f, cached_mtime_noerror(f))
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
return s in deps
def supports(fn, data):
@@ -123,24 +109,25 @@ def init_parser(d):
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = d.getVar("BBPATH")
bbpath = d.getVar("BBPATH", True)
newfn, attempts = bb.utils.which(bbpath, fn, history=True)
for af in attempts:
mark_dependency(d, af)
if not newfn:
raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn
mark_dependency(d, fn)
if not os.path.isfile(fn):
raise IOError(errno.ENOENT, "file %s not found" % fn)
raise IOError("file %s not found" % fn)
logger.debug(2, "LOAD %s", fn)
return fn
# Used by OpenEmbedded metadata
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
if not mypkg or not mypkg.endswith((".bb", ".bbappend")):
if not mypkg:
return (None, None, None)
if mypkg in __pkgsplit_cache__:
return __pkgsplit_cache__[mypkg]
@@ -161,8 +148,8 @@ def vars_from_file(mypkg, d):
def get_file_depends(d):
'''Return the dependent files'''
dep_files = []
depends = d.getVar('__base_depends', False) or []
depends = depends + (d.getVar('__depends', False) or [])
depends = d.getVar('__base_depends', True) or []
depends = depends + (d.getVar('__depends', True) or [])
for (fn, _) in depends:
dep_files.append(os.path.abspath(fn))
return " ".join(dep_files)


@@ -21,7 +21,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
@@ -30,6 +31,8 @@ import itertools
from bb import methodpool
from bb.parse import logger
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
def eval(self, data):
for statement in self:
@@ -67,33 +70,6 @@ class ExportNode(AstNode):
def eval(self, data):
data.setVarFlag(self.var, "export", 1, op = 'exported')
class UnsetNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
self.var = var
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVar(self.var,**loginfo)
class UnsetFlagNode(AstNode):
def __init__(self, filename, lineno, var, flag):
AstNode.__init__(self, filename, lineno)
self.var = var
self.flag = flag
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVarFlag(self.var, self.flag, **loginfo)
class DataNode(AstNode):
"""
Various data related updates. For the sake of sanity
@@ -107,9 +83,9 @@ class DataNode(AstNode):
def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], expand=False, noweakdefault=True)
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
else:
return data.getVar(key, False, noweakdefault=True, parsing=True)
return data.getVar(key, noweakdefault=True)
def eval(self, data):
groupd = self.groupd
@@ -130,6 +106,7 @@ class DataNode(AstNode):
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
op = "immediate"
val = e.expand(groupd["value"], key + "[:=]")
elif "append" in groupd and groupd["append"] != None:
@@ -151,7 +128,7 @@ class DataNode(AstNode):
if 'flag' in groupd and groupd['flag'] != None:
flag = groupd['flag']
elif groupd["lazyques"]:
flag = "_defaultval"
flag = "defaultval"
loginfo['op'] = op
loginfo['detail'] = groupd["value"]
@@ -159,42 +136,27 @@ class DataNode(AstNode):
if flag:
data.setVarFlag(key, flag, val, **loginfo)
else:
data.setVar(key, val, parsing=True, **loginfo)
data.setVar(key, val, **loginfo)
class MethodNode(AstNode):
tr_tbl = str.maketrans('/.+-@%&', '_______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
def __init__(self, filename, lineno, func_name, body):
AstNode.__init__(self, filename, lineno)
self.func_name = func_name
self.body = body
self.python = python
self.fakeroot = fakeroot
def eval(self, data):
text = '\n'.join(self.body)
funcname = self.func_name
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(MethodNode.tr_tbl)))
self.python = True
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-@', '_____'))))
text = "def %s(d):\n" % (funcname) + text
bb.methodpool.insert_method(funcname, text, self.filename, self.lineno - len(self.body))
anonfuncs = data.getVar('__BBANONFUNCS', False) or []
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
if data.getVar(funcname, False):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.delVarFlag(funcname, 'python')
data.delVarFlag(funcname, 'fakeroot')
if self.python:
data.setVarFlag(funcname, "python", "1")
if self.fakeroot:
data.setVarFlag(funcname, "fakeroot", "1")
data.setVarFlag(funcname, "func", 1)
data.setVar(funcname, text, parsing=True)
data.setVarFlag(funcname, 'filename', self.filename)
data.setVarFlag(funcname, 'lineno', str(self.lineno - len(self.body)))
data.setVar(funcname, text)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text)
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
@@ -208,12 +170,31 @@ class PythonMethodNode(AstNode):
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
bb.methodpool.insert_method(self.modulename, text, self.filename, self.lineno - len(self.body) - 1)
bb.methodpool.insert_method(self.modulename, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text, parsing=True)
data.setVarFlag(self.function, 'filename', self.filename)
data.setVarFlag(self.function, 'lineno', str(self.lineno - len(self.body) - 1))
data.setVar(self.function, text)
class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
AstNode.__init__(self, filename, lineno)
self.key = key
self.m = m
def eval(self, data):
if data.getVar(self.key):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(self.key, 'python', None)
data.setVarFlag(self.key, 'fakeroot', None)
if self.m.group("py") is not None:
data.setVarFlag(self.key, "python", "1")
else:
data.delVarFlag(self.key, "python")
if self.m.group("fr") is not None:
data.setVarFlag(self.key, "fakeroot", "1")
else:
data.delVarFlag(self.key, "fakeroot")
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classname):
@@ -226,28 +207,24 @@ class ExportFuncsNode(AstNode):
for func in self.n:
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
if data.getVar(func) and not data.getVarFlag(func, 'export_func'):
continue
if data.getVar(func, False):
if data.getVar(func):
data.setVarFlag(func, 'python', None)
data.setVarFlag(func, 'func', None)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
if data.getVarFlag(calledfunc, flag):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(func, flag):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag))
if data.getVarFlag(calledfunc, "python", False):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
if data.getVarFlag(calledfunc, "python"):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n")
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n")
data.setVarFlag(func, 'export_func', '1')
class AddTaskNode(AstNode):
@@ -258,15 +235,29 @@ class AddTaskNode(AstNode):
self.after = after
def eval(self, data):
bb.build.addtask(self.func, self.before, self.after, data)
var = self.func
if self.func[:3] != "do_":
var = "do_" + self.func
class DelTaskNode(AstNode):
def __init__(self, filename, lineno, func):
AstNode.__init__(self, filename, lineno)
self.func = func
data.setVarFlag(var, "task", 1)
bbtasks = data.getVar('__BBTASKS') or []
if not var in bbtasks:
bbtasks.append(var)
data.setVar('__BBTASKS', bbtasks)
def eval(self, data):
bb.build.deltask(self.func, data)
existing = data.getVarFlag(var, "deps") or []
if self.after is not None:
# set up deps for function
for entry in self.after.split():
if entry not in existing:
existing.append(entry)
data.setVarFlag(var, "deps", existing)
if self.before is not None:
# set up things that depend on this func
for entry in self.before.split():
existing = data.getVarFlag(entry, "deps") or []
if var not in existing:
data.setVarFlag(entry, "deps", [var] + existing)
class BBHandlerNode(AstNode):
def __init__(self, filename, lineno, fns):
@@ -274,7 +265,7 @@ class BBHandlerNode(AstNode):
self.hs = fns.split()
def eval(self, data):
bbhands = data.getVar('__BBHANDLERS', False) or []
bbhands = data.getVar('__BBHANDLERS') or []
for h in self.hs:
bbhands.append(h)
data.setVarFlag(h, "handler", 1)
@@ -294,21 +285,18 @@ def handleInclude(statements, filename, lineno, m, force):
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleUnset(statements, filename, lineno, m):
statements.append(UnsetNode(filename, lineno, m.group(1)))
def handleUnsetFlag(statements, filename, lineno, m):
statements.append(UnsetFlagNode(filename, lineno, m.group(1), m.group(2)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
def handleMethod(statements, filename, lineno, func_name, body, python, fakeroot):
statements.append(MethodNode(filename, lineno, func_name, body, python, fakeroot))
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, modulename, body):
statements.append(PythonMethodNode(filename, lineno, funcname, modulename, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, filename, lineno, m, classname):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classname))
@@ -321,13 +309,6 @@ def handleAddTask(statements, filename, lineno, m):
statements.append(AddTaskNode(filename, lineno, func, before, after))
def handleDelTask(statements, filename, lineno, m):
func = m.group("func")
if func is None:
return
statements.append(DelTaskNode(filename, lineno, func))
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
@@ -336,26 +317,22 @@ def handleInherit(statements, filename, lineno, m):
statements.append(InheritNode(filename, lineno, classes))
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
for var in d.getVar('__BBHANDLERS', False) or []:
all_handlers = {}
for var in d.getVar('__BBHANDLERS') or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
bb.event.register(var, d.getVar(var), (d.getVarFlag(var, "eventmask", True) or "").split())
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
tasklist = d.getVar('__BBTASKS', False) or []
bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d)
tasklist = d.getVar('__BBTASKS') or []
bb.build.add_tasks(tasklist, d)
bb.parse.siggen.finalise(fn, d, variant)
@@ -363,28 +340,46 @@ def finalize(fn, d, variant = None):
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
bb.event.fire(bb.event.RecipeParsed(fn), d)
bb.event.set_handlers(saved_handlers)
def _create_variants(datastores, names, function, onlyfinalise):
def _create_variants(datastores, names, function):
def create_variant(name, orig_d, arg = None):
if onlyfinalise and name not in onlyfinalise:
return
new_d = bb.data.createCopy(orig_d)
function(arg or name, new_d)
datastores[name] = new_d
for variant in list(datastores.keys()):
for variant, variant_d in datastores.items():
for name in names:
if not variant:
# Based on main recipe
create_variant(name, datastores[""])
create_variant(name, variant_d)
else:
create_variant("%s-%s" % (variant, name), datastores[variant], name)
create_variant("%s-%s" % (variant, name), variant_d, name)
def _expand_versions(versions):
def expand_one(version, start, end):
for i in xrange(start, end + 1):
ver = _bbversions_re.sub(str(i), version, 1)
yield ver
versions = iter(versions)
while True:
try:
version = next(versions)
except StopIteration:
break
range_ver = _bbversions_re.search(version)
if not range_ver:
yield version
else:
newversions = expand_one(version, int(range_ver.group("from")),
int(range_ver.group("to")))
versions = itertools.chain(newversions, versions)
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND") or "").split()
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
@@ -393,11 +388,55 @@ def multi_finalize(fn, d):
d = bb.data.createCopy(safe_d)
try:
finalize(fn, d)
except bb.parse.SkipRecipe as e:
except bb.parse.SkipPackage as e:
d.setVar("__SKIPPED", e.args[0])
datastores = {"": safe_d}
extended = d.getVar("BBCLASSEXTEND") or ""
versions = (d.getVar("BBVERSIONS", True) or "").split()
if versions:
pv = orig_pv = d.getVar("PV", True)
baseversions = {}
def verfunc(ver, d, pv_d = None):
if pv_d is None:
pv_d = d
overrides = d.getVar("OVERRIDES", True).split(":")
pv_d.setVar("PV", ver)
overrides.append(ver)
bpv = baseversions.get(ver) or orig_pv
pv_d.setVar("BPV", bpv)
overrides.append(bpv)
d.setVar("OVERRIDES", ":".join(overrides))
versions = list(_expand_versions(versions))
for pos, version in enumerate(list(versions)):
try:
pv, bpv = version.split(":", 2)
except ValueError:
pass
else:
versions[pos] = pv
baseversions[pv] = bpv
if pv in versions and not baseversions.get(pv):
versions.remove(pv)
else:
pv = versions.pop()
# This is necessary because our existing main datastore
# has already been finalized with the old PV, we need one
# that's been finalized with the new PV.
d = bb.data.createCopy(safe_d)
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
d.setVar("__SKIPPED", e.args[0])
_create_variants(datastores, versions, verfunc)
extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with arguments, for e.g. multilib
# an example is as follows:
@@ -415,7 +454,7 @@ def multi_finalize(fn, d):
else:
extendedmap[ext] = ext
pn = d.getVar("PN")
pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
@@ -425,15 +464,19 @@ def multi_finalize(fn, d):
bb.parse.BBHandler.inherit(extendedmap[name], fn, 0, d)
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc, onlyfinalise)
_create_variants(datastores, extendedmap.keys(), extendfunc)
for variant in datastores.keys():
for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, datastores[variant], variant)
except bb.parse.SkipRecipe as e:
datastores[variant].setVar("__SKIPPED", e.args[0])
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage as e:
variant_d.setVar("__SKIPPED", e.args[0])
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d
return datastores
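
The multi_finalize() hunks above handle BBVERSIONS: _expand_versions() rewrites any entry containing a numeric range into one entry per concrete version, and a fresh copy of the datastore is then finalised for each resulting version (and for each BBCLASSEXTEND variant). A minimal standalone sketch of that range expansion follows; the "[from-to]" pattern and the expand_versions() name are illustrative assumptions, not the exact _bbversions_re.

# Illustrative sketch, not part of the diff above.
# Assumes a "[<from>-<to>]" range syntax similar to what BBVERSIONS accepts.
import itertools
import re

_range_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

def expand_versions(versions):
    """Yield each version string, expanding one numeric range at a time."""
    def expand_one(version, start, end):
        for i in range(start, end + 1):
            # Substitute only the first range so nested ranges expand lazily.
            yield _range_re.sub(str(i), version, 1)

    versions = iter(versions)
    while True:
        try:
            version = next(versions)
        except StopIteration:
            break
        m = _range_re.search(version)
        if not m:
            yield version
        else:
            expanded = expand_one(version, int(m.group("from")), int(m.group("to")))
            # Push the expanded entries back so they are re-examined in turn.
            versions = itertools.chain(expanded, versions)

print(list(expand_versions(["1.[0-2]", "2.0"])))   # ['1.0', '1.1', '1.2', '2.0']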


@@ -25,14 +25,14 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
from bb import data
from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .. import resolve_file, ast, logger
from .ConfHandler import include, init
# For compatibility
@@ -42,31 +42,41 @@ __func_start_regexp__ = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*
__inherit_regexp__ = re.compile( r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile( r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile("addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__deltask_regexp__ = re.compile("deltask\s+(?P<func>\w+)")
__addhandler_regexp__ = re.compile( r"addhandler\s+(.+)" )
__def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )
__infunc__ = []
__infunc__ = ""
__inpython__ = False
__body__ = []
__classname__ = ""
cached_statements = {}
# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of question.
# We will use the IN_PYTHON_EOF as an indicator to just close the method
#
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999
def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
files = d.expand(files).split()
for file in files:
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)
if not os.path.isabs(file):
bbpath = d.getVar("BBPATH")
dname = os.path.dirname(fn)
bbpath = "%s:%s" % (dname, d.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
@@ -75,11 +85,11 @@ def inherit(files, fn, lineno, d):
file = abs_fn
if not file in __inherit_cache:
logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
logger.log(logging.DEBUG -1, "BB %s:%d: inheriting %s", fn, lineno, file)
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements
@@ -87,20 +97,20 @@ def get_statements(filename, absolute_filename, base_name):
try:
return cached_statements[absolute_filename]
except KeyError:
with open(absolute_filename, 'r') as f:
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = f.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file = open(absolute_filename, 'r')
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = file.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
file.close()
if __inpython__:
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
feeder(IN_PYTHON_EOF, "", filename, base_name, statements)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
@@ -109,23 +119,29 @@ def get_statements(filename, absolute_filename, base_name):
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__infunc__ = ""
__classname__ = ""
__residue__ = []
if include == 0:
logger.debug(2, "BB %s: handle(data)", fn)
else:
logger.debug(2, "BB %s: handle(data, include)", fn)
base_name = os.path.basename(fn)
(root, ext) = os.path.splitext(base_name)
init(d)
if ext == ".bbclass":
__classname__ = root
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
d.setVar('__inherit_cache', __inherit_cache)
if include != 0:
oldfile = d.getVar('FILE', False)
oldfile = d.getVar('FILE')
else:
oldfile = None
@@ -138,36 +154,31 @@ def handle(fn, d, include):
statements = get_statements(fn, abs_fn, base_name)
# DONE WITH PARSING... time to evaluate
if ext != ".bbclass" and abs_fn != oldfile:
if ext != ".bbclass":
d.setVar('FILE', abs_fn)
try:
statements.eval(d)
except bb.parse.SkipRecipe:
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
if include == 0:
return { "" : d }
if __infunc__:
raise ParseError("Shell function %s is never closed" % __infunc__[0], __infunc__[1], __infunc__[2])
if __residue__:
raise ParseError("Leftover unparsed (incomplete?) data %s from %s" % __residue__, fn)
if ext != ".bbclass" and include == 0:
return ast.multi_finalize(fn, d)
if ext != ".bbclass" and oldfile and abs_fn != oldfile:
if oldfile:
d.setVar("FILE", oldfile)
return d
def feeder(lineno, s, fn, root, statements, eof=False):
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__[0], __body__, __infunc__[3], __infunc__[4])
__infunc__ = []
ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
__infunc__ = ""
__body__ = []
else:
__body__.append(s)
@@ -175,7 +186,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if __inpython__:
m = __python_func_regexp__.match(s)
if m and not eof:
if m and lineno != IN_PYTHON_EOF:
__body__.append(s)
return
else:
@@ -184,7 +195,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
__body__ = []
__inpython__ = False
if eof:
if lineno == IN_PYTHON_EOF:
return
if s and s[0] == '#':
@@ -211,7 +222,8 @@ def feeder(lineno, s, fn, root, statements, eof=False):
m = __func_start_regexp__.match(s)
if m:
__infunc__ = [m.group("func") or "__anonymous", fn, lineno, m.group("py") is not None, m.group("fr") is not None]
__infunc__ = m.group("func") or "__anonymous"
ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
return
m = __def_regexp__.match(s)
@@ -231,11 +243,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleAddTask(statements, fn, lineno, m)
return
m = __deltask_regexp__.match(s)
if m:
ast.handleDelTask(statements, fn, lineno, m)
return
m = __addhandler_regexp__.match(s)
if m:
ast.handleBBHandlers(statements, fn, lineno, m)
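
The parser hunks above mostly change how feeder() tracks the function currently being read (__infunc__ as a list carrying name, file, line and the python/fakeroot flags on one side, a bare name string on the other) and how end-of-input is signalled (an eof argument versus the IN_PYTHON_EOF sentinel). The sketch below shows the underlying line-feeder idea in isolation: accumulate a shell function body up to its closing brace. The opener regex is a simplified assumption, not BitBake's real __func_start_regexp__.

# Illustrative sketch, not part of the diff above.
# Simplified stand-in for feeder(): collect shell function bodies from lines.
import re

# Assumed, simplified "name() {" opener; the real grammar also handles the
# python and fakeroot keywords and anonymous functions.
_func_start_re = re.compile(r"(?P<func>[\w.~+-]+)?\s*\(\s*\)\s*\{\s*$")

def collect_functions(lines):
    """Return {function name: list of body lines} from recipe-like text."""
    functions = {}
    infunc = None          # name of the function currently being read
    body = []
    for line in lines:
        if infunc is not None:
            if line.strip() == "}":
                functions[infunc] = body
                infunc, body = None, []
            else:
                body.append(line)
            continue
        m = _func_start_re.match(line.strip())
        if m:
            infunc = m.group("func") or "__anonymous"
    return functions

recipe = [
    "do_install() {",
    "    install -d ${D}${bindir}",
    "}",
]
print(collect_functions(recipe))   # {'do_install': ['    install -d ${D}${bindir}']}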


@@ -24,16 +24,15 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import errno
import re
import os
import re, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger, handle
from bb.parse import ParseError, resolve_file, ast, logger
__config_regexp__ = re.compile( r"""
^
(?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(?P<exp>export\s*)?
(?P<var>[a-zA-Z0-9\-~_+.${}/]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
\s* (
@@ -56,12 +55,10 @@ __config_regexp__ = re.compile( r"""
""", re.X)
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
def init(data):
topdir = data.getVar('TOPDIR', False)
topdir = data.getVar('TOPDIR')
if not topdir:
data.setVar('TOPDIR', os.getcwd())
@@ -69,51 +66,40 @@ def init(data):
def supports(fn, d):
return fn[-5:] == ".conf"
def include(parentfn, fns, lineno, data, error_out):
def include(oldfn, fn, lineno, data, error_out):
"""
error_out: A string indicating the verb (e.g. "include", "inherit") to be
used in a ParseError that will be raised if the file to be included could
not be included. Specify False to avoid raising an error in this case.
"""
fns = data.expand(fns)
parentfn = data.expand(parentfn)
# "include" or "require" accept zero to n space-separated file names to include.
for fn in fns.split():
include_single_file(parentfn, fn, lineno, data, error_out)
def include_single_file(parentfn, fn, lineno, data, error_out):
"""
Helper function for include() which does not expand or split its parameters.
"""
if parentfn == fn: # prevent infinite recursion
if oldfn == fn: # prevent infinite recursion
return None
import bb
fn = data.expand(fn)
oldfn = data.expand(oldfn)
if not os.path.isabs(fn):
dname = os.path.dirname(parentfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH"))
dname = os.path.dirname(oldfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, fn, history=True)
if abs_fn and bb.parse.check_dependency(data, abs_fn):
logger.warning("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE')))
bb.warn("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE', True)))
for af in attempts:
bb.parse.mark_dependency(data, af)
if abs_fn:
fn = abs_fn
elif bb.parse.check_dependency(data, fn):
logger.warning("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE')))
bb.warn("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))
from bb.parse import handle
try:
bb.parse.handle(fn, data, True)
except (IOError, OSError) as exc:
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
else:
raise ParseError("Error parsing %s: %s" % (fn, exc.strerror), parentfn, lineno)
ret = handle(fn, data, True)
except (IOError, OSError):
if error_out:
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
bb.parse.mark_dependency(data, fn)
# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
@@ -129,7 +115,7 @@ def handle(fn, data, include):
if include == 0:
oldfile = None
else:
oldfile = data.getVar('FILE', False)
oldfile = data.getVar('FILE')
abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')
@@ -158,7 +144,7 @@ def handle(fn, data, include):
# skip comments
if s[0] == '#':
continue
feeder(lineno, s, abs_fn, statements)
feeder(lineno, s, fn, statements)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
@@ -195,16 +181,6 @@ def feeder(lineno, s, fn, statements):
ast.handleExport(statements, fn, lineno, m)
return
m = __unset_regexp__.match(s)
if m:
ast.handleUnset(statements, fn, lineno, m)
return
m = __unset_flag_regexp__.match(s)
if m:
ast.handleUnsetFlag(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list
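
The configuration-parser hunks above differ mainly in __config_regexp__ (the '~' character and the export spacing), in the export/unset regexes, and in how include() expands and splits its file arguments. The sketch below parses a single assignment line with a heavily simplified pattern; it only understands an optional export prefix, an optional [flag], and a double-quoted value, so treat it as an assumption-laden stand-in for the real grammar.

# Illustrative sketch, not part of the diff above.
# Heavily simplified conf-line parser: export VAR[flag] = "value"
import re

_conf_re = re.compile(
    r'^(?P<exp>export\s+)?'
    r'(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)'
    r'(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?'
    r'\s*=\s*"(?P<value>[^"]*)"\s*$'
)

def parse_conf_line(line):
    m = _conf_re.match(line)
    if not m:
        raise ValueError("unparsed line: %r" % line)
    return {
        "var": m.group("var"),
        "flag": m.group("flag"),
        "value": m.group("value"),
        "export": bool(m.group("exp")),
    }

print(parse_conf_line('PREFERRED_VERSION_linux-yocto = "3.10%"'))
print(parse_conf_line('do_install[dirs] = "${D}"'))
# The second line yields var='do_install', flag='dirs', value='${D}'.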


@@ -28,7 +28,11 @@ import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import sqlite3
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -88,9 +92,9 @@ class SQLTable(collections.MutableMapping):
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
if not isinstance(key, str):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, str):
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')
data = self._execute("SELECT * from %s where key=?;" %
@@ -174,7 +178,7 @@ class PersistData(object):
"""
Return a list of key + value pairs for a domain
"""
return list(self.data[domain].items())
return self.data[domain].items()
def getValue(self, domain, key):
"""
@@ -195,16 +199,13 @@ class PersistData(object):
del self.data[domain][key]
def connect(database):
connection = sqlite3.connect(database, timeout=5, isolation_level=None)
connection.execute("pragma synchronous = off;")
connection.text_factory = str
return connection
return sqlite3.connect(database, timeout=5, isolation_level=None)
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR") or
d.getVar("CACHE"))
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)
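
The persistence hunks above differ in the pysqlite2 fallback, the str-versus-basestring type checks in SQLTable.__setitem__(), and whether connection setup (synchronous off, text_factory) lives in a connect() helper. A minimal sqlite3-backed string-to-string mapping in the same spirit is sketched below; the class name and table name are invented for the example.

# Illustrative sketch, not part of the diff above.
# Minimal sqlite3-backed string -> string mapping in the spirit of SQLTable.
import sqlite3

class TinyKV:
    def __init__(self, path, table="persist"):
        self.table = table
        self.conn = sqlite3.connect(path, timeout=5, isolation_level=None)
        self.conn.execute("pragma synchronous = off;")
        self.conn.text_factory = str
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS %s(key TEXT PRIMARY KEY, value TEXT);"
            % self.table)

    def __setitem__(self, key, value):
        if not isinstance(key, str) or not isinstance(value, str):
            raise TypeError('Only string keys and values are supported')
        self.conn.execute(
            "INSERT OR REPLACE INTO %s(key, value) VALUES (?, ?);" % self.table,
            (key, value))

    def __getitem__(self, key):
        row = self.conn.execute(
            "SELECT value FROM %s WHERE key=?;" % self.table, (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

kv = TinyKV(":memory:")
kv["BB_SRCREV_example"] = "abc123"
print(kv["BB_SRCREV_example"])   # abc123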


@@ -17,7 +17,7 @@ class CmdError(RuntimeError):
self.msg = msg
def __str__(self):
if not isinstance(self.command, str):
if not isinstance(self.command, basestring):
cmd = subprocess.list2cmdline(self.command)
else:
cmd = self.command
@@ -64,7 +64,7 @@ class Popen(subprocess.Popen):
options.update(kwargs)
subprocess.Popen.__init__(self, *args, **options)
def _logged_communicate(pipe, log, input, extrafiles):
def _logged_communicate(pipe, log, input):
if pipe.stdin:
if input is not None:
pipe.stdin.write(input)
@@ -79,82 +79,40 @@ def _logged_communicate(pipe, log, input, extrafiles):
if pipe.stderr is not None:
bb.utils.nonblockingfd(pipe.stderr.fileno())
rin.append(pipe.stderr)
for fobj, _ in extrafiles:
bb.utils.nonblockingfd(fobj.fileno())
rin.append(fobj)
def readextras(selected):
for fobj, func in extrafiles:
if fobj in selected:
try:
data = fobj.read()
except IOError as err:
if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
data = None
if data is not None:
func(data)
def read_all_pipes(log, rin, outdata, errdata):
rlist = rin
stdoutbuf = b""
stderrbuf = b""
try:
r,w,e = select.select (rlist, [], [], 1)
except OSError as e:
if e.errno != errno.EINTR:
raise
readextras(r)
if pipe.stdout in r:
data = stdoutbuf + pipe.stdout.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
outdata.append(data)
log.write(data)
log.flush()
stdoutbuf = b""
except UnicodeDecodeError:
stdoutbuf = data
if pipe.stderr in r:
data = stderrbuf + pipe.stderr.read()
if data is not None and len(data) > 0:
try:
data = data.decode("utf-8")
errdata.append(data)
log.write(data)
log.flush()
stderrbuf = b""
except UnicodeDecodeError:
stderrbuf = data
try:
# Read all pipes while the process is open
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
rlist = rin
try:
r,w,e = select.select (rlist, [], [])
except OSError as e:
if e.errno != errno.EINTR:
raise
# Process closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
if pipe.stdout in r:
data = pipe.stdout.read()
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr in r:
data = pipe.stderr.read()
if data is not None:
errdata.append(data)
log.write(data)
finally:
log.flush()
if pipe.stdout is not None:
pipe.stdout.close()
if pipe.stderr is not None:
pipe.stderr.close()
return ''.join(outdata), ''.join(errdata)
def run(cmd, input=None, log=None, extrafiles=None, **options):
def run(cmd, input=None, log=None, **options):
"""Convenience function to run a command and return its output, raising an
exception when the command fails"""
if not extrafiles:
extrafiles = []
if isinstance(cmd, str) and not "shell" in options:
if isinstance(cmd, basestring) and not "shell" in options:
options["shell"] = True
try:
@@ -166,13 +124,9 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
raise CmdError(cmd, exc)
if log:
stdout, stderr = _logged_communicate(pipe, log, input, extrafiles)
stdout, stderr = _logged_communicate(pipe, log, input)
else:
stdout, stderr = pipe.communicate(input)
if not stdout is None:
stdout = stdout.decode("utf-8")
if not stderr is None:
stderr = stderr.decode("utf-8")
if pipe.returncode != 0:
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
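
The _logged_communicate() hunks above differ in the extrafiles plumbing, the per-pipe UTF-8 decode buffers and the drain-after-exit pass on one side, versus a single select() loop on the other. The sketch below reduces the idea to its core: run a command and tee its output into a log object while also capturing it. Stderr separation, extra file objects and non-blocking I/O are deliberately left out.

# Illustrative sketch, not part of the diff above.
# Tee a child's output into a log object while also capturing it.
import subprocess
import sys

def run_logged(cmd, log):
    pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    outdata = []
    for line in pipe.stdout:                  # bytes, until the child closes stdout
        text = line.decode("utf-8", errors="replace")
        outdata.append(text)
        log.write(text)
        log.flush()
    returncode = pipe.wait()
    stdout = "".join(outdata)
    if returncode != 0:
        raise RuntimeError("%r failed with exit code %d:\n%s"
                           % (cmd, returncode, stdout))
    return stdout

print(run_logged(["echo", "hello"], log=sys.stderr), end="")   # prints "hello"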


@@ -1,276 +0,0 @@
"""
BitBake progress handling code
"""
# Copyright (C) 2016 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import re
import time
import inspect
import bb.event
import bb.build
class ProgressHandler(object):
"""
Base class that can pretend to be a file object well enough to be
used to build objects to intercept console output and determine the
progress of some operation.
"""
def __init__(self, d, outfile=None):
self._progress = 0
self._data = d
self._lastevent = 0
if outfile:
self._outfile = outfile
else:
self._outfile = sys.stdout
def _fire_progress(self, taskprogress, rate=None):
"""Internal function to fire the progress event"""
bb.event.fire(bb.build.TaskProgress(taskprogress, rate), self._data)
def write(self, string):
self._outfile.write(string)
def flush(self):
self._outfile.flush()
def update(self, progress, rate=None):
ts = time.time()
if progress > 100:
progress = 100
if progress != self._progress or self._lastevent + 1 < ts:
self._fire_progress(progress, rate)
self._lastevent = ts
self._progress = progress
class LineFilterProgressHandler(ProgressHandler):
"""
A ProgressHandler variant that provides the ability to filter out
the lines if they contain progress information. Additionally, it
filters out anything before the last line feed on a line. This can
be used to keep the logs clean of output that we've only enabled for
getting progress, assuming that that can be done on a per-line
basis.
"""
def __init__(self, d, outfile=None):
self._linebuffer = ''
super(LineFilterProgressHandler, self).__init__(d, outfile)
def write(self, string):
self._linebuffer += string
while True:
breakpos = self._linebuffer.find('\n') + 1
if breakpos == 0:
break
line = self._linebuffer[:breakpos]
self._linebuffer = self._linebuffer[breakpos:]
# Drop any line feeds and anything that precedes them
lbreakpos = line.rfind('\r') + 1
if lbreakpos:
line = line[lbreakpos:]
if self.writeline(line):
super(LineFilterProgressHandler, self).write(line)
def writeline(self, line):
return True
class BasicProgressHandler(ProgressHandler):
def __init__(self, d, regex=r'(\d+)%', outfile=None):
super(BasicProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
percs = self._regex.findall(string)
if percs:
progress = int(percs[-1])
self.update(progress)
super(BasicProgressHandler, self).write(string)
class OutOfProgressHandler(ProgressHandler):
def __init__(self, d, regex, outfile=None):
super(OutOfProgressHandler, self).__init__(d, outfile)
self._regex = re.compile(regex)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
def write(self, string):
nums = self._regex.findall(string)
if nums:
progress = (float(nums[-1][0]) / float(nums[-1][1])) * 100
self.update(progress)
super(OutOfProgressHandler, self).write(string)
class MultiStageProgressReporter(object):
"""
Class which allows reporting progress without the caller
having to know where they are in the overall sequence. Useful
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level and calls
to next_stage() and update() made wherever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""
Initialise the progress reporter.
Parameters:
* d: the datastore (needed for firing the events)
* stage_weights: a list of weight values, one for each stage.
The value is scaled internally so you only need to specify
values relative to other values in the list, so if there
are two stages and the first takes 2s and the second takes
10s you would specify [2, 10] (or [1, 5], it doesn't matter).
* debug: specify True (and ensure you call finish() at the end)
in order to show a printout of the calculated stage weights
based on timing each stage. Use this to determine what the
weights should be when you're not sure.
"""
self._data = d
total = sum(stage_weights)
self._stage_weights = [float(x)/total for x in stage_weights]
self._stage = -1
self._base_progress = 0
# Send an initial progress event so the bar gets shown
self._fire_progress(0)
self._debug = debug
self._finished = False
if self._debug:
self._last_time = time.time()
self._stage_times = []
self._stage_total = None
self._callers = []
def _fire_progress(self, taskprogress):
bb.event.fire(bb.build.TaskProgress(taskprogress), self._data)
def next_stage(self, stage_total=None):
"""
Move to the next stage.
Parameters:
* stage_total: optional total for progress within the stage,
see update() for details
NOTE: you need to call this before the first stage.
"""
self._stage += 1
self._stage_total = stage_total
if self._stage == 0:
# First stage
if self._debug:
self._last_time = time.time()
else:
if self._stage < len(self._stage_weights):
self._base_progress = sum(self._stage_weights[:self._stage]) * 100
if self._debug:
currtime = time.time()
self._stage_times.append(currtime - self._last_time)
self._last_time = currtime
self._callers.append(inspect.getouterframes(inspect.currentframe())[1])
elif not self._debug:
bb.warn('ProgressReporter: current stage beyond declared number of stages')
self._base_progress = 100
self._fire_progress(self._base_progress)
def update(self, stage_progress):
"""
Update progress within the current stage.
Parameters:
* stage_progress: progress value within the stage. If stage_total
was specified when next_stage() was last called, then this
value is considered to be out of stage_total, otherwise it should
be a percentage value from 0 to 100.
"""
if self._stage_total:
stage_progress = (float(stage_progress) / self._stage_total) * 100
if self._stage < 0:
bb.warn('ProgressReporter: update called before first call to next_stage()')
elif self._stage < len(self._stage_weights):
progress = self._base_progress + (stage_progress * self._stage_weights[self._stage])
else:
progress = self._base_progress
if progress > 100:
progress = 100
self._fire_progress(progress)
def finish(self):
if self._finished:
return
self._finished = True
if self._debug:
import math
self._stage_times.append(time.time() - self._last_time)
mintime = max(min(self._stage_times), 0.01)
self._callers.append(None)
stage_weights = [int(math.ceil(x / mintime)) for x in self._stage_times]
bb.warn('Stage weights: %s' % stage_weights)
out = []
for stage_weight, caller in zip(stage_weights, self._callers):
if caller:
out.append('Up to %s:%d: %d' % (caller[1], caller[2], stage_weight))
else:
out.append('Up to finish: %d' % stage_weight)
bb.warn('Stage times:\n %s' % '\n '.join(out))
class MultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
Version of MultiStageProgressReporter intended for use with
standalone processes (such as preparing the runqueue)
"""
def __init__(self, d, processname, stage_weights, debug=False):
self._processname = processname
self._started = False
MultiStageProgressReporter.__init__(self, d, stage_weights, debug)
def start(self):
if not self._started:
bb.event.fire(bb.event.ProcessStarted(self._processname, 100), self._data)
self._started = True
def _fire_progress(self, taskprogress):
if taskprogress == 0:
self.start()
return
bb.event.fire(bb.event.ProcessProgress(self._processname, taskprogress), self._data)
def finish(self):
MultiStageProgressReporter.finish(self)
bb.event.fire(bb.event.ProcessFinished(self._processname), self._data)
class DummyMultiStageProcessProgressReporter(MultiStageProgressReporter):
"""
MultiStageProcessProgressReporter that takes the calls and does nothing
with them (to avoid a bunch of "if progress_reporter:" checks)
"""
def __init__(self):
MultiStageProcessProgressReporter.__init__(self, "", None, [])
def _fire_progress(self, taskprogress, rate=None):
pass
def start(self):
pass
def next_stage(self, stage_total=None):
pass
def update(self, stage_progress):
pass
def finish(self):
pass
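
The 276-line module deleted above is the progress-reporting code, which exists only on the newer side of this compare (its copyright is 2016, well after dora). Its MultiStageProgressReporter docstrings describe how the stage weights are normalised and combined with per-stage progress; the small function below just illustrates that arithmetic, and its name is invented for the example.

# Illustrative sketch, not part of the diff above.
# The stage-weight arithmetic described in MultiStageProgressReporter's docstring:
# weights are normalised, and overall progress is the share of the completed
# stages plus the weighted progress within the current stage.
def overall_progress(stage_weights, current_stage, stage_progress):
    """current_stage is 0-based; stage_progress is a 0-100 percentage."""
    total = sum(stage_weights)
    weights = [float(w) / total for w in stage_weights]
    base = sum(weights[:current_stage]) * 100
    return min(base + stage_progress * weights[current_stage], 100)

# Two stages taking roughly 2s and 10s -> weights [2, 10].
print(overall_progress([2, 10], 0, 50))   # ~8.3  (halfway through stage 1)
print(overall_progress([2, 10], 1, 50))   # ~58.3 (halfway through stage 2)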


@@ -48,6 +48,7 @@ def findProviders(cfgData, dataCache, pkg_pn = None):
# Need to ensure data store is expanded
localdata = data.createCopy(cfgData)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
preferred_versions = {}
@@ -120,14 +121,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_file = None
preferred_ver = None
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION")
localdata = data.createCopy(cfgData)
localdata.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn))
bb.data.update_data(localdata)
preferred_v = localdata.getVar('PREFERRED_VERSION', True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
@@ -225,7 +223,7 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
def _filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
@@ -244,7 +242,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, list(pkg_pn.keys()))
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
# First add PREFERRED_VERSIONS
for pn in pkg_pn:
@@ -282,13 +280,13 @@ def _filterProviders(providers, item, cfgData, dataCache):
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "normal" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item, True)
if prefervar:
dataCache.preferred[item] = prefervar
@@ -310,56 +308,38 @@ def filterProviders(providers, item, cfgData, dataCache):
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "runtime" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
# First try and match any PREFERRED_RPROVIDER entry
prefervar = cfgData.getVar('PREFERRED_RPROVIDER_%s' % item)
foundUnique = False
if prefervar:
for p in eligible:
pn = dataCache.pkg_fn[p]
if prefervar == pn:
logger.verbose("selecting %s to satisfy %s due to PREFERRED_RPROVIDER", pn, item)
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
numberPreferred = 1
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
# If we didn't find an RPROVIDER entry, try and infer the provider from PREFERRED_PROVIDER entries
# by looking through the provides of each eligible recipe and seeing if a PREFERRED_PROVIDER was set.
# This is most useful for virtual/ entries rather than having a RPROVIDER per entry.
if not foundUnique:
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
numberPreferred = len(preferred)
numberPreferred = len(preferred)
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)
@@ -399,32 +379,3 @@ def getRuntimeProviders(dataCache, rdepend):
logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
def buildWorldTargetList(dataCache, task=None):
"""
Build package list for "bitbake world"
"""
if dataCache.world_target:
return
logger.debug(1, "collating packages for \"world\"")
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
if task and task not in dataCache.task_deps[f]['tasks']:
logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
terminal = False
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in dataCache.providers[p]:
if dataCache.pkg_fn[pf] != pn:
logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:
dataCache.world_target.add(pn)
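
The provider-selection hunks above show how a PREFERRED_PROVIDER_<provides> match is moved to the front of the eligible providers list during runtime dependency resolution. The sketch below mirrors that reordering step, with plain dicts standing in for the datastore and cache; all of the names are invented for the example.

# Illustrative sketch, not part of the diff above.
# Move providers named by PREFERRED_PROVIDER_<provides> to the front of the
# eligible list, mirroring the reordering done in filterProvidersRunTime().
def prefer_providers(eligible, pn_provides, preferred_provider):
    """eligible: recipe names; pn_provides: {recipe: [provides, ...]};
    preferred_provider: {provides: preferred recipe name}."""
    preferred = []
    for recipe in list(eligible):
        for provide in pn_provides.get(recipe, []):
            want = preferred_provider.get(provide)
            if want in eligible and want not in preferred:
                eligible.remove(want)
                eligible.insert(0, want)
                preferred.append(want)
                break
    return eligible, preferred

eligible = ["busybox", "coreutils"]
pn_provides = {"busybox": ["virtual/base-utils"], "coreutils": ["virtual/base-utils"]}
preferred_provider = {"virtual/base-utils": "coreutils"}
print(prefer_providers(eligible, pn_provides, preferred_provider))
# (['coreutils', 'busybox'], ['coreutils'])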

Some files were not shown because too many files have changed in this diff.