Compare commits

369 Commits

Author SHA1 Message Date
Scott Rifenbark
7e613928fe documentation: Updated title page notes
Fixed the title page notes to help the user get the exact
set of documentation for the appropriate YP release.

(From yocto-docs rev: 09bcec491f9edf5a4e7dac8b6818ce22b5df163f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-12-10 20:43:22 +00:00
Daniel Lublin
331275422b bitbake: lib/bs4: Fix imports from html5lib >= 0.9999999/1.0b8
As of html5lib 0.9999999/1.0b8 (released on July 14, 2016), some modules
have moved from _base to base. Handle this, while staying compatible
with earlier versions.

(Bitbake rev: 0d80cacb2b84ee059cee3caf8a5968033b9ce3c5)

Signed-off-by: Daniel Lublin <daniel@lublin.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-03-28 12:56:43 +01:00
Scott Rifenbark
64297072e8 bitbake: bitbake-user-manual: Fixed porno hack for hello world example
The http://hambedded site was either hacked or moved, and some links to that
site in the BB manual had been hijacked to point to an entry portal for a
pornography site.  Replaced the links with an archived version that restores
their integrity.

(Bitbake rev: 919303d2e8b4ee2602b09420f40b70de091612c5)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2018-01-17 22:32:37 +00:00
Andre Rosa
ac4d3fca18 bitbake: Replace deprecated git branch parameter "--set-upstream"
Since 2017-08-17 (git version 2.14.1.473.g3ec7d702a), using the deprecated
git branch parameter "--set-upstream" causes a fetcher error. Replace
it with "--set-upstream-to".

https://git.kernel.org/pub/scm/git/git.git/commit/?id=52668846ea2d41ffbd87cda7cb8e492dea9f2c4d
says it has been deprecated since 2012-08-30, so hopefully all still-supported
host distributions have a new enough git to support "--set-upstream-to".

ERROR: PACKAGE do_unpack: Fetcher failure: ...;
git -c core.fsyncobjectfiles=0 branch --set-upstream master origin/master failed with exit code 128, output:
fatal: the '--set-upstream' option is no longer supported. Please use '--track' or '--set-upstream-to' instead.

ERROR: PACKAGE do_unpack: Function failed: base_do_unpack
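For illustration, the deprecated invocation (the one shown in the error above)
and an equivalent using the new option look like this; the branch names are
just examples, not the fetcher's exact command line:

  # deprecated form, rejected by newer git
  git branch --set-upstream master origin/master
  # replacement form
  git branch --set-upstream-to=origin/master master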

(Bitbake rev: 68d061d2517f1a79dc6b14a373ed2dcb78a901ce)

Signed-off-by: Andre Rosa <andre.rosa@lge.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2ab50074c1a6c56a8a178755de108447d7b7acaf)
Signed-off-by: Javier Viguera <javier.viguera@digi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-11-07 13:40:38 +00:00
Ross Burton
3f55846839 wpa_supplicant: fix WPA2 key replay security bug
WPA2 is vulnerable to replay attacks which result in unauthenticated users
having access to the network.

* CVE-2017-13077: reinstallation of the pairwise key in the Four-way handshake

* CVE-2017-13078: reinstallation of the group key in the Four-way handshake

* CVE-2017-13079: reinstallation of the integrity group key in the Four-way
handshake

* CVE-2017-13080: reinstallation of the group key in the Group Key handshake

* CVE-2017-13081: reinstallation of the integrity group key in the Group Key
handshake

* CVE-2017-13082: accepting a retransmitted Fast BSS Transition Reassociation
Request and reinstalling the pairwise key while processing it

* CVE-2017-13086: reinstallation of the Tunneled Direct-Link Setup (TDLS)
PeerKey (TPK) key in the TDLS handshake

* CVE-2017-13087: reinstallation of the group key (GTK) when processing a
Wireless Network Management (WNM) Sleep Mode Response frame

* CVE-2017-13088: reinstallation of the integrity group key (IGTK) when
processing a Wireless Network Management (WNM) Sleep Mode Response frame

Backport patches from upstream to resolve these CVEs.

(From OE-Core rev: 6af6e285e8bed16b02dee27c8466e9f4f9f21e30)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-11-03 12:28:27 +00:00
Derek Straka
e08994ce95 bitbake: bitbake: fetch2/gitsm: Fix fetch when the repository contains nested submodules
This fixes a problem when the repository contains multiple levels of submodules, via a recursive submodule init.
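For reference, the recursive initialisation this corresponds to on the
command line (shown here outside the fetcher, for illustration only) is:

  git submodule update --init --recursive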

(Bitbake rev: bc57798ff39cae5ffea194c867e07136f7b6f3ec)

Signed-off-by: Derek Straka <derek@asterius.io>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-01-12 17:46:35 +00:00
Felipe F. Tonello
1ae880e253 bitbake: fetch2/gitsm: Fix when repository changes submodules
This fixes a problem when checking out a commit that changes the submodules
previously checked out.

Example:
A recipe uses branch A and then updates to use branch B, but branch B has
different submodule dependencies than branch A previously had.

(Bitbake rev: 12f6c0651af8bd5d6efb751690571cf2fcd3eeb0)

Signed-off-by: Felipe F. Tonello <eu@felipetonello.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2017-01-12 17:46:35 +00:00
Richard Purdie
adb34b8ddc build-appliance-image: Update to jethro head revision
(From OE-Core rev: a9db40da62c13b0010ce5afc1fde16d987bdfbc6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:49:08 +00:00
Robert Yang
a20868079c poky.conf: Bump version for 2.0.3 jethro release
(From meta-yocto rev: 492121940d37a72cf7cbe18472a0471fdaba29ff)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:48:22 +00:00
Armin Kuster
1ff7aee3da tzdata: update to 2016i
Briefly: Cyprus split into two time zones on 2016-10-30, and Tonga
  reintroduces DST on 2016-11-06.

  Changes to future time stamps

    Pacific/Tongatapu begins DST on 2016-11-06 at 02:00, ending on
    2017-01-15 at 03:00.  Assume future observances in Tonga will be
    from the first Sunday in November through the third Sunday in
    January, like Fiji.  (Thanks to Pulu ʻAnau.)  Switch to numeric
    time zone abbreviations for this zone.

  Changes to past and future time stamps

    Northern Cyprus is now +03 year round, causing a split in Cyprus
    time zones starting 2016-10-30 at 04:00.  This creates a zone
    Asia/Famagusta.  (Thanks to Even Scharning and Matt Johnson.)

    Antarctica/Casey switched from +08 to +11 on 2016-10-22.
    (Thanks to Steffen Thorsen.)

  Changes to past time stamps

    Several corrections were made for pre-1975 time stamps in Italy.
    These affect Europe/Malta, Europe/Rome, Europe/San_Marino, and
    Europe/Vatican.

    First, the 1893-11-01 00:00 transition in Italy used the new UT
    offset (+01), not the old (+00:49:56).  (Thanks to Michael
    Deckers.)

    Second, rules for daylight saving in Italy were changed to agree
    with Italy's National Institute of Metrological Research (INRiM)
    except for 1944, as follows (thanks to Pierpaolo Bernardi, Brian
    Inglis, and Michael Deckers):

      The 1916-06-03 transition was at 24:00, not 00:00.

      The 1916-10-01, 1919-10-05, and 1920-09-19 transitions were at
      00:00, not 01:00.

      The 1917-09-30 and 1918-10-06 transitions were at 24:00, not
      01:00.

      The 1944-09-17 transition was at 03:00, not 01:00.  This
      particular change is taken from Italian law, as INRiM's table
      (which says 02:00) appears to have a typo here.  Also, keep the
      1944-04-03 transition for Europe/Rome, as Rome was controlled by
      Germany then.

      The 1967-1970 and 1972-1974 fallback transitions were at 01:00,
      not 00:00.

(From OE-Core rev: daf95f7fd9f7ab65685d7b764d8e50df8d00d308)

(From OE-Core rev: c6e18b6734108c233afc1a188bc58c0e5287c60d)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:45 +00:00
Armin Kuster
2e4a7df41c tzcode: update to 2016i
Changes to code

  The code should now be buildable on AmigaOS merely by setting the
  appropriate Makefile variables.  (From a patch by Carsten Larsen.)

(From OE-Core rev: d2b8c4ee535684f5d874082a7f76efbda1907ea5)

(From OE-Core rev: 04de62b4edbe57310cd0b0857a7b0d08b885c38a)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:45 +00:00
Armin Kuster
a778a2b6db tzdata: Update to 2016h
Changes to future time stamps

    Asia/Gaza and Asia/Hebron end DST on 2016-10-29 at 01:00, not
    2016-10-21 at 00:00.  (Thanks to Sharef Mustafa.)  Predict that
    future fall transitions will be on the last Saturday of October
    at 01:00, which is consistent with predicted spring transitions
    on the last Saturday of March.  (Thanks to Tim Parenti.)

Changes to past time stamps

    In Turkey, transitions in 1986-1990 were at 01:00 standard time
    not at 02:00, and the spring 1994 transition was on March 20, not
    March 27.  (Thanks to Kıvanç Yazan.)

Changes to past and future time zone abbreviations

    Asia/Colombo now uses numeric time zone abbreviations like "+0530"
    instead of alphabetic ones like "IST" and "LKT".  Various
    English-language sources use "IST", "LKT" and "SLST", with no
    working consensus.  (Usage of "SLST" mentioned by Sadika
    Sumanapala.)

(From OE-Core rev: ff11ca44fec8e4b2aa523e032bd967e3ab8339a8)

(From OE-Core rev: 1f1510e054a1643e9ec9cea6bc96288f9802bfbb)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:45 +00:00
Armin Kuster
7b85e8c29c tzcode-native: update to 2016h
Changes to code

zic no longer mishandles relativizing file names when creating
symbolic links like /etc/localtime, when these symbolic links
are outside the usual directory hierarchy.  This fixes a bug
introduced in 2016g.  (Problem reported by Andreas Stieger.)

(From OE-Core rev: 9c5de646e01a83219be74e99dcf7c1e56ba38b53)

(From OE-Core rev: 491cddc2f9e2557897a0ee254702bd83624c104c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:45 +00:00
Armin Kuster
ba4fbd376d python-2.7: Security fix CVE-2016-1000110
affects python-2.7 < 2.7.12

(From OE-Core rev: eda260094a793f96ee0b8a79d3266f64797ccc8d)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:45 +00:00
Armin Kuster
70799fb931 python-2.7: Security fix CVE-2016-5699
affects python-2.7 < 2.7.10

(From OE-Core rev: 1b16f5238460f65168851d5cdf74e7e0e64f6bdf)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
6976f01adc python-2.7: Security fix CVE-2016-5636
Affects python-2.7 < 2.7.12

(From OE-Core rev: d25b86ce8f2712d02bb7cde78d7f9ea5a57a7770)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
867babeb6f python-2.7: Security fix CVE-2016-0772
Affects python < 2.7.12

(From OE-Core rev: dd1a22f4beeb4100388efdc072e7cff2025535a7)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
96c1644d0d openssl: Security fix CVE-2016-8610
affects openssl < 1.0.2i

(From OE-Core rev: 0256b61cdafe540edb3cec2a34429e24b037cfae)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
9e1ca0ba84 openssl: Security fix CVE-2016-2179
affects openssl < 1.0.2i

(From OE-Core rev: 31e8b48da540d357ac0e7ac17ff41d7eadf4f963)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
a37112a3bc bind: Security fix CVE-2016-2776
affects bind < 9.10.4-p3

(From OE-Core rev: 57b4c03b263f2ad056d7973038662d6d6614a9de)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
d11c5d8944 bind: Security fix CVE-2016-2775
affects bind < 9.10.4-p2

(From OE-Core rev: 54bf7379036eec6d6c4399aa374f898ba3464996)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Armin Kuster
1f8eb08791 gnutls: Security fix CVE-2016-7444
affects gnutls < 3.3.24

(From OE-Core rev: c0a682cfeedfc8976324a3bba863f1d9b0127d76)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-06 22:46:44 +00:00
Scott Rifenbark
b9c389404f documentation: Updated Manual History tables for 2.0.3
The release date for 2.0.3 moved from November to December.
I updated all the manual history tables.

(From yocto-docs rev: 36a48384db5b5713a2afe744bb8efab2819e773e)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-23 11:10:41 +00:00
Scott Rifenbark
820b835e3c dev-manual: Fixed typo for "${INC_PR}.0"
The string appeared in the text as "$(INC_PR).0".  Fixed it to use
the proper curly-brace form.

(From yocto-docs rev: b29c0c44253c05b0853bfe4feabc210e67fc30c7)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-16 10:38:29 +00:00
Scott Rifenbark
6ffa151404 documentation: Updates to support 2.0.3 release in Jethro
Made the following changes to support the 2.0.3 release:

 * Updated appropriate variables in the poky.ent file
 * Updated the Manual revision tables for November of 2016
 * Updated the mega-manual.sed file to create correct strings
   for the 2.0.3 release.

(From yocto-docs rev: 4492fb46e478f3e89898d7bcc992f63d59396bd5)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-16 10:38:29 +00:00
Wenzong Fan
c1ba8e1174 gnupg: fix find-version for beta checking
find-version always assumes that gnupg is beta if autogen.sh is run
out of git-repo. This doesn't work for users who just take the release
tarball and re-run autoconf in their local build dir.

This fixes the runtime issue:

  $gpg --list-sigs
  gpg: NOTE: THIS IS A DEVELOPMENT VERSION!
  gpg: It is only intended for test purposes and should NOT be
  gpg: used in a production environment or with production keys!

(From OE-Core rev: d39e7ca717b67ad9f2f78b83d90d91e410e52965)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-11-03 17:41:08 +00:00
Mingli Yu
c3f5e64b58 perl: fix CVE-2016-1238
Backport patch to fix CVE-2016-1238 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/cee96d52c39b1e7b36e1c62d38bcd8d86e9a41ab

(From OE-Core rev: 7d06ffcbcd0c71dc6dc9efde02bf0cd8d7c7d7e3)

(From OE-Core rev: 39ef8e22b52d3f5daa853aa7866145e9c5469d4b)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Fixed up to apply to 5.20.0
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Mingli Yu
84997c7f21 perl: fix CVE-2015-8607
Backport patch to fix CVE-2015-8607 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/0b6f93036de171c12ba95d415e264d9cf7f4e1fd

(From OE-Core rev: e2289647ace9ef96e6a7e4aae201fd9149e56678)

(From OE-Core rev: d0451b2ed92867a0a2c37baded45cff997739153)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

fixed up to apply to 5.22.0
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Mingli Yu
e26f842287 perl: fix CVE-2016-6185
Backport patch to fix CVE-2016-6185 from perl upstream:
http://perl5.git.perl.org/perl.git/commitdiff/08e3451d7

(From OE-Core rev: 81e550d0c23c9842b85207cdfa73bbe9102e01fb)

(From OE-Core rev: 6c72a96e0492e71b6eb9ae72883f4087e75265f0)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

fixed up to apply against 5.22.0
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Kai Kang
2b8ab746ba perl: fix CVE-2016-2381
Backport patch to fix CVE-2016-2381 from perl upstream:

http://perl5.git.perl.org/perl.git/commitdiff/ae37b791a73a9e78dedb89fb2429d2628cf58076

(From OE-Core rev: 07ca8a0131f43e9cc2f720e1cdbcb7ba7c074886)

(From OE-Core rev: 30b33f5ad1d7a7c55620598427009bd27cfb3d42)

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Fixed up to apply against 5.22.0
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
b4362e0955 tzdata: update to 2016g
LICENSE md5sum changed due to rewording of some text not related to the license.
see 8c143a2b65

  Changes to future time stamps

    Turkey switched from EET/EEST (+02/+03) to permanent +03,
    effective 2016-09-07.  (Thanks to Burak AYDIN.)  Use "+03" rather
    than an invented abbreviation for the new time.

    New leap second 2016-12-31 23:59:60 UTC as per IERS Bulletin C 52.
    (Thanks to Tim Parenti.)

  Changes to past time stamps

    For America/Los_Angeles, spring-forward transition times have been
    corrected from 02:00 to 02:01 in 1948, and from 02:00 to 01:00 in
    1950-1966.

    For zones using Soviet time on 1919-07-01, transitions to UT-based
    time were at 00:00 UT, not at 02:00 local time.  The affected
    zones are Europe/Kirov, Europe/Moscow, Europe/Samara, and
    Europe/Ulyanovsk.  (Thanks to Alexander Belopolsky.)

  Changes to past and future time zone abbreviations

    The Factory zone now uses the time zone abbreviation -00 instead
    of a long English-language string, as -00 is now the normal way to
    represent an undefined time zone.

    Several zones in Antarctica and the former Soviet Union, along
    with zones intended for ships at sea that cannot use POSIX TZ
    strings, now use numeric time zone abbreviations instead of
    invented or obsolete alphanumeric abbreviations.  The affected
    zones are Antarctica/Casey, Antarctica/Davis,
    Antarctica/DumontDUrville, Antarctica/Mawson, Antarctica/Rothera,
    Antarctica/Syowa, Antarctica/Troll, Antarctica/Vostok,
    Asia/Anadyr, Asia/Ashgabat, Asia/Baku, Asia/Bishkek, Asia/Chita,
    Asia/Dushanbe, Asia/Irkutsk, Asia/Kamchatka, Asia/Khandyga,
    Asia/Krasnoyarsk, Asia/Magadan, Asia/Omsk, Asia/Sakhalin,
    Asia/Samarkand, Asia/Srednekolymsk, Asia/Tashkent, Asia/Tbilisi,
    Asia/Ust-Nera, Asia/Vladivostok, Asia/Yakutsk, Asia/Yekaterinburg,
    Asia/Yerevan, Etc/GMT-14, Etc/GMT-13, Etc/GMT-12, Etc/GMT-11,
    Etc/GMT-10, Etc/GMT-9, Etc/GMT-8, Etc/GMT-7, Etc/GMT-6, Etc/GMT-5,
    Etc/GMT-4, Etc/GMT-3, Etc/GMT-2, Etc/GMT-1, Etc/GMT+1, Etc/GMT+2,
    Etc/GMT+3, Etc/GMT+4, Etc/GMT+5, Etc/GMT+6, Etc/GMT+7, Etc/GMT+8,
    Etc/GMT+9, Etc/GMT+10, Etc/GMT+11, Etc/GMT+12, Europe/Kaliningrad,
    Europe/Minsk, Europe/Samara, Europe/Volgograd, and
    Indian/Kerguelen.  For Europe/Moscow the invented abbreviation MSM
    was replaced by +05, whereas MSK and MSD were kept as they are not
    our invention and are widely used.

  Changes to zone names

    Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
    (Thanks to David Massoud.)

(From OE-Core rev: d1341aeda6d9fa5d7f13afabadae60a6fc295b87)

(From OE-Core rev: 4662af3256d6f373e2071047b8a845361188e878)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
0ad02a1f1a tzcode-native: Update to 2016g
LICENSE file checksum changed due to a verbiage change.

  Changes to code

    zic no longer generates binary files containing POSIX TZ-like
    strings that disagree with the local time type after the last
    explicit transition in the data.  This fixes a bug with
    Africa/Casablanca and Africa/El_Aaiun in some year-2037 time
    stamps on the reference platform.  (Thanks to Alexander Belopolsky
    for reporting the bug and suggesting a way forward.)

    If the installed localtime and/or posixrules files are symbolic
    links, zic now keeps them symbolic links when updating them, for
    compatibility with platforms like OpenSUSE where other programs
    configure these files as symlinks.

    zic now avoids hard linking to symbolic links, avoids some
    unnecessary mkdir and stat system calls, and uses shorter file
    names internally.

    zdump has a new -i option to generate transitions in a
    more-compact but still human-readable format.  This option is
    experimental, and the output format may change in future versions.
    (Thanks to Jon Skeet for suggesting that an option was needed,
    and thanks to Tim Parenti and Chris Rovick for further comments.)

  Changes to build procedure

    An experimental distribution format is available, in addition
    to the traditional format which will continue to be distributed.
    The new format is a tarball tzdb-VERSION.tar.lz with signature
    file tzdb-VERSION.tar.lz.asc.  It unpacks to a top-level directory
    tzdb-VERSION containing the code and data of the traditional
    two-tarball format, along with extra data that may be useful.
    (Thanks to Antonio Diaz Diaz, Oscar van Vlijmen, and many others
    for comments about the experimental format.)

    The release version number is now more accurate in the usual case
    where releases are built from a Git repository.  For example, if
    23 commits and some working-file changes have been made since
    release 2016g, the version number is now something like
    '2016g-23-g50556e3-dirty' instead of the misleading '2016g'.
    Official releases use the same version number format as before,
    e.g., '2016g'.  To support the more-accurate version number, its
    specification has moved from a line in the Makefile to a new
    source file 'version'.

    The experimental distribution contains a file to2050.tzs that
    contains what should be the output of 'zdump -i -c 2050' on
    primary zones.  If this file is available, 'make check' now checks
    that zdump generates this output.

    'make check_web' now works on Fedora-like distributions.

  Changes to documentation and commentary

    tzfile.5 now documents the new restriction on POSIX TZ-like
    strings that is now implemented by zic.

    Comments now cite URLs for some 1917-1921 Russian DST decrees.
    (Thanks to Alexander Belopolsky.)

    tz-link.htm mentions JuliaTime (thanks to Curtis Vogt) and Time4J
    (thanks to Meno Hochschild) and ThreeTen-Extra, and its
    description of Java 8 has been brought up to date (thanks to
    Stephen Colebourne).  Its description of local time on Mars has
    been updated to match current practice, and URLs have been updated
    and some obsolete ones removed.

(From OE-Core rev: 19c365b23c3b835dcb5595aba598f35bf16a6d81)

(From OE-Core rev: f5213870101ab57eb6303290c57935aed40cd9c4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
6ec3aa9972 tzcode-native: update to 2016f
Changes in this release were to the data.

(From OE-Core rev: 29377fa91a5f679909d582317c2b53d1f2e5da88)

(From OE-Core rev: 319df4f24b3eca45f068514826e08ab0aeed4f93)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
ac81181091 tzdata: update to 2016f
Changes affecting future time stamps

    The Egyptian government changed its mind on short notice, and
    Africa/Cairo will not introduce DST starting 2016-07-07 after all.
    (Thanks to Mina Samuel.)

    Asia/Novosibirsk switches from +06 to +07 on 2016-07-24 at 02:00.
    (Thanks to Stepan Golosunov.)

  Changes to past and future time stamps

    Asia/Novokuznetsk and Asia/Novosibirsk now use numeric time zone
    abbreviations instead of invented ones.

  Changes affecting past time stamps

    Europe/Minsk's 1992-03-29 spring-forward transition was at 02:00 not 00:00.
    (Thanks to Stepan Golosunov.)

(From OE-Core rev: dc80bf9b092a76f758d01474619cd9db46a1070d)

(From OE-Core rev: c1191c22fe9d92262645da17f741014a4465a0eb)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
90dc28b0b6 openssl: Security fix CVE-2016-6306
affects openssl < 1.0.1i

(From OE-Core rev: 7277061de39cdcdc2d1db15cefd9040a54527cd6)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
8df8e70f96 openssl: Security fix CVE-2016-6304
affects openssl < 1.0.1i

(From OE-Core rev: d6e1a56f4e764832ac84b842fa2696b56d850ee9)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
d23b450ea3 openssl: Security fix CVE-2016-6303
affects openssl < 1.0.1i

(From OE-Core rev: df7e4fdba42e9fcb799e812f6706bd56967858d9)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
91353b6936 openssl: Security fix CVE-2016-6302
affects openssl < 1.0.1i

(From OE-Core rev: 963c69e1e8e9cefccccb59619cb07ee31f07ffa1)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
942832888b openssl: Security fix CVE-2016-2182
affects openssl < 1.0.1i

(From OE-Core rev: bf3918d613b6b2a9707af1eb3c253d23f84d09a3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
dc61ec5f0c openssl: Security fix CVE-2016-2181
affects openssl < 1.0.1i

(From OE-Core rev: c3d4cc8e452b29d4ca620b5c93d22a88c5aa1f03)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Armin Kuster
766c5ced75 openssl: Security fix CVE-2016-2180
affects openssl < 1.0.1i

(From OE-Core rev: ed8bed3bf2d2460ff93bdaa255091e0d388a8209)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-10-06 08:51:17 +01:00
Robert Yang
2ff9d30dac init-install.sh: fix disk_size
It incorrectly matched "SanDisk" or "Disk Flags" before, which caused an
unexpected error.

(From OE-Core rev: 346b6ef31253789d7d6664a19297b6deec9d27a0)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a68ac76c1b6ed4c1a2fbc944c5021c89fd26217f)
[YOCTO #10333]
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-27 22:23:00 +01:00
Armin Kuster
2804850ea7 util-linux: Security fix for CVE-2016-5011
affects util-linux < 2.28.2

(From OE-Core rev: c9c85df86cd2270b144fa824ef76adedd3636c8a)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
6998a3c1e6 qemu: Security fix for CVE-2016-5403
affects qemu < 2.7.0-rc0

(From OE-Core rev: 2f3f09dfbff21fb74e50e4e3ce90c252d32ebf61)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
6057d0aa47 qemu: Security fix for CVE-2016-4002
affects qemu < 2.6.0

(From OE-Core rev: 6d7c10eae8b23a71eee6d59baab42d98d8fb7ff8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
48048dcaa2 qemu: Security fix CVE-2016-6351
affects qemu < 2.6.0

(From OE-Core rev: 5729eb105ff69cae0eac7a596cb0e938f6159526)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
931a6e6d5e qemu: Security fix CVE-2016-4439
affects qemu < 2.6.0

(From OE-Core rev: 628b9bfc91a6f73a5dfff7ade1819ea6a2db7cf0)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
98e7d8a9a0 qemu: Security Fix CVE-2016-3712
affects qemu < 2.6.0

(From OE-Core rev: 6f25d966c41df5315d253859d9ebf231963bf671)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
ffa3a07ac1 qemu: Security Fix CVE-2016-3710
affects Qemu < 2.6.0

(From OE-Core rev: 8ce0ce8a229f8cb2b854e3b9619a9ad75d9b6fe4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:04 +01:00
Armin Kuster
661aff850e wget: Security fix CVE-2016-4971
affects wget < 1.18.0

(From OE-Core rev: 15b6586ae64f745777ba5c42f4cf055aeeed83d8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
8f62c3dc44 openssh: Security fix CVE-2015-8325
openssh <  7.2p2

(From OE-Core rev: c71cbdd557476b7669c28b44f56e21ce0d0c53dc)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
2622059ca0 openssh: Security fix CVE-2016-5615
openssh < 7.3

(From OE-Core rev: 3fdad451afcc16b1fa94024310b4d26333ca7de9)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
ddb1db9ef7 openssh: Security fix CVE-2016-6210
affects openssh < 7.3

(From OE-Core rev: 7d07de3841c0a736262088c95a938deff194d9e2)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
fc1ba0b67f git: Security fix CVE-2016-2315 CVE-2016-2324
git versions < 2.5.5 & 2.7.4

(From OE-Core rev: 64ff6226d0c927c05fc42fd9ca8b31bac129b16d)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
9657825ef3 bind: Security fix CVE-2016-2088
(From OE-Core rev: 91e05c25eb221ff1dc2bde5cfaa0bea88345b1e4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Yi Zhao
9f1dc20619 tiff: Security fix CVE-2016-5323
CVE-2016-5323 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using the tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5323
http://bugzilla.maptools.org/show_bug.cgi?id=2559

Patch from:
2f79856097

(From OE-Core rev: 4e2f4484d6e1418c34f65de954809d06df41cc38)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 4ad1220e0a7f9ca9096860f4f9ae7017b36e29e4)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Yi Zhao
c95d42a7d1 tiff: Security fix CVE-2016-5321
CVE-2016-5321 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using the tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5321
http://bugzilla.maptools.org/show_bug.cgi?id=2558

Patch from:
d9783e4a14

(From OE-Core rev: 35a7cb62be554e28f64b7583d46d693ea184491f)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 4a167cfb6ad79bbe2a2ff7f7b43c4a162ca42a4d)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Yi Zhao
7d403a2ecd tiff: Security fix CVE-2016-3186
CVE-2016-3186 libtiff: buffer overflow in the readextension function in
gif2tiff.c allows remote attackers to cause a denial of service via a
crafted GIF file

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3186
https://bugzilla.redhat.com/show_bug.cgi?id=1319503

Patch from:
https://bugzilla.redhat.com/attachment.cgi?id=1144235&action=diff

(From OE-Core rev: b4471e7264538b3577808fae5e78f42c0d31e195)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 3d818fc862b1d85252443fefa2222262542a10ae)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Ismo Puustinen
75e6b3b57b libpcre: Fix CVE-2016-3191
Fix workspace overflow for (*ACCEPT) with deeply nested parentheses.

The patch is from libpcre version control at
http://vcs.pcre.org/pcre?view=revision&revision=1631 with the ChangeLog
part removed. Original author is Philip Hazel.

(From OE-Core rev: 249cc163e7a16f307e8b94a7b449cd3e93cc6b15)

Signed-off-by: Ismo Puustinen <ismo.puustinen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 386534f968f4da376ba7778b5d436bad4ce8355b)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
cb5dd8d314 openssl: Security fix CVE-2016-2178
affects openssl <= 1.0.2h
CVSS v2 Base Score: 2.1 LOW

(From OE-Core rev: 82fe0e8c98244794531f0e24ceb93953fe68dda5)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 5b3df0c5e8885ea34f66b41fcf209a9960fbbf5e)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Armin Kuster
1fedf13e63 openssl: Security fix CVE-2016-2177
Affects openssl <= 1.0.2h
CVSS v2 Base Score: 7.5 HIGH

(From OE-Core rev: 5781eb9a6e6bf8984b090a488d2a326bf9fafcf8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 2848c7d3e454cbc84cba9183f23ccdf3e9200ec9)
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Ross Burton
e1b940b4d1 openssl: add a patch to fix parallel builds
Apply a patch taken from Gentoo to hopefully fix the remaining parallel make
races.

(From OE-Core rev: 7ab2f49107cf491d602880205a3ea1222cb5e616)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3d806d59a4c5e8ff35c7e7c5a3a6ef85e2b4b259)

Minor fixup to get patch to apply to jethro
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:22:03 +01:00
Ross Burton
b2e2a7426c bitbake: fetch2/wget: fallback to GET if HEAD is rejected in checkstatus()
The core change here is to fall back to GET requests if HEAD is rejected in the
checkstatus() method, as you can't do a HEAD on Amazon S3 (used by Github
archives).  This meant removing the monkey patch that made the default method
GET, and adding a fixed redirect handler that doesn't reset to GET.

Also, change the way the opener is constructed from an if/elif cluster to a
conditionally constructed list.
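For illustration at the HTTP level (the URL is hypothetical and this is not
the fetcher code itself), the two request types involved can be compared with
curl:

  # HEAD request: some servers, e.g. S3-backed download hosts, reject this
  curl -s -o /dev/null -w '%{http_code}\n' -I https://example.com/archive.tar.gz
  # GET request: the fallback now used when HEAD is rejected
  curl -s -o /dev/null -w '%{http_code}\n' https://example.com/archive.tar.gz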

(Bitbake rev: b993d96203541cd2919d688559ab802078a7a506)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6ec70d5d2e330b41b932b0a655b838a5f37df01e)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:19:42 +01:00
Ross Burton
524417d587 bitbake: lib/bb/tests/fetch: remove URL that doesn't exist anymore
The CUPS ipptool URL we were checking now redirects to github where the tarball
isn't present, so remove it from the test suite.

(Bitbake rev: ed890c3b54a98ff269cea4e35d246f3b3c0b6ba9)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 4b50895fb3462b21e3874a2e99c363c8d05e89e6)
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:19:42 +01:00
Richard Purdie
0a9e04cade bitbake: bb/tests/fetch: Update cups url
Update the upstream url used for testing cups versions after upstream website
changes.

minor fixup to apply

(Bitbake rev: 79810903cf4141b8c1538975ed89cac553628edd)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

[Bitbake upstream: 5f06041d4936fc22297945bbbad7020bfa9083c6 ]
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-23 23:19:42 +01:00
Maxin B. John
37eb21b2b1 curl: security fix for CVE-2016-5421
Affected versions: libcurl 7.32.0 to and including 7.50.0

(From OE-Core rev: f6999fa952c7db980cfc97f6e5a971e4f34cc0a3)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:35 +01:00
Maxin B. John
72ea3c272c curl: security fix for CVE-2016-5420
Affected versions: libcurl 7.1 to and including 7.50.0

(From OE-Core rev: 6b732a392289a7bb50b0e3716c066c62fa32a14d)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:35 +01:00
Maxin B. John
0e0c04343d curl: security fix for CVE-2016-5419
Affected versions: libcurl 7.1 to and including 7.50.0

(From OE-Core rev: d1d6c93b491056b18b528216303047e353956e34)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 08:48:34 +01:00
Enrico Jorns
4037644690 perl-ptest.inc: fix tar call to prevent objcopy failure
With tar version 1.29, the tar call used to copy the ptest files no longer
works.  The call did not match the man page before (but worked anyway); the
latest update of tar appears to have stricter argument handling.

With the current form of the tar call, copying the files still works
with the latest tar version, but the excludes are no longer handled
properly.
This results in binaries compiled with the host GCC ending up in the package.
When stripping and splitting the files in do_package() with the target
objcopy, bitbake will fail with this error:

  ERROR: objcopy failed with exit code 256 (cmd was [...])
  [...]
  File format not recognized

Thus, the current argument issues and required changes are (see the sketch
after this list):

 * Options must be placed _before_ the pathnames.

 * --exclude must be followed by a '=' in order to work properly

 * The 'f' option is for providing an archive file, which is unnecessary in
   this case
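A minimal sketch of a tar call respecting these points (the directories and
the exclude pattern are illustrative, not the recipe's actual arguments;
assumes a GNU tar build whose default archive is stdin/stdout, so 'f' can be
dropped):

  # options first, --exclude= with '=', no 'f' option; the archive goes through the pipe
  tar -c --exclude='*.o' -C sourcedir . | ( cd destdir && tar -x )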

Note that this could also be a candidate for backporting.

(From OE-Core master rev: 2e498879098f7d84610aed7961d92433083d9a02)

(From OE-Core rev: a27b907dd3ad20fc60b7732c19012793aaaba2df)

Signed-off-by: Enrico Jorns <ejo@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
Anuj Mittal
64b9c83b0c gcc: make sure header path is set correctly
We're setting the native header paths in do_configure_prepend,
and don't need to set them again here.

This results in gcc-target not being able to locate the headers
and not being able to detect glibc version, which in turn
results in SSP support not getting detected even though it's available
in libc.

(From OE-Core master rev: 85630aa894278e7818c867179dc19ca2fbd994fc)

(From OE-Core rev: f28840de3912c805acde8d11188f0c48617678ab)

Signed-off-by: Anuj Mittal <anujx.mittal@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
Armin Kuster
96456b15ea tzdata: update to 2016e
Changes affecting future time stamps

Africa/Cairo observes DST in 2016 from July 7 to the end of October.
Guess October 27 and 24:00 transitions. (Thanks to Steffen Thorsen.)
For future years, guess April's last Thursday to October's last
Thursday except for Ramadan.

Changes affecting past time stamps

Locations while uninhabited now use '-00', not 'zzz', as a
placeholder time zone abbreviation.  This is inspired by Internet
RFC 3339 and is more consistent with numeric time zone
abbreviations already used elsewhere.  The change affects several
arctic and antarctic locations, e.g., America/Cambridge_Bay before
1920 and Antarctica/Troll before 2005.

Asia/Baku's 1992-09-27 transition from +04 (DST) to +04 (non-DST) was
at 03:00, not 23:00 the previous day.  (Thanks to Michael Deckers.)

(From OE-Core master rev: ddcf128e76ed0678ce42416531f4ecb309c57439)

(From OE-Core rev: 225f3b4ea4c7c7439bba2b3a85f24ea94d2f47bc)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
Armin Kuster
d8b15a0384 tzcode: update to 2016e
V2: typo in title (jet lagged)
Changes to code

zic now outputs a dummy transition at time 2**31 - 1 in zones
whose POSIX-style TZ strings contain a '<'.  This mostly works
around Qt bug 53071 <https://bugreports.qt.io/browse/QTBUG-53071>.
(Thanks to Zhanibek Adilbekov for reporting the Qt bug.)

Changes affecting documentation and commentary

tz-link.htm says why governments should give plenty of notice for
time zone or DST changes, and refers to Matt Johnson's blog post.
tz-link.htm mentions Tzdata for Elixir.  (Thanks to Matt Johnson.)

(From OE-Core master rev: 5f3340e5c966f4233e0cd4ec468b20a1fd5a7346)

(From OE-Core rev: 6d9e6b6fb2c8c6c80a5981b0f91987b433b6ea24)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
George McCollister
9149baa38d wic: fix path parsing, use last occurrence
If the path contains 'scripts' more than once, the first occurrence will be
incorrectly used. Use rfind instead of find to find the last occurrence.

(From OE-Core rev: fd544c3ef6ece1e2f9849ee87227efc6d0954e15)

Signed-off-by: George McCollister <george.mccollister@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
Nicolas Dechesne
a01d3234f6 bluez5: move btmgmt to common READLINE section
Upstream in 5.33, btmgmt was moved from the experimental to the common READLINE
section in commit e4f0c5582f1fe3451d5588243adba9de1ed68b80, but this was never
updated in the recipe.

This is a backport from master branch, commit
28777e593d3dd3a5d0ee2effcdca6a971e2887f9.

(From OE-Core rev: cbe0648e234e83b8ffc336118d3ee2967b4bb175)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 09:00:53 +01:00
Armin Kuster
3b2c540986 libxml2: Security fix for CVE-2016-4448
Affects libxml2 < 2.9.4

(From OE-Core rev: d4343f428c89c6c238cc7cd4c4732448a00003e4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
ad7cab35ff libxml2: Security fix for CVE-2016-4447
Affects libxml2 < 2.9.4

(From OE-Core rev: b817c98017cb64f902cdae514fb162b3199a0a14)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
4e260c96f4 libxml2: Security fix for CVE-2016-3627
Affects libxml2 < 2.9.4

(From OE-Core rev: ceabe39237a035efda6a74c746848a9fbab30a08)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
1ecd2f56aa libxml2: Security fix for CVE-2016-1833
Affects libxml2 < 2.9.4

(From OE-Core rev: 990b5427fd3bf5c00ac7c5820d5f455378776b62)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
1081306623 libxml2: Security fix for CVE-2016-1835
Affects libxml2 < 2.9.4

(From OE-Core rev: d008b7023cb703a787c8fcac5cd87628b38a9ecd)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
f96cfb009d libxml2: Security fix for CVE-2016-1837
Affects libxml2 < 2.9.4

(From OE-Core rev: d0e3cc8c9234083a4ad6a0c1befe02b6076b084c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
94d9c374e9 libxml2: Security fix for CVE-2016-4449
Affects libxml2 < 2.9.4

(From OE-Core rev: 6f6132dc3aeb0d660c9730f6f33e9194a6098226)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
0e8aae7bc8 libxml2: Security fix for CVE-2016-1836
Affects libxml2 < 2.9.4

(From OE-Core rev: 9229873f278f7c24fb01673ec3d9fd404762bc25)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:59 +01:00
Armin Kuster
3e93d609c0 libxml2: Security fix for CVE-2016-1839
Affects libxml2 < 2.9.4

(From OE-Core rev: 689145fc5ae377eab088ee524c447223be29707f)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
970a077b83 libxml2: Security fix for CVE-2016-1838
Affects libxml2 < 2.9.4

(From OE-Core rev: d24b0ac044e02ec34f74e46ad599ac8bdb10432c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
4cdca0571a libxml2: Security fix for CVE-2016-1840
affects libxml2 < 2.9.4

(From OE-Core rev: 9d894179128771c4a2628c103f5c39e2e6ef13c5)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
17480a956d libxml2: Security fix for CVE-2016-4483.patch
affects libxml2 < 2.9.4

(From OE-Core rev: a28fea55f72284d3f4ed85f19f80b8475e726ee6)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
b3c799c831 libxml2: Security fix for CVE-2016-1834.patch
(From OE-Core rev: 233f3b29760c878a3acb3aa0e22b7c252f17e2b3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
f01272c3a5 libxml2: Security fix for CVE-2016-3705
(From OE-Core rev: aa8ad693a977e104797dd623d7efad705e298eb2)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
f2688ed200 libxml2: Security fix for CVE-2016-1762
(From OE-Core rev: 8a59dc853d2870bc33ef3cc5af202e33b3d7c6c2)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
c9e0efd1f7 glibc: Security fix for CVE-2016-4429
(From OE-Core rev: 32fd9fed93b896ee50006a95cc9d0209b85268cd)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Armin Kuster
2596de9179 glibc: Security Fix for CVE-2016-3706
(From OE-Core rev: 0c82ab38064baaf25169d75ddccaa3926b62c7e3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-07-27 08:29:58 +01:00
Scott Rifenbark
118380bc5d documentation: Updated date in the manual revision tables.
Added "June 2016" for the date.

(From yocto-docs rev: 9d3327f06f1f798b1ca55b0fc8aeca281e4aca01)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:47 +01:00
Scott Rifenbark
7fde327c85 kernel-dev: Fix the locations of .config and source directory
The locations of the kernel .config file and source directory
moved a couple of releases ago.  Updated the documentation
accordingly.

Also added a note explaining how to check the expansion of
variables, which serves a couple of purposes:

 * For curious readers, shows them how to understand where
   these variables come from and how they are used.

 * For suspicious readers, shows them how they can verify that
   the variables in the documentation are actually correct.

Author: Tom Zanussi <tom.zanussi@linux.intel.com>
(From yocto-docs rev: af3613b6178122b9e5452529a087143b3fe98495)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:47 +01:00
Scott Rifenbark
3863499572 profile-manual: Added cross-reference links to INHIBIT_PACKAGE_STRIP
I added some reference links to this variable in the ref-manual
glossary.

(From yocto-docs rev: b9ab3953080caf7ebd4b97f3fc2cb5dd1419326b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:47 +01:00
Scott Rifenbark
c7947af728 ref-manual: Fixed *[doc] string for INHIBIT_PACKAGE_DEBUG_SPLIT
The string was a copy paste error.  It was using the string
for INHIBIT_PACKAGE_STRIP.

(From yocto-docs rev: 9e52affeb8af5e6e667259059224c0f55ed0d090)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:46 +01:00
Scott Rifenbark
a79b7d685b yocto-project-qs: Added note for Fedora23 users
Fedora23 distribution is not supported by the YP 2.0.x release.
I added a note to the required host packages section stating that
if the user is going to use this distribution, they must install
perl-bignum as a required package.

Fixes [YOCTO #9580]

(From yocto-docs rev: ceb707ada99c8f2b4fc096f1c5f0c357522a6984)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:46 +01:00
Scott Rifenbark
4f2dfdcd39 documentation: Prepped for a 2.0.2 release
* poky.ent variables updated for the new release
* <manual>.xml files added the 2.0.2 entry in the manual revision
  table.  Used "TBA 2016" for now.
* mega-manual.sed file updated to replace "2.0.1" with "2.0.2"

(From yocto-docs rev: 0c112723d6982f7ddb6f2908389b5610937ff48f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-21 12:59:46 +01:00
Elliot Smith
ddbc13155f toasterconf.json: exclude releases Toaster can't build
Due to changes in master to support Python 3, Toaster is no
longer able to build from master.

Remove references to master and set default release to jethro.

The dizzy release should also be removed, as Toaster jethro
is unable to build using this release.

(From OE-Core rev: 1f4bfa33073584c25396d74f3929f263f3df188b)

Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-06-03 15:02:25 +01:00
Matt Madison
32728d0946 wic: insert local Python paths at front
This follows how bitbake performs path insertion, and fixes a
failure to start wic on Ubuntu 15.10 with the distribution's
version of python-ply installed.

(From OE-Core rev: b3a3935c69b6e74e19cd0cb69d47350b9ea9c58e)

Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-24 13:21:54 +01:00
Richard Purdie
dade0e68c6 build-appliance-image: Update to jethro head revision
(From OE-Core rev: 8979a4546841f47677ba74989aa32f0cb3e2ff12)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-23 17:35:09 +01:00
Richard Purdie
a325db9bc8 poky.conf: Bump version for 2.0.2 jethro release
(From meta-yocto rev: a9b5cf91fa0ee913381ffec88503e2a40a2e04d4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-23 17:34:56 +01:00
Richard Purdie
c940dd928f build-appliance-image: Update to jethro head revision
(From OE-Core rev: 1ef5883b78f35679c4ff20468826d63a98be1539)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-23 17:25:14 +01:00
Saul Wold
65306b0bfc gdb: Backport patch to changes with AVX and MPX
The current MPX target descriptions assume that MPX is always combined
with AVX, however that's not correct.  We can have machines with MPX
and without AVX; or machines with AVX and without MPX.

This patch adds new target descriptions for machines that support
both MPX and AVX, as duplicates of the existing MPX descriptions.

The following commit will remove AVX from the MPX-only descriptions.

This commit is backported from 7.12

(From OE-Core rev: 059d459d48bd42a282005698c4dc4a3ecbd2d88f)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-22 08:42:55 +01:00
Armin Kuster
f117786f24 gcc: Security Fix CVE-2016-4490
(From OE-Core rev: 69b1e25a53255433262178b91ab3e328768ad725)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:56:25 +01:00
Armin Kuster
6f8a7089b3 gcc: Security fix CVE-2016-2226
(From OE-Core rev: 8fc7db068cf6e2a527e10e8333585a16ce628e22)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:56:25 +01:00
Armin Kuster
1945133a22 gcc: Security fix CVE-2016-4489
(From OE-Core rev: 7bf396e7bdb3faaf900f99f72446f19df1cffe88)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:56:25 +01:00
Armin Kuster
e3bf77e381 gcc: Security fix CVE-2016-4488
(From OE-Core rev: 07820907d25970f2c22497415aa6ff95fe43dc40)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-17 20:56:25 +01:00
Humberto Ibarra
44585dd62a yocto-bsp: Set correct default branches and branches base for i386, qemu and x86_64 archs
Kernel recipes for linux-yocto_4.1 have outdated branches as default, making it
impossible to find the right branch if the user picks the default value.
The branches_base property uses these outdated branches also.

This updates standard/common-pc and standard/common-pc-64 branches to standard/base

The fix was tested using 'yocto-bsp create' with each one of the following archs:

-i386
-x86_64
-qemu (i386 and x86_64)

After the layer was created, it was added to local.conf and the MACHINE was set
accordingly.

'bitbake linux-yocto' ran successfully with each configuration tested.

[YOCTO #9160]
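
As an illustrative sketch only, a kernel recipe/bbappend generated by yocto-bsp
would then default to the updated branch via KBRANCH (the surrounding template
context is omitted and assumed):

    KBRANCH ?= "standard/base"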

(From meta-yocto rev: 32e3c2d3910c42f12957c874902a01da94a7971a)

Signed-off-by: Humberto Ibarra <humberto.ibarra.lopez@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:17 +01:00
Humberto Ibarra
a4ee99f27a yocto-bsp: fix default kernel for x86_64 arch
When using x86_64 arch in yocto-bsp the script suggests
4.1 as the default kernel version; however, as soon as the
default is picked the script continues processing with
3.19 kernel.

This changes the default kernel version to 4.1, which is the
right value and matches the script's message.

[Yocto #9353]

(From meta-yocto rev: 932184bef928d83249c4b4e5dcd36c68d4264cd6)

Signed-off-by: Humberto Ibarra <humberto.ibarra.lopez@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:17 +01:00
Ross Burton
16d64def97 conf/distro/poky.conf: use example.com for connectivity check
Instead of pinging both the Yocto Project download and bugzilla sites, use
https://www.example.com/.  This is a reserved domain name and hosted by IANA, so
is a key part of the Internet and should be available everywhere (whereas for
example google.com is generally blocked by the Great Firewall of China).  Also,
using an https: site verifies that any local proxies are configured for HTTPS as
well as HTTP.

In my testing this reduces the time taken for connectivity checks from 3 seconds
to 1 second.
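
As a sketch (the exact line is an assumption, not quoted from the change), the
distro configuration would then point the connectivity sanity check at the new
URI via CONNECTIVITY_CHECK_URIS:

    CONNECTIVITY_CHECK_URIS ?= "https://www.example.com/"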

(From meta-yocto rev: b253c6073be44090a19d1743deb58ef566853056)

(From meta-yocto rev: c27b1d6ccac67ff3ed16079fcbe0f9a8644499ed)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:17 +01:00
Joshua Lock
a31931e290 openssl: prevent ABI break from earlier jethro releases
The backported upgrade to 1.0.2h included an updated GNU LD
version-script which results in an ABI change. In order to try and
respect ABI for existing binaries built against fido this commit
partially reverts the version-script to maintain the existing ABI
and instead only add the new symbols required by 1.0.2h.

Suggested-by: Martin Jansa <martin.jansa@gmail.com>
(From OE-Core rev: 480db6be99f9a53d8657b31b846f0079ee1a124f)

(From OE-Core rev: 528541845df34843c14be5de62e9f53004d292ac)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:17 +01:00
Armin Kuster
da75750122 openssh: Security Fix CVE-2016-3115
openssh <= 7.2

(From OE-Core rev: e0df10f586361a18f2858230a5e94ccf9c3cc2f3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:17 +01:00
Armin Kuster
ae691815c8 busybox: Security fix CVE-2016-2147
busybox <= 1.24.2

(From OE-Core rev: 0a977091a4a5ee925b44c60bc4b13557696afadb)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:16 +01:00
Armin Kuster
ba15486e27 busybox: Security Fix CVE-2016-2148
busybox <= 1.24.2

(From OE-Core rev: 1d7ad5f32ae39f84626bb71ded75439062dd717c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:16 +01:00
Armin Kuster
2ef5feeb3d libtiff: Security fix CVE-2015-8664 and 8683
CVE-2015-8665
CVE-2015-8683

(From OE-Core rev: 49008750ece710201701a6f413537c857190798a)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-14 09:40:16 +01:00
Robert Yang
a201edefae openssl: 1.0.2d -> 1.0.2h (mainly for CVEs)
* CVEs:
  - CVE-2016-0705
  - CVE-2016-0798
  - CVE-2016-0797
  - CVE-2016-0799
  - CVE-2016-0702
  - CVE-2016-0703
  - CVE-2016-0704
  - CVE-2016-2105
  - CVE-2016-2106
  - CVE-2016-2109
  - CVE-2016-2176

* The LICENSE's checksum is changed because of date changes (2011 ->
  2016), the contents are the same.

* Remove backport patches
  - 0001-Add-test-for-CVE-2015-3194.patch
  - CVE-2015-3193-bn-asm-x86_64-mont5.pl-fix-carry-propagating-bug-CVE.patch
  - CVE-2015-3194-1-Add-PSS-parameter-check.patch
  - CVE-2015-3195-Fix-leak-with-ASN.1-combine.patch
  - CVE-2015-3197.patch
  - CVE-2016-0701_1.patch
  - CVE-2016-0701_2.patch
  - CVE-2016-0800.patch
  - CVE-2016-0800_2.patch
  - CVE-2016-0800_3.patch

* Update crypto_use_bigint_in_x86-64_perl.patch

* Add version-script.patch and update block_diginotar.patch (From master branch)

* Update openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch
  (From Armin)

(From OE-Core rev: bca156013af0a98cb18d8156626b9acc8f9883e3)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
Tom Zanussi
00b016b010 Revert "kernel/kernel-arch: Explicitly mapping between, i386/x86_64 and x86 for kernel ARCH"
This reverts commit a6f52930a6.

In addition to also causing the problem in [YOCTO #9579], this commit
was reverted in krogoth and master but not in jethro, where it should
also be reverted.  The original revert message was:

This reverts commit 8d310b24927d0f348fb431895f0583733db2aad0.

That commit completely breaks KBUILD_DEFCONFIG because it relies on
$ARCH to match between the target OE arch and the kernel subdirectory
containing the defconfigs. In the kernel, all defconfigs for everything
x86-based (including x86_64) are stored in the directory arch/x86/configs/.

kernel-yocto.bbclass correctly searches for all the defconfigs inside
${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}

Commit 8d310b249 makes it search in the wrong places, and _only_ if you
define TARGET_ARCH = "athlon" will it search x86, which is nonsensical.

The commit further adds an if clause to hack the mangled kernel arches
back to their original values (ugh) in do_shared_workdir, which is run
after do_compile, but of course the build breaks before that in
do_kernel_metadata because of the KBUILD_DEFCONFIG mentioned above (so
that hack is useless).

Please fix that corner-case bug in another way which does not completely
screw up the kernel arch mapping & defconfig logic. If 64bit configs are
generated in the kernel for 32bit machines because the host is asked,
then it is a bug in the kernel; it is of no use to hack around it in OE.

(From OE-Core rev: bc02a478a5d4a5de7b3943ed809d5c22711f5b1f)

(From OE-Core rev: 88e0032f13f635c868c426e963db4d8a6fc42e9d)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
Martyn Welch
877a6b3ef4 glew: Correct version in autotooling patches
The additional autotooling patched into glew claims the version is 1.9.0
whilst we are building 1.12.0. The version in the autotooling is used to
set the version number in the pkgconfig file; this results in the
configuration of packages which depend on glew > 1.9.0 failing.

This patch updates the version number used in the patches to match that of
the version being built.

(From OE-Core rev: 0ef7c0f30456cc242de331b273b92c1dfe835350)

Signed-off-by: Martyn Welch <martyn.welch@collabora.co.uk>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
André Draszik
ed3fc1ab85 gdb: fix QA warning (uClibc)
WARNING: QA Issue: gdb rdepends on libiconv, but it isn't a build dependency? [build-deps]

We already have virtual/libiconv which is set appropriately
in all environments, so let's use it to fix the issue.
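
Illustratively (not necessarily the literal diff), the dependency is expressed
against the virtual provider rather than a concrete libiconv recipe:

    DEPENDS += "virtual/libiconv"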

(From OE-Core rev: 9ae38c3b24b387b02541142d40343d1dd0411c88)

Signed-off-by: André Draszik <adraszik@tycoint.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
Tristan Van Berkom
dafc9d7755 binutils: backport bug fix to the 2.25 branch for jethro
We fail to build webkit on aarch64 due to this binutils bug:

   https://sourceware.org/bugzilla/show_bug.cgi?id=19353

Applying patch which fixes this, stripped out changelog entry
from patch to make it apply without error.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
Yuanjie Huang
49ce0e7d4a glibc: Fix CVE-2015-8778
CVE: CVE-2015-8778

Improve check against integer wraparound in hcreate_r [BZ #18240]

This is an integer overflow in hcreate and hcreate_r which can result in
an out-of-bound memory access.  This could lead to application crashes
or, potentially, arbitrary code execution.

Upstream-Status: Backport [2.23]
(cherry-picked from commit bae7c7c7, 4bd228c8)

(From OE-Core rev: 71b051f51a44dad1fdca7ca6b3552d0aebdc91d3)

Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-11 12:37:06 +01:00
Robert Yang
6b2102cd59 boot-directdisk.bbclass: remove HDDIMG before create
Fixed when rebuild:
mkdosfs: file /path/to/hdd.image already exists

(From OE-Core rev: 69b49e8dc45cf60defba547d93e663df42c92127)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry-pick from 9abcd309c098558360cde2bff65be840ead25f83)
Signed-off-by: Tim Kilbourn <tkilbourn@gmail.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Stefan Agner
504e742a5e opkg: backport fix for double remove of packages
Backport the fix 7885da3974 ("pkg_get_provider_replacees: do not
add installed pkg to replacee list"). This avoids opkg trying to
remove a package twice e.g. when upgrading.

Suggested-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
(From OE-Core rev: f26fc34bbe9cf9ae059d4fe646a84501b8924f75)

Signed-off-by: Stefan Agner <stefan.agner@toradex.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Sona Sarmadi
6b9d2edd7d bind: CVE-2016-1285 CVE-2016-1286
CVE-2016-1285 bind: malformed packet sent to rndc can trigger assertion failure
CVE-2016-1286 bind: malformed signature records for DNAME records can
trigger assertion failure

[YOCTO #9400]

External References:
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-1285
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2016-1286
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1285
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1286

References to the Upstream commits and Security Advisories:

CVE-2016-1285: https://kb.isc.org/article/AA-01352
https://source.isc.org/cgi-bin/gitweb.cgi?p=bind9.git;a=patch;
h=e7e15d1302b26a96fa0a5307d6f2cb0d8ad4ea63

CVE-2016-1286: https://kb.isc.org/article/AA-01353
https://source.isc.org/cgi-bin/gitweb.cgi?p=bind9.git;a=patch;
h=456e1eadd2a3a2fb9617e60d4db90ef4ba7c6ba3

https://source.isc.org/cgi-bin/gitweb.cgi?p=bind9.git;a=patch;
h=499952eb459c9a41d2092f1d98899c131f9103b2

(From OE-Core rev: e8bc043f871e507542955ad28de74f67afa9bc36)

Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Bjørn Forsman
ed3115be57 license.bbclass: fix warnings when run in unprivileged "container" env
An unprivileged "container" environment like this[1] doesn't have a root
account (uid 0), which causes tons of "Invalid argument" warnings:

  $ bitbake ...
  ...
  WARNING: Could not copy license file [src] to [dest]: [Errno 22] Invalid argument: '[src]'
  WARNING: Could not copy license file [src] to [dest]: [Errno 22] Invalid argument: '[src]'
  WARNING: Could not copy license file [src] to [dest]: [Errno 22] Invalid argument: '[src]'
  ...

Fix it by handling EINVAL similar to existing handling of EPERM (which
was added for when not running under pseudo).

[1]: The real environment is buildFHSUserEnv from NixOS/nixpkgs, but a
  demonstration of the issue can be done like this:

    $ touch f
    $ unshare --user --mount chown 0:0 f
    chown: changing ownership of ‘f’: Invalid argument

(From OE-Core master rev: d00b2250a6afebd7d1373c04b4006290f0cd4043)

(From OE-Core rev: e49794b9fe3391073138cb6116a46b37dd5119e7)

Signed-off-by: Bjørn Forsman <bjorn.forsman@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Armin Kuster
c6864efbc0 tzdata: update to 2016d
Changes affecting future time stamps

America/Caracas switches from -0430 to -04 on 2016-05-01 at 02:30.
(Thanks to Alexander Krivenyshev for the heads-up.)

Asia/Magadan switches from +10 to +11 on 2016-04-24 at 02:00.
(Thanks to Alexander Krivenyshev and Matt Johnson.)

New zone Asia/Tomsk, split off from Asia/Novosibirsk. It covers
Tomsk Oblast, Russia, which switches from +06 to +07 on 2016-05-29
at 02:00.  (Thanks to Stepan Golosunov.)

Changes affecting past time stamps

New zone Europe/Kirov, split off from Europe/Volgograd.  It covers
Kirov Oblast, Russia, which switched from +04/+05 to +03/+04 on
1989-03-26 at 02:00, roughly a year after Europe/Volgograd made
the same change.  (Thanks to Stepan Golosunov.)

Russia and nearby locations had daylight-saving transitions on
1992-03-29 at 02:00 and 1992-09-27 at 03:00, instead of on
1992-03-28 at 23:00 and 1992-09-26 at 23:00.  (Thanks to Stepan
Golosunov.)

Many corrections to historical time in Kazakhstan from 1991
through 2005.  (Thanks to Stepan Golosunov.)  Replace Kazakhstan's
invented time zone abbreviations with numeric abbreviations.

(From OE-Core master rev: 10194ca3d8c2f4d8648a685c5c239a33d944b6fe)

(From OE-Core rev: a4808f800f856fb01761f4835f6a87e736349994)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Armin Kuster
328bd419be tzcode: update to 2016d
Upstream keeps the tzcode and tzdata versions in sync; the changes are all in the data.

Changes affecting future time stamps

America/Caracas switches from -0430 to -04 on 2016-05-01 at 02:30.
(Thanks to Alexander Krivenyshev for the heads-up.)

Asia/Magadan switches from +10 to +11 on 2016-04-24 at 02:00.
(Thanks to Alexander Krivenyshev and Matt Johnson.)

New zone Asia/Tomsk, split off from Asia/Novosibirsk. It covers
Tomsk Oblast, Russia, which switches from +06 to +07 on 2016-05-29
at 02:00.  (Thanks to Stepan Golosunov.)

Changes affecting past time stamps

New zone Europe/Kirov, split off from Europe/Volgograd.  It covers
Kirov Oblast, Russia, which switched from +04/+05 to +03/+04 on
1989-03-26 at 02:00, roughly a year after Europe/Volgograd made
the same change.  (Thanks to Stepan Golosunov.)

Russia and nearby locations had daylight-saving transitions on
1992-03-29 at 02:00 and 1992-09-27 at 03:00, instead of on
1992-03-28 at 23:00 and 1992-09-26 at 23:00.  (Thanks to Stepan
Golosunov.)

Many corrections to historical time in Kazakhstan from 1991
through 2005.  (Thanks to Stepan Golosunov.)  Replace Kazakhstan's
invented time zone abbreviations with numeric abbreviations.

(From OE-Core master rev: db8223e4dd2e513a656aedfae217d94e053c2366)

(From OE-Core rev: bb0b1a8dd056af717c37571f8d0e023acd304835)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-05-09 14:37:28 +01:00
Armin Kuster
6dba9abd43 tzcode: update to 2016c
(From OE-Core rev: 28032d8c3122b75ceb3f4a664a2b478c9a9a6a2c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:03:08 +01:00
Armin Kuster
be42a1d4fd tzdata: update to 2016c
The 2016c release of the tz code and data is available. Its most urgent change is for Asia/Baku, where the update takes effect this weekend.

This release reflects the following changes, which were either circulated on the tz mailing list or are relatively minor technical or administrative changes:

Changes affecting future time stamps

Azerbaijan no longer observes DST.  (Thanks to Steffen Thorsen.)

Chile reverts from permanent to seasonal DST.  (Thanks to Juan
Correa for the heads-up, and to Tim Parenti for corrections.)
Guess that future transitions are August's and May's second
Saturdays at 24:00 mainland time.  Also, call the period from
2014-09-07 through 2016-05-14 daylight saving time instead of
standard time, as that seems more appropriate now.

Changes affecting past time stamps

Europe/Kaliningrad and Europe/Vilnius changed from +03/+04 to
+02/+03 on 1989-03-26, not 1991-03-31.  Europe/Volgograd changed
from +04/+05 to +03/+04 on 1988-03-27, not 1989-03-26.
(Thanks to Stepan Golosunov.)

Changes to commentary
Several updates and URLs for historical and proposed Russian changes.
(Thanks to Stepan Golosunov, Matt Johnson, and Alexander Krivenyshev.)

(From OE-Core rev: c3eb4f08a6157e4c06878d0749438a53890c2af8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:03:08 +01:00
Armin Kuster
6d06b104ce tzcode: update to 2016b
Change SRC_URI to http, which seems more reliable.

Changes to code

     tzselect's diagnostics and checking, and checktab.awk's checking,
     have been improved.  (Thanks to J William Piggott.)

     tzcode now builds under MinGW.  (Thanks to Ian Abbott and Esben Haabendal.)

     tzselect now tests Julian-date TZ settings more accurately.
     (Thanks to J William Piggott.)

Changes to commentary

     Comments in zone tables have been improved.  (Thanks to J William Piggott.)

     tzselect again limits its menu comments so that menus fit on a
     24x80 alphanumeric display.

     A new web page tz-how-to.html.  (Thanks to Bill Seymour.)

     In the Theory file, the description of possible time zone abbreviations in
     tzdata has been cleaned up, as the old description was unclear and
     inconsistent.  (Thanks to Alain Mouette for reporting the problem.)

(From OE-Core rev: cb091aead5680e99bd8d14bcf6d8444ac9ccd669)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:03:08 +01:00
Armin Kuster
5f5e9c4629 tzdata: update to 2016b
Updated SRC_URI to http as it seems more stable.

Changes affecting future time stamps

     New zones Europe/Astrakhan and Europe/Ulyanovsk for Astrakhan and
     Ulyanovsk Oblasts, Russia, both of which will switch from +03 to +04 on
     2016-03-27 at 02:00 local time.  They need distinct zones since their
     post-1970 histories disagree.  New zone Asia/Barnaul for Altai Krai and
     Altai Republic, Russia, which will switch from +06 to +07 on the same date
     and local time.  Also, Asia/Sakhalin moves from +10 to +11 on 2016-03-27
     at 02:00.  (Thanks to Alexander Krivenyshev for the heads-up, and to
     Matt Johnson and Stepan Golosunov for followup.)

     As a trial of a new system that needs less information to be made up,
     the new zones use numeric time zone abbreviations like "+04"
     instead of invented abbreviations like "ASTT".

     Haiti will not observe DST in 2016.  (Thanks to Jean Antoine via
     Steffen Thorsen.)

     Palestine's spring-forward transition on 2016-03-26 is at 01:00, not 00:00.
     (Thanks to Hannah Kreitem.) Guess future transitions will be March's last
     Saturday at 01:00, not March's last Friday at 24:00.

Changes affecting past time stamps

     Europe/Chisinau observed DST during 1990, and switched from +04 to
     +03 at 1990-05-06 02:00, instead of switching from +03 to +02.
     (Thanks to Stepan Golosunov.)

     1991 abbreviations in Europe/Samara should be SAMT/SAMST, not
     KUYT/KUYST.  (Thanks to Stepan Golosunov.)

(From OE-Core rev: 7d2ade652954f51345fde61976a899b8aafd79a1)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:03:08 +01:00
Scott Rifenbark
bb5e264604 yocto-project-qs: Updated the minnowboard example.
Fixes [YOCTO #9386]

Added some missing information:

 * Added instruction to be in the poky directory before cloning
   the meta-intel repository.

 * Removed the "source" part of the string for the bitbake-layer
   command.

 * Added text to describe that the user needs to be sure that the
   same branches are in play for poky and meta-intel before they
   launch the build.

(From yocto-docs rev: a9b85623b1aa30362e9c38ea8f4fd38f35798f67)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:05 +01:00
Scott Rifenbark
2d452b19d6 poky.ent: Added lower-case distro name variable.
I added a variable named DISTRO_NAME_NO_CAP that can be used
to resolve to the branch name as it is needed on command lines
and as it appears in output.

(From yocto-docs rev: e0e27a3623ee90701367162affd9c5d3806297e5)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Awais Belal
f87869c6d6 lttng-tools: fix regression tests hang
Some of the lttng fast_regression ptests have race
conditions which end up in a deadlock, so the test
case never returns and the only way around it is to
kill the process.
This is fixed by picking up relevant patches from
lttng-tools mainstream that fix up the behavior
of these tests.

(From OE-Core rev: 7c5fbfc13a541e904022e19eff8251f1cdf764f5)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Ross Burton
a820a2073b ncurses: update SRC_URI
Upstream re-arranged their FTP server and deleted the tarball that we were
downloading.  This tarball is mirrored on downloads.yoctoproject.org but not
everyone uses that, so work around this by pointing the SRC_URI at the Yocto
Project source mirror directly.

[ YOCTO #9379 ]

(From OE-Core rev: d64047b2e28f89b0efbfbced48149e1a86babc61)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Juro Bystricky
bdd03ee432 python3: fix building nativesdk-python3
When nativesdk.bbclass is inherited, it redefines TARGET_CC_ARCH;
in the case of python3, this enables debug, causing an error while linking.
Since we don't enable debug during configure, some functions are not declared.
This patch makes sure we keep debug disabled, fixing the linking errors.

[YOCTO #9357]

(From OE-Core rev: 2dd22dff121b3effe40abe4370de89231785a823)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Awais Belal
87f7f062df systemd-serialgetty: allow baud rate overriding
In case a getty is required on a UART which is not being
used as the kernel console, the current agetty invocation
fails to obey the baud rate configured through the
SERIAL_CONSOLES variable because it uses --keep-baud.
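
For reference, SERIAL_CONSOLES lists "baudrate;device" pairs in the machine
configuration; a hypothetical example (device names and rates are illustrative):

    SERIAL_CONSOLES = "115200;ttyS0 57600;ttyS2"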

(From OE-Core master rev: b54b73834e73d55de1038b55d0a4d7f49cda52d0)

(From OE-Core rev: 4e9d7fc44a1fcefe15dd66905ae0dbbc7dc1ca9d)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Lukas Bulwahn
16cb70663f boost: ensure boost to remain an empty metapackage
To ensure that boost remains an empty metapackage after version
updates, we explicitly require boost files to be empty. If new
libraries exist after a version update of the boost recipe,
bitbake will emit a warning at the do_package task. For example,
at the version update from 1.58.0 to 1.59.0, the new timer
library is indicated with:

WARNING: QA Issue: boost: Files/directories were installed but not shipped in any package:
  /usr/lib/libboost_timer.so.1.59.0
Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
boost: 1 installed and not shipped files. [installed-vs-shipped]

Ross Burton suggested this improvement on the openembedded-core
mailing list during review of the boost recipe version update [1].

[1] http://lists.openembedded.org/pipermail/openembedded-core/2015-December/114314.html
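
A minimal sketch of enforcing an empty metapackage in the recipe (illustrative;
the actual change may differ in detail):

    ALLOW_EMPTY_${PN} = "1"
    FILES_${PN} = ""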

(From OE-Core master rev: c4e33232db2da3594cc4ba38eea56ee1acb54d3a)

(From OE-Core rev: 90dcc9838e5be74f5ec7a8380cf6da3bddb1c955)

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@oss.bmw-carit.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Christopher Larson
3f54d40e23 systemd: chown hwdb.bin to root:root for do_rootfs
This is created by qemu for the do_rootfs case, which bypasses pseudo, so we
need to correct the ownership. This fixes a warning issued by
rootfs_check_host_user_contaminated.

(From OE-Core master rev: 4ff6b8cadec10e17dbf884a873a227e29944f5d1)

(From OE-Core rev: 36eb5b6e75361053b5dd00652df6361499d8a645)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Ross Burton
909cf62394 cdrtools: update SRC_URI
Upstream released their 3.01 so the alpha releases we were downloading have
moved.  Update the SRC_URI so it continues to download.

(From OE-Core rev: 2ba9f90e86d25aa0b9319093478ea2218e1423e4)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:04 +01:00
Li Xin
e86f3240e6 gcc-cross-canadian.inc: add INSANE_SKIP_ to avoid build warning
WARNING: QA Issue: gcc-cross-canadian-i586-dbg: found library in wrong location:
/PATH/sysroots/x86_64-oesdk-linux/usr/libexec/i586-oe-linux/gcc/
i586-oe-linux/5.2.0/.debug/libcc1.so.0.0.0

This warning is introduced by commit f6e47aa(gcc-target 5.1: fix for libcc1)
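
A sketch of skipping the 'libdir' QA test for the affected -dbg package (the
exact package name and test list are assumptions based on the warning above):

    INSANE_SKIP_${PN}-dbg += "libdir"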

(From OE-Core rev: 62c51c4178fb66341498c71c74ce42652568c7fa)

Signed-off-by: Li Xin <lixin.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:03 +01:00
Bill Randle
2aeac77235 systemd: fix segfault on shutdown
This applies upstream fixes for a segfault in systemd-logind on
shutdown.

[Fixes YOCTO #9265]

(From OE-Core rev: 4939402d8c67d68e20618cdfdd091bd8cc3f535a)

Signed-off-by: Bill Randle <william.c.randle@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:03 +01:00
Ulrich Ölmann
9e5370d2e6 nfs-utils: bugfix: adjust name of statd service unit
Upstream nfs-utils use 'rpc-statd.service' and Yocto introduced
'nfs-statd.service' instead but forgot to update the mount.nfs helper
'start-statd' accordingly.

(From OE-Core rev: 48d1a2882bedc1c955071b3602dc640b530fbc47)

Signed-off-by: Ulrich Ölmann <u.oelmann@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:03 +01:00
Brad Mouring
07682c1bfb busybox_git: Fix SRCREV
The SRCREV in the busybox git recipe did not point to a commit ID
on the master branch. Point the variable to something reachable from
the master branch (which fixes this recipe's fetch()).

Suggested-by: Khem Raj <raj.khem@gmail.com>
(From OE-Core rev: 6ff2acbc72dc958cb3b97998462015010c44d946)

Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:03 +01:00
Brad Mouring
09e3b84ea5 busybox-1.23: Backport patch to fix zcip false-conflict
Busybox upstream fixed, in 1.24, the issue where an incorrect comparison of
addresses led to bogus renegotiation of a new link-local IP. Backport
this change to 1.23.2.

(From OE-Core rev: 47cb52741c946b6bbe09d5ee9a9f2fe855e8d5fb)

Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:03 +01:00
Javier Viguera
4fe84e836d bluez5: allow D-Bus to spawn obexd in systems without systemd
This includes a proper D-Bus service file for obexd in systems that do
not support systemd.

(From OE-Core rev: 75c5dc8d4a5506bf5b89292a96c7b9f91e9d71c8)

(From OE-Core rev: a68ff298c8466adbce5f81b4f8104dfdc226eaf7)

Signed-off-by: Javier Viguera <javier.viguera@digi.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:02 +01:00
Khem Raj
a9e1361611 ruby-native: Depend on openssl-native
This dependency is otherwise floating: the build races against openssl-native,
and when the openssl config does not match the openssl on the build host the
build occasionally fails with:

x86_64-linux/usr/include/openssl/ripemd.h:70:4: error: #error RIPEMD is
disabled.
 #  error RIPEMD is disabled.
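
A sketch of pinning the dependency for the native case (exact placement in the
ruby recipe/include is assumed):

    DEPENDS_append_class-native = " openssl-native"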

Change-Id: I5ff6d8f058ff99c64ad4dc7c0377724071003ae6
(From OE-Core master rev: d0c8d98077622a700d92384f676770cb4d6d4f46)

(From OE-Core rev: 0e3888cc455139bc5ca6080b1d2bc897f42ef7ad)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-04-11 22:02:02 +01:00
Markus Lehtonen
3b223f75ee devtool: extract: update SRCTREECOVEREDTASKS for kernel
Add 'do_kernel_configme' and 'do_kernel_configcheck' to
SRCTREECOVEREDTASKS of kernel packages. These tasks should not be run
because kernel meta in the srctree is not necessarily up-to-date or
even present which causes build failures and/or invalid kernel config.
Especially so because 'do_patch' which is a dependency of
'do_kernel_configme' is not being run.

We now store .config in the srctree and 'do_configure' task is able to
run successfully.
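
Roughly, the generated externalsrc configuration then covers the kernel config
tasks as well (a sketch; the assignment operator used by devtool is assumed):

    SRCTREECOVEREDTASKS += "do_kernel_configme do_kernel_configcheck"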

(From OE-Core master rev: 7ce4c18a4ba1ebcb9f46e652a881ace1f21d2292)

(From OE-Core rev: 4d879cb8d7384ac4a96f22c1664b8875f2d8f615)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-20 10:23:02 +00:00
Markus Lehtonen
42ce9b8751 devtool: extract: copy kernel config to srctree
This makes the correct kernel config to be used when building kernel
from srctree (extrernalsrc). If no kernel config is present in the
builddir 'do_configure' task copies .config from the srctree.

(From OE-Core master rev: 3b516332e038a587685f6e0c14a7f04990bdd6cc)

(From OE-Core rev: 32593f2b6a44a7bfdab55aec7e172476020fd4eb)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-20 10:23:01 +00:00
Peter Kjellerstedt
45a2977b83 lib/oe/patch: Make GitApplyTree._applypatch() support read-only .git/hooks
Rather than modifying files in .git/hooks, which can be read-only
(e.g., if it is a link to a directory in /usr/share), move away the
entire .git/hooks directory temporarily.

(From OE-Core master rev: a88d603b51a9ebb39210d54b667519acfbe465c3)

(From OE-Core rev: 09a2718cb030f8cce202ded0e823cadea4c71f6a)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-20 09:55:53 +00:00
Chang Rebecca Swee Fun
c8e5c38b8a tune-corei7.inc: Fix PACKAGE_EXTRA_ARCHS for corei7-32
Change the name to core2-32 from core2.

There's no AVAILTUNES with the name core2. Make sure that we specify
the correct TUNE name so PACKAGE_EXTRA_ARCHS is expanded correctly.

[ YOCTO #9197 ]
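
The corrected line in tune-corei7.inc then references the core2-32 tune,
roughly (a sketch based on the description above, not the quoted diff):

    PACKAGE_EXTRA_ARCHS_tune-corei7-32 = "${PACKAGE_EXTRA_ARCHS_tune-core2-32} corei7-32"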

(From OE-Core rev: 0903d6f0098f112d4263812df109e0c44c166db8)

(From OE-Core rev: 883c38cf0e59082276f933f9b47e276b6b88270f)

Signed-off-by: Chang Rebecca Swee Fun <rebecca.swee.fun.chang@intel.com>
Signed-off-by: Anuj Mittal <anujx.mittal@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-15 11:29:04 +00:00
Jagadeesh Krishnanjanappa
4823395a7d license.bbclass: fix host contamination warnings for license files
We get below host contamination warnings of license files for
each recipe, when we try to create a separate ${PN}-lic package (which
contains license files), by setting LICENSE_CREATE_PACKAGE equal to "1"
in local.conf.

-- snip --
WARNING: QA Issue: libcgroup: /libcgroup-lic/usr/share/licenses/libcgroup/generic_LGPLv2.1 is owned by uid 5001, which is the same as the user running bitbake. This may be due to host contamination [host-user-contaminated]
WARNING: QA Issue: attr: /attr-lic/usr/share/licenses/attr/libattr.c is owned by uid 5001, which is the same as the user running bitbake. This may be due to host contamination [host-user-contaminated]
WARNING: QA Issue: bash: /bash-lic/usr/share/licenses/bash/COPYING is owned by uid 5001, which is the same as the user running bitbake. This may be due to host contamination [host-user-contaminated]
-- CUT --

Since the license files from the source and OE-core are populated in a normal
shell environment rather than in the pseudo environment (fakeroot), the
ownership of these files will be the same as the host user running bitbake.
During the do_package task (which runs in the pseudo environment (fakeroot)),
os.link preserves the ownership of these license files as the host user
instead of the root user. This causes the license files to have the same UID
as the host user, resulting in the above warnings during the do_package_qa task.

Changing the ownership of the license files to the root user (which has UID and
GID 0) under the pseudo environment solves the above warnings, and on exiting
the pseudo environment the license files continue to be owned by the host user.
Perform this manipulation within try/except statements, as tasks which are not
executed under pseudo (such as do_populate_lic) result in an OSError when
trying to change the ownership of the license files.

(From OE-Core master rev: a411e96c3989bc9ffbd870b54cd6a7ad2e9f2c61)

(From OE-Core rev: c87a3507c4557827b3a495a876cf6411ce225407)

Signed-off-by: Jagadeesh Krishnanjanappa <jkrishnanjanappa@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-15 11:29:04 +00:00
Mariano Lopez
4a4fde53bd dhcp: CVE-2015-8605
ISC DHCP allows remote attackers to cause a denial of
service (application crash) via an invalid length field
in a UDP IPv4 packet.

(From OE-Core master rev: f9739b7fa8d08521dc5e42a169753d4c75074ec7)

(From OE-Core rev: 71c92a9e62f4278a946e272b0798d071191dd751)

Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-15 11:29:04 +00:00
Chang Rebecca Swee Fun
f8dd7e105a make 4.1: fix segfault when ttyname fails
GNU make segfaults when run in a chroot environment because
of a known bug in GNU make 4.1. See [1] for details.

Works if /dev/pts is mounted before chroot.

[1] http://savannah.gnu.org/bugs/?43434

[YOCTO #9067]

Reported-by: Alexander Larsson <alexl@redhat.com>
(From OE-Core master rev: 0fe2a4b428b1b9a937914d87ec089b5a64f641eb)

(From OE-Core rev: 1def72ab689bbf0d2974ab771febf241befa2495)

Signed-off-by: Anuj Mittal <anujx.mittal@intel.com>
Signed-off-by: Chang Rebecca Swee Fun <rebecca.swee.fun.chang@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-15 11:29:04 +00:00
Ross Burton
269c2bd717 xorg-lib: allow native building without x11 DISTRO_FEATURES
The Xorg libraries use REQUIRED_DISTRO_FEATURES to stop building on
distributions without the x11 feature but this stops people building native
tooling that uses libX11, such as libsdl-native.

(From OE-Core rev: f2970211690be3cb99ef7404f98010f3fecae45d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-15 11:29:04 +00:00
Ross Burton
1a51bb69b7 base: check for existing prefix when expanding names in PACKAGECONFIG
When the DEPENDS are added as part of the PACKAGECONFIG logic the list of
packages are expanded so that any required nativesdk-/-native/multilib prefixes
and suffixes are added.

However the special handling of virtual/foo names doesn't check that the prefix
already exists, which breaks under nativesdk as in that situation there's an
explicit nativesdk- prefix *and* MLPREFIX is set to nativesdk-.  This results in
the same prefix being applied twice, and virtual packages such as virtual/libx11
ending up as virtual/nativesdk-nativesdk-libx11.

(From OE-Core rev: 9e7d207e207bf0319b09d403d87d37f24e3dfbee)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 23:15:01 +00:00
Scott Rifenbark
c484129d7b documentation: Final bits to support a 2.0.1 doc release
Edits included:

 * Update to poky.ent to have 2.0.1 variable values.
 * Update to all Manual revision tables.
 * Update to mega-manual.sed file so good links result in the
   mega-manual.

(From yocto-docs rev: d7277ca5c6863a116816ff81683a694a337de575)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 23:05:48 +00:00
Ross Burton
5f27b611cd conf/local.conf.sample: comment out ASSUME_PROVIDED=libsdl-native
Ubuntu 15.10 and Debian testing can't build qemu-native against the host libsdl.
Now that libsdl-native is buildable, comment out the ASSUME_PROVIDED which meant
it wouldn't be used.
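
After this change the sample configuration carries the line commented out, so
the in-tree libsdl-native is built and used by default; roughly:

    #ASSUME_PROVIDED += "libsdl-native"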

[ YOCTO #8553 ]

(From meta-yocto rev: 759accbfca46de058ce402938713189dab22a70c)

(From meta-yocto rev: 32a797541bec9c8b13955f5a060558fe64c4fefc)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 11:00:21 +00:00
Craig McQueen
6e32be7c7b os-release: put double-quotes around variable contents
This makes the resulting /etc/os-release file have valid shell
assignment syntax. This makes it loadable by a shell script, using the
'source' command:

    source /etc/os-release
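
An illustrative excerpt of the resulting file (the field values here are
assumptions, not taken from a real build):

    ID="poky"
    NAME="Poky (Yocto Project Reference Distro)"
    VERSION_ID="2.0.2"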

(From OE-Core rev: bab590d738e218fb2da2b3bf27933fe4562de870)

Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: f6e0ea000fa3b9a726ab56500f643f9902371618)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Arnold Csorvasi
d6fed74776 image_types_uboot: add cpio.gz.uboot to supported IMAGE_TYPES
U-Boot needs the U-Boot header in a ramdisk image to boot it.
Add this header to the cpio.gz image, so that it can be booted
with U-Boot.

(From OE-Core rev: 240ecb6ac624cd6e5d813d8144c7a7f2d7adb31f)

Signed-off-by: Arnold Csorvasi <arnold.csorvasi@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: 8376fa3d4ef6175b83ab7f1ec8e4e20ec14964f4)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Ross Burton
6e8edf0e0f libsdl: expand PACKAGECONFIG and enable native builds
Use PACKAGECONFIG instead of using logic in DEPENDS and EXTRA_OECONF, adding new
options for PulseAudio, tslib, DirectFB, OpenGL and X11.  Pass
--disable-x11-shared so that it links to the X libraries instead of using
dlopen().

Disable tslib by default as the kernel event input subsystem is generally used.

SDL's OpenGL support requires X11 so check for both x11 and opengl, and merge
the dependencies.

Finally enable native builds, with a minimal PACKAGECONFIG that will build from
oe-core for native and nativesdk.

(From OE-Core rev: 66205c6096ce9d8bc828bf9b61d927cb495f69b1)

Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: 3d6c31c3a4ff34376e17005a981bb55fc6f7a38f)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Mariano Lopez
87ba508688 image_types.bbclass: Rebuild when WICVARS change
The procces to do a wic image is to save a file with
variables required by wic and then call wic using this
file. Because this is external to bitbake if the vars
change, the image won't be rebuild; an example of such
is IMAGE_BOOT_FILES.

This patch adds these variables to vardeps of do_rootfs
when a wic image is build. This will rebuild the image
if a variable needed by wic changes.

[YOCTO #8693]
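
Conceptually the change is equivalent to adding the wic variables to the
task's vardeps, e.g. in varflag form (a sketch; the real implementation may
do this programmatically):

    do_rootfs[vardeps] += "${WICVARS}"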

(From OE-Core rev: 91d4706d356659e46923a8314f1a2aa259ead4fe)

Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: 12c54d50ed4c321dc272beb3c6cb770965c979f1)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Christopher Larson
9991263ffe image_types: improve wks path specification
Hardcoding a full input path with zero flexibility goes against everything the
Yocto Project is about. Rework it to let the user specify the wks base
filename with WKS_FILE and it'll search the layers for the wks file and use
it.

(From OE-Core rev: cb5c5d950a83b85881eeadc0362230fa2720962f)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: 8cc7f5229f5447c2183ac319dd52c7ed737ec89b)
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Noor Ahsan
b15baaee6f wic: rawcopy: Copy source file to build folder
When a file is given using --sourceparams, wic uses that file directly
instead of copying it to the build folder. At assembly time os.rename
is called, which renames all the files to the target name; in that process
the original file is renamed. When the image recipe is rebuilt, wic complains
about the missing file which was renamed in the previous build.

[YOCTO #8854]

(From OE-Core rev: d3dee0f4107156442238c9ea82f742afeeb0665a)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>

(From OE-Core master rev: 33c52b1f2d39feb641465bf42e8b16d0ab22a316)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-11 10:57:03 +00:00
Scott Rifenbark
1a52eceaa5 ref-manual: Corrected Note for CentOS package requirements
Fixes [YOCTO #8324]

I needed to change "older" to "newer" in the note.  I had it
backwards.

(From yocto-docs rev: 73107e18cd342624890264b3b127adc478bc9193)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:20 +00:00
Scott Rifenbark
6d601592e1 ref-manual: Updated the S variable description with feedback
Applied wording feedback.

Fixes [YOCTO #8542]

(From yocto-docs rev: 7f2ed81317e26fb5d3dd3003cd96b3691174c5c0)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:19 +00:00
Scott Rifenbark
75c088f2e2 ref-manual: Updated the staging.bbclass description
Fixes [YOCTO #8800]

Provided better wording.

(From yocto-docs rev: 68be69d758b7638ddb824bbec89e76cf7dc026ff)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:19 +00:00
Scott Rifenbark
41265570c6 ref-manual: Updated the S variable description.
Fixes [YOCTO #8542]

I updated the description with a new example specific to Git.
When you use Git, you have to specifically set the S directory
for things to work.

(From yocto-docs rev: e31f6ba125c4e173832793c14c931c8298ba3510)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:19 +00:00
Scott Rifenbark
5ca77b2fc6 dev-manual, ref-manual: Updated licensing text information.
Fixes [YOCTO #8634]

To clear up the behavior the COPY_LIC_DIRS, COPY_LIC_MANIFEST,
and LICENSE_CREATE_PACKAGE variable behaviors, I updated the
glossary descriptions of the variables.  Also, added more info
to the "Providing License Text" section in the dev-manual.  Tied
everything together with good referencing.

(From yocto-docs rev: d1f8fb672aeba8b163cc79d5043e6ffcddc9db25)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:19 +00:00
Scott Rifenbark
a4398c7ff7 ref-manual: Added order information for conf file parsing.
I included a new paragraph at the end of the section describing
configuration in the "Closer Look" chapter.  Cases exist when
two configuration files set the same variable.  Depending on the
order, the last configuration file parsed is the one that actually
sets the variable.

Fixes [YOCTO #8914]

(From yocto-docs rev: ce3f2344550ae1b735082d10f4f17ff555d24c38)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 17:40:19 +00:00
Armin Kuster
05c31507da openssl: Security fix CVE-2016-0800
CVE-2016-0800 SSL/TLS: Cross-protocol attack on TLS using SSLv2 (DROWN)

https://www.openssl.org/news/secadv/20160301.txt

(From OE-Core rev: c99ed6b73f397906475c09323b03b53deb83de55)

Signed-off-by: Armin Kuster <akuster@mvista.com>

Not required for master, an update to 1.0.2g has been submitted.
Backport to fido is required.
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 10:38:46 +00:00
Hongxu Jia
6945a4fdde wpa-supplicant: Fix CVE-2015-8041
Backport patch from http://w1.fi/security/2015-5/
and rebase for wpa-supplicant 2.4

(From OE-Core rev: 4d0ebfd77c07475494665dde962137934dd2194a)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>

Not needed in master since the upgrade to 2.5
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-03-03 10:38:46 +00:00
Richard Purdie
b1f23d1254 build-appliance-image: Update to jethro head revision
(From OE-Core rev: 0c702756dd0009c4112028fbf2479a346867b32c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-24 09:04:22 +00:00
Armin Kuster
7fe17a2942 qemu: Security fix CVE-2016-2198
CVE-2016-2198 Qemu: usb: ehci null pointer dereference in ehci_caps_write

(From OE-Core rev: 646a8cfa5398a22062541ba9c98539180ba85d58)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-21 09:37:33 +00:00
Armin Kuster
50700a7da6 qemu: Security fix CVE-2016-2197
CVE-2016-2197 Qemu: ide: ahci null pointer dereference when using FIS CLB engines

(From OE-Core rev: ca7cbcf22558349f0b43ed7dc84ad38d7c178c55)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-21 09:37:33 +00:00
Armin Kuster
1f0e615bec libgcrypt: Security fix CVE-2015-7511
CVE-2015-7511 libgcrypt: side-channel attack on ECDH with Weierstrass curves

affects libgcrypt < 1.6.5

Patch 1 is a dependency patch (a simple macro name change).
Patch 2 is the CVE fix.

(From OE-Core rev: c691ce99bd2d249d6fdc4ad58300719488fea12c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-21 09:37:33 +00:00
Armin Kuster
dc5f155e15 uclibc: Security fix CVE-2016-2225
CVE-2016-2225 Make sure to always terminate decoded string

This change is being provided to comply with Yocto compatibility.

(From OE-Core rev: 093d76f3f4a385aae46304bd572ce1545c6bcf33)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-21 09:37:33 +00:00
Armin Kuster
ef135112fd uclibc: Security fix CVE-2016-2224
CVE-2016-2224 Do not follow compressed items forever.

This change is being provided to comply with Yocto compatibility.

(From OE-Core rev: 4fe0654253d7444f2c445a30b06623cef036b2bb)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-21 09:37:32 +00:00
Armin Kuster
ae57ea03c6 libbsd: Security fix CVE-2016-2090
CVE-2016-2090 Heap buffer overflow in fgetwln function of libbsd

affects libbsd <= 0.8.1 (and therefore not needed in master)

(From OE-Core rev: e56aba3a822f072f8ed2062a691762a4a970a3f0)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-18 10:57:11 +00:00
Armin Kuster
eb9666a3e2 glibc: Security fix CVE-2015-7547
CVE-2015-7547: getaddrinfo() stack-based buffer overflow

(From OE-Core rev: cf754c5c806307d6eb522d4272b3cd7485f82420)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-18 07:42:07 +00:00
Richard Purdie
5b12268f6e build-appliance-image: Update to jethro head revision
(From OE-Core rev: 05e551d821594b0f4c06328386b6a82e0801ac2a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:57:07 +00:00
Armin Kuster
a3a374a639 curl: Security fix CVE-2016-0755
CVE-2016-0755 curl: NTLM credentials not-checked for proxy connection re-use

(From OE-Core rev: 8322814c7f657f572d5c986652e708d6bd774378)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
f4341a9b6f curl: Security fix CVE-2016-0754
CVE-2016-0754 curl: remote file name path traversal in curl tool for Windows

(From OE-Core rev: b2c9b48dea2fd968c307a809ff95f2e686435222)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
35f4306ed4 nettle: Security fix CVE-2015-8804
(From OE-Core rev: 7474c7dbf98c1a068bfd9b14627b604da5d79b67)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
3e8a07b901 nettle: Security fix CVE-2015-8803 and CVE-2015-8805
(From OE-Core rev: f62eb452244c3124cc88ef01c14116dac43f377a)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
5ffc3267e7 socat: Security fix CVE-2016-2217
This addresses both
Socat security advisory 7 (MSVR-1499: "Bad DH p parameter in OpenSSL")
and Socat security advisory 8 ("Stack overflow in arguments parser").

[Yocto # 9024]

(From OE-Core rev: 0218ce89d3b5125cf7c9a8a91f4a70eb31c04c52)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
5cc5f99bba libpng: Security fix CVE-2015-8472
libpng: Buffer overflow vulnerabilities in png_get_PLTE/png_set_PLTE functions

this patch fixes an incomplete patch in CVE-2015-8126

(From OE-Core rev: f4a805702df691cbd2b80aa5f75d6adfb0f145eb)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
21a816c73a libpng: Security fix CVE-2015-8126
libpng: Buffer overflow vulnerabilities in png_get_PLTE/png_set_PLTE functions

(From OE-Core rev: d0a8313a03711ff881ad89b6cfc545f66a0bc018)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
6a0fbfaeb5 foomatic-filters: Security fixes CVE-2015-8327
CVE-2015-8327 cups-filters: foomatic-rip did not consider the back tick as an illegal shell escape character

This time with the recipe changes included.

(From OE-Core rev: 62d6876033476592a8ca35f4e563c996120a687b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:24 +00:00
Armin Kuster
d57aaf7a39 foomatic-filters: Security fix CVE-2015-8560
CVE-2015-8560 cups-filters: foomatic-rip did not consider semicolon as illegal shell escape character

(From OE-Core rev: 307056ce062bf4063f6effeb4c891c82c949c053)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 22:55:23 +00:00
Richard Purdie
941874ae29 build-appliance-image: Update to jethro head revision
(From OE-Core rev: a2b1d9a6f0f29a2d21c80e549b10f3522df20c11)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 17:23:54 +00:00
Jens Rehsack
d74a3cb765 cross-localedef-native: add ABI breaking glibc patch
Add patch from commit 96b1b5c127 to cross-localedef-native
to avoid broken images built with ENABLE_BINARY_LOCALE_GENERATION set to 1:

    $ sh -c "export LANG=de_DE; ls -la"
    sh: loadlocale.c:130: _nl_intern_locale_data: Assertion `cnt < (sizeof (_nl_value_type_LC_COLLATE) / sizeof (_nl_value_type_LC_COLLATE[0]))' failed.
    Aborted

(From OE-Core rev: 2ddfcfaa996d8c675b5c161acb605dc5573eba67)

Signed-off-by: Jens Rehsack <sno@netbsd.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-07 17:23:01 +00:00
Richard Purdie
12fae23964 build-appliance-image: Update to jethro head revision
(From OE-Core rev: 113812945c3cddfec75d67d781c0fa2d7ee02762)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-05 11:17:08 +00:00
Richard Purdie
67ac9d6254 e2fsprogs: Ensure we use the right mke2fs.conf when restoring from sstate
If we don't do this, we can use an mke2fs.conf from a different path which
may contain incompatible flags and lead to obtuse build failures such as:

Invalid filesystem option set: has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize

To fix this, wrap the mke2fs binary and its hardlinks and point at the
correct configuration file.

In particular this fixes conflicts between master and jethro builds
affecting the main autobuilder.

(From OE-Core rev: 0ef6277463517fb0e52b4bd65ca5f6ab42315773)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-05 11:16:46 +00:00
Richard Purdie
5812fc9e20 build-appliance-image: Update to jethro head revision
(From OE-Core rev: f3831307d7c849e60c4141f7bfe4067ec5ff224a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:23:31 +00:00
Scott Rifenbark
3de249206e ref-manual: Updated host package install requirements CentOS
Put in a caveat (a new note) about getting the ADT Installer to work
with CentOS 6.x.

Fixes [YOCTO #8324]

(From yocto-docs rev: 6ee7696537ca2031073cc59a42ff035cfd8caeec)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:22:30 +00:00
Belen Barros Pena
79de8cf5fa toaster-manual: Updated the "Installation" section to include TOASTER_DIR information
In section 3.6 of the manual about setting up a production instance of
Toaster, explain that TOASTER_DIR determines the location of the build
directory, and that the checksettings command configures the build
environment for Toaster.

NOTE: I applied some minor fixes to the wording.

(From yocto-docs rev: 5d899f3026cff40078449ca8bdaba680f79ee0a8)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:22:30 +00:00
Scott Rifenbark
a23d2625e2 toaster-manual: Updated instructions for production setup.
Current instructions were wrong.  Applied changes to correct
them.

Author: Belen Barros Pena <belen.barros.pena@intel.com>
(From yocto-docs rev: 609e7bd8847cba70e49f4c8a58524392fdc1bd41)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:22:30 +00:00
Alejandro Hernandez
b6def81ff5 linux-yocto: Update SRCREV for genericx86* for 4.1, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring(), and upgrades to LINUX_VERSION 4.1.17

(From meta-yocto rev: 2aab8657999c2bcf6e7a54f1085664207ba3ac93)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Alejandro Hernandez
db0f8ac8b3 linux-yocto: Update SRCREV for genericx86* for 3.19, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring()

(From meta-yocto rev: 20c1e1e8ec2f18fbbb47b6dbc27dd7dfa15922fb)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Alejandro Hernandez
c8122a088f linux-yocto: Update SRCREV for genericx86* for 3.14, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring(), and upgrades to LINUX_VERSION 3.14.39

(From meta-yocto rev: 47a81a47c5f1f2625365ab7a2f130b75fb5764fd)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Jianxun Zhang
cdeb2415dd meta-yocto-bsp: Remove uvesafb (v86d) from generic x86 features
When uvesafb is automatically loaded during boot and FW doesn't
support legacy video bios and frame buffer, its user space helper
will throw error messages in kernel log:

[6.843790] uvesafb: Getting VBE info block failed (eax=0x4f00, err=1)
[6.843864] uvesafb: vbe_init() failed with -22
[6.843916] uvesafb: probe of uvesafb.0 failed with error -22

Assuming most x86 boards today don't really rely on this module, this
change simply removes it from the common feature list to get rid of
these harmless messages.

[YOCTO #6584]

(From meta-yocto rev: d58fc630b1114dbafa8342de7dcaef8e7d798848)

(From meta-yocto rev: 8b08977dc9f2d9ff4fd5ecf4ead24a36dcbda542)

Signed-off-by: Jianxun Zhang <jianxun.zhang@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 6af89812e8)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Leonardo Sandoval
52cd219877 yocto-bsp: Set SRCREV meta/machine revisions to AUTOREV
By default, check out the latest revision from the machine branch specified by
the user.

(From meta-yocto rev: f79a43406b5b323587415380ecffc87527c64653)

(From meta-yocto rev: 311e084bb321701624785ce56a1ad23d7b20b396)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a35f79ddd8)
Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Leonardo Sandoval
a88d6cb170 yocto-bsp: Set KTYPE to user selected base branch
Fixes the hardcoded branch name assigned to KTYPE, whose value is used as the base branch
when the user decides to create a new branch. Tested on the x86_64 architecture.

[YOCTO #8630]

(From meta-yocto rev: ab895be90a0cae7dfa77a8aab3b19e5571e7e7bc)

(From meta-yocto rev: bc5aec2348b2c314953806734a8fbabf798d142c)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9d585b5025)
Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Leonardo Sandoval
4e74b36458 yocto-bsp: Avoid duplication of user patches ({{=machine}}-user-patches.scc)
In the linux-yocto-dev and linux-yocto_X.YY bbappend files, the SRC_URI includes
{{=machine}}-standard.scc, which in turn includes {{=machine}}-user-patches.scc,
so there is no need to include it again in the corresponding bbappend file.

[YOCTO #8486]

(From meta-yocto rev: 11c93b5dd8c651df478d4810e1b6ff6ad9fa57e8)

(From meta-yocto rev: c1105ff0e65a24f344e5fab17402b1b4fcb1d728)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f674ffa528)
Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Leonardo Sandoval
66807731c7 yocto-bsp: Default kernel version to 4.1 on x86_64
During the 3.19 to 4.1 migration, the x86_64 target was not taken into account
(for no particular reason; the corresponding update to the kernel-list.noinstall
file was simply missed), so move it to 4.1 to align it with the rest.

(From meta-yocto rev: 283665d9295c3c10f964496dc0110137e358daa6)

(From meta-yocto rev: d58d3c5e65294bd6f4f3f780d746e1c3f8678c2b)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 9cc221dcb6)
Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
Ross Burton
4c075e7114 piglit: don't use /tmp to write generated sources to
If there are multiple builds on the same machine then piglit writing its
generated sources to /tmp will race.  Instead, export TEMP to tell the tempfile
module to use a temporary directory under ${B}.

(From OE-Core rev: 226a26e51eb0789686509d3e22a3766e2e3e8666)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:18 +00:00
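
A minimal Python sketch (an editorial illustration, not the recipe change itself) of why exporting TEMP redirects the generated sources in the piglit fix above: Python's tempfile module consults the TMPDIR, TEMP and TMP environment variables, in that order, before falling back to /tmp, so pointing TEMP at a per-build directory keeps temporary output out of the shared /tmp. The path below is a hypothetical stand-in for ${B}.

    import os
    import tempfile

    build_tmp = "/path/to/build/tmp"    # hypothetical stand-in for ${B}
    os.makedirs(build_tmp, exist_ok=True)
    os.environ["TEMP"] = build_tmp      # consulted after TMPDIR, before TMP

    tempfile.tempdir = None             # drop any cached default location
    print(tempfile.gettempdir())        # -> /path/to/build/tmp
    with tempfile.NamedTemporaryFile(suffix=".c") as f:
        print(f.name)                   # generated file lands under the build dir, not /tmp
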
Paul Eggleton
ee52ac6e85 gen-lockedsig-cache: fix bad destination path joining
When copying the sstate-cache into the extensible SDK, if the source
path had a trailing / and the destination path did not, there would be a
missing / between the path and the subdirectory name, and you'd end up
with subdirectories like "sstate-cacheCentOS-6.7". There are functions
in os.path for this sort of thing so let's just use them and avoid the
problem.

(From OE-Core rev: 2ed6adfea5ba16aeda7b5d908bea4303202d3774)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 5eb8f15c48b5f39a10eb2b63b026cf1ebfd05533)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
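
A small Python sketch of the path-joining pitfall the gen-lockedsig-cache fix above describes; the paths are illustrative.

    import os

    dest = "/sdk/sstate-cache"              # destination without a trailing slash
    subdir = "CentOS-6.7"

    broken = dest + subdir                  # -> "/sdk/sstate-cacheCentOS-6.7"
    good = os.path.join(dest, subdir)       # -> "/sdk/sstate-cache/CentOS-6.7"
    print(broken)
    print(good)                             # os.path.join adds the "/" only when needed
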
Alejandro Hernandez
e9f95df962 linux-yocto: Update SRCREV for qemux86* for 4.1, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring(), and upgrades to LINUX_VERSION 4.1.17

(From OE-Core rev: f070d5fee56a4589a6abf422e6872373c5557c6d)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Alejandro Hernandez
e63bab1a09 linux-yocto: Update SRCREV for qemux86* for 3.19, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring()

(From OE-Core rev: 8cb97ea8ed59ee77c0542b50d1af65bf9a3c3fef)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Alejandro Hernandez
64a492097f linux-yocto: Update SRCREV for qemux86* for 3.14, fixes CVE-2016-0728
This addresses CVE-2016-0728: KEYS: Fix keyring ref leak in join_session_keyring(), and upgrades to LINUX_VERSION 3.14.39

(From OE-Core rev: ce53ebc001af87d169a2e0e98ca3d7d4729fdec4)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Maxin B. John
5b043dafa3 libpng12: update URL that no longer exists
Fix the following warning:

WARNING: Failed to fetch URL http://downloads.sourceforge.net/project/
libpng/libpng12/1.2.53/libpng-1.2.53.tar.xz, attempting MIRRORS if
available.

[YOCTO #8739]

(From OE-Core rev: 02363e50b4a3d124fa71edb2870deb820567482b)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Maxin B. John
655c8a5c9d libpng: update URL that no longer exists
Fix the following warning:

WARNING: Failed to fetch URL http://downloads.sourceforge.net/
project/libpng/libpng16/1.6.17/libpng-1.6.17.tar.xz, attempting
MIRRORS if available

[YOCTO #8739]

(From OE-Core rev: dbde0550ce0cc112947367eb89b914be5b3359a7)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Ross Burton
96fda8c8f6 busybox: fix build of last applet
If CONFIG_FEATURE_LAST_SMALL is enabled the build fails because of a broken
__UT_NAMESIZE test.

[ YOCTO #8869 ]

(From OE-Core rev: 6348b2e8e0510b45f4afd2018e90796714863fc1)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Joe Slater
ae037d974e ghostscript: add dependency for pnglibconf.h
When using parallel make jobs, we need to be sure that
pnglibconf.h is created before we try to reference it,
so add a rule to png.mak.

(From OE-Core rev: 4b7bda9d1ac836de0c657cca28044b822e444bea)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit fad19750d23aad2d14a1726c4e3c2c0d05f6e13d)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Jussi Kukkonen
26eb877e18 gcr: Require x11 DISTRO_FEATURE
This enables a world build without x11. GTK3DISTROFEATURES is not
enough because gtk+-x11.pc is still required.

Fixes [YOCTO #8611].

(From OE-Core rev: b1175339287395a7ad4fe4639a73f3a1dda74358)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit dbdcd87144cc1cd6c5d50c800c7f266aaf25ca17)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Bogdan-Alexandru Voiculescu
e632cdb031 uClibc: enable utmp for shadow compatibility
With the enabling of utmpx in busybox and uClibc, it was noted that shadow
support for utmpx also needs utmp explicitly enabled in uClibc. This is
a workaround that might be removed once shadow properly supports
--enable-utmpx and checks for the utmpx configuration instead of utmp as
it does now.

[YOCTO #8243]
[YOCTO #8971]

(From OE-Core rev: 05cab660ea956aabf6e6f971bdc5c9e2d94b9f2d)

Signed-off-by: Bogdan-Alexandru Voiculescu <bogdanx.a.voiculescu@intel.com>
Signed-off-by: Benjamin Esquivel <benjamin.esquivel@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 969158d63ba2c8e2e11af41c2a6d4f1aa5b0099f)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Armin Kuster
e8c96131d9 git: Security fix CVE-2015-7545
CVE-2015-7545 git: arbitrary code execution via crafted URLs

(From OE-Core rev: 1e0780427bad448c5b3644134b581ecf1d53af84)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:17 +00:00
Armin Kuster
108ea6d05f glibc-locale: fix QA warning
WARNING: QA Issue: glibc-locale: /glibc-binary-localedata-sd-in/usr/lib/locale/sd_IN/LC_CTYPE is owned by uid 1000, which is the same as the user running bitbake. This may be due to host contamination [host-user-contaminated]

fix type
(From OE-Core rev: 9d5cd7a353ec257c88d54dd9af2327b0d86d5662)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
9a88c1d255 grub: Security fix CVE-2015-8370
CVE-2015-8370 grub2: buffer overflow when checking password entered during bootup

(From OE-Core rev: b63e3b57b47e95003a1fb014f90333c327681d5b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
443b09a61d gdk-pixbuf: Security fix CVE-2015-7674
CVE-2015-7674 Heap overflow with a gif file in gdk-pixbuf < 2.32.1

(From OE-Core rev: f2b16d0f9c3ad67fdf63e9e41f42a6d54f1043e4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
6c910685ec librsvg: Security fix CVE-2015-7558
CVE-2015-7558 librsvg2: Stack exhaustion causing DoS

including two supporting patches.

(From OE-Core rev: 4945643bab1ee6b844115cc747e5c67d874d5fe6)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
9fd2349842 bind: Security fix CVE-2015-8461
CVE-2015-8461 bind: race condition when handling socket errors can lead to an assertion failure in resolver.c

(From OE-Core rev: 1656eaa722952861ec73362776bd0c4826aec3da)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
5a40d9fb69 bind: Security fix CVE-2015-8000
CVE-2015-8000 bind: responses with a malformed class attribute can trigger an assertion failure in db.c

(From OE-Core rev: a159f9dcf3806f2c3677775d6fb131dab17a5a17)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
1bbf18385b libxml2: Security fix CVE-2015-8710
CVE-2015-8710 libxml2: out-of-bounds memory access when parsing an unclosed HTML comment

(From OE-Core rev: 03d481070ebc6f9af799aec5d038871f9c73901c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
2ec6d1dcbc libxml2: Security fix CVE-2015-8241
CVE-2015-8241 libxml2: Buffer overread with XML parser in xmlNextChar

(From OE-Core rev: f3c19a39cdec435f26a7f46a3432231ba4daa19c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:16 +00:00
Armin Kuster
55aafb547d dpkg: Security fix CVE-2015-0860
CVE-2015-0860 dpkg: stack overflows and out of bounds read

(From OE-Core rev: 5aaec01acc9e5a19374a566307a425d43c887f4b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:15 +00:00
Armin Kuster
029948bc8e tzdata: update to 2016a
Changed LIC_CHKSUM_FILES to the new LICENSE file.
Added BSD-3-clause to the licenses.

Changes affecting future time stamps

America/Cayman will not observe daylight saving this year after all.
Revert our guess that it would.  (Thanks to Matt Johnson.)

Asia/Chita switches from +0800 to +0900 on 2016-03-27 at 02:00.
(Thanks to Alexander Krivenyshev.)

Asia/Tehran now has DST predictions for the year 2038 and later,
to be March 21 00:00 to September 21 00:00.  This is likely better
than predicting no DST, albeit off by a day every now and then.

Changes affecting past and future time stamps

America/Metlakatla switched from PST all year to AKST/AKDT on
2015-11-01 at 02:00.  (Thanks to Steffen Thorsen.)

America/Santa_Isabel has been removed, and replaced with a
backward compatibility link to America/Tijuana.  Its contents were
apparently based on a misreading of Mexican legislation.

Changes affecting past time stamps
Asia/Karachi's two transition times in 2002 were off by a minute.
(Thanks to Matt Johnson.)

(From OE-Core rev: 790315dbd2dcb5b2024948ef412f32d2788cb6b5)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit 39e231cfabda8d75906c935d2a01f37df6121b84)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:15 +00:00
Armin Kuster
2bcf141c77 tzcode: update to 2016a
Changed LIC_CHKSUM_FILES to the new LICENSE file. Some files are BSD 3-clause.

Changes affecting build procedure

An installer can now combine leap seconds with use of the backzone file,
e.g., with 'make PACKRATDATA=backzone REDO=posix_right zones'.
The old 'make posix_packrat' rule is now marked as obsolescent.
(Thanks to Ian Abbott for an initial implementation.)

Changes affecting documentation and commentary

A new file LICENSE makes it easier to see that the code and data
are mostly public-domain.  (Thanks to James Knight.) The three
non-public-domain files now use the current (3-clause) BSD license
instead of older versions of that license.

tz-link.htm mentions the BDE library (thanks to Andrew Paprocki),
CCTZ (thanks to Tim Parenti), TimeJones.com, and has a new section
on editing tz source files (with a mention of Sublime zoneinfo,
thanks to Gilmore Davidson).

The Theory and asia files now mention the 2015 book "The Global
Transformation of Time, 1870-1950", and cite a couple of reviews.

The America/Chicago entry now documents the informal use of US
central time in Fort Pierre, South Dakota.  (Thanks to Rick
McDermid, Matt Johnson, and Steve Jones.)

(From OE-Core rev: 1ee9072e16d96f95d07ec5a1f63888ce4730d60e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit b7f292b84eea202fb13730c11452ac1957e41cf0)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:15 +00:00
Jianxun Zhang
cc3a391bd9 kernel-yocto: fix checkout bare-cloned kernel repositories
The existing code doesn't distinguish between regular (with .git) and bare cases and
just moves the unpacked repo into the place of the kernel source. But later
steps will fail on a bare-cloned repo because we cannot check out
directly in a bare-cloned repo.

This change performs another clone to fix the issue.

Note: This change doesn't cover the case that S and WORKDIR are same
and the repo is bare cloned.

(From OE-Core rev: f3d0ae7b174f47170fef14a699aec22d02ea1745)

Signed-off-by: Jianxun Zhang <jianxun.zhang@linux.intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
(cherry picked from commit ccfa2ee5c4f509de4c18a7054b2a66fc874d5d69)
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-02-04 23:20:15 +00:00
Armin Kuster
049be17b53 libpcre: bug fixes include security
[Yocto # 9008]

This is the next patch release for pcre. The 8.xx series now only contains
bug fixes.

http://www.pcre.org/original/changelog.txt

The following security fixes are included:
CVE-2015-3210 pcre: heap buffer overflow in pcre_compile2() / compile_regex()
CVE-2015-3217 pcre: stack overflow in match()
CVE-2015-5073 CVE-2015-8388 pcre: Buffer overflow caused by certain patterns with an unmatched closing parenthesis

CVE-2015-8380 pcre: Heap-based buffer overflow in pcre_exec
CVE-2015-8381 pcre: Heap Overflow in compile_regex()
CVE-2015-8383 pcre: Buffer overflow caused by repeated conditional group
CVE-2015-8384 pcre: Buffer overflow caused by recursive back reference by name within certain group
CVE-2015-8385 pcre: Buffer overflow caused by forward reference by name to certain group
CVE-2015-8386 pcre: Buffer overflow caused by lookbehind assertion
CVE-2015-8387 pcre: Integer overflow in subroutine calls
CVE-2015-8389 pcre: Infinite recursion in JIT compiler when processing certain patterns
CVE-2015-8390 pcre: Reading from uninitialized memory when processing certain patterns
CVE-2015-8392 pcre: Buffer overflow caused by certain patterns with duplicated named groups
CVE-2015-8393 pcre: Information leak when running pcregrep -q on crafted binary
CVE-2015-8394 pcre: Integer overflow caused by missing check for certain conditions
CVE-2015-8395 pcre: Buffer overflow caused by certain references
CVE-2016-1283 pcre: Heap buffer overflow in pcre_compile2 causes DoS

(From OE-Core rev: 3e403cc1bdeefd4f39e54bae2269ca56307e8468)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:10 +00:00
Armin Kuster
5e94ac7ba9 qemu: Security fix CVE-2015-7295
CVE-2015-7295 Qemu: net: virtio-net possible remote DoS

(From OE-Core rev: 74771f8c41aaede0ddfb86983c6841bd1f1c1f0f)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
7ee1828d30 qemu: Security fix CVE-2016-1568
CVE-2016-1568 Qemu: ide: ahci use-after-free vulnerability in aio port commands

(From OE-Core rev: 166c19df8be28da255cc68032e2d11afc59d4197)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
ca6ec2e392 qemu: Security fix CVE-2015-8345
CVE-2015-8345 Qemu: net: eepro100: infinite loop in processing command block list

(From OE-Core rev: 99ffcd66895e4ba064542a1797057e45ec4d3220)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
b55a677699 qemu: Security fix CVE-2015-7512
CVE-2015-7512 Qemu: net: pcnet: buffer overflow in non-loopback mode

(From OE-Core rev: e6e9be51f77c9531f49cebe0ca6b495c23cf022d)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
4922f470dd qemu: Security fix CVE-2015-7504
CVE-2015-7504 Qemu: net: pcnet: heap overflow vulnerability in loopback mode

(From OE-Core rev: b01b569d7d7e651a35fa38750462f13aeb64a2f3)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
3ec0e95fed qemu: Security fix CVE-2015-8504
CVE-2015-8504 Qemu: ui: vnc: avoid floating point exception

(From OE-Core rev: c622bdd7133d31d7fbefe87fb38187f0aea4b592)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
942ce53beb openssl: Security fix CVE-2016-0701
CVE-2016-0701 OpenSSL: DH small subgroups

(From OE-Core rev: c5868a7cd0a28c5800dfa4be1c9d98d3de08cd12)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
ce8ae1c164 openssl: Security fix CVE-2015-3197
CVE-2015-3197 OpenSSL: SSLv2 doesn't block disabled ciphers

(From OE-Core rev: b387d9b8dff8e2c572ca14f9628ab8298347fd4f)

Signed-off-by: Armin Kuster <akuster808@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
080e027d14 tiff: Security fix CVE-2015-8784
CVE-2015-8784 libtiff: out-of-bound write in NeXTDecode()

(From OE-Core rev: 3e89477c8ad980fabd13694fa72a0be2e354bbe2)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Armin Kuster
c6ae9c1fae tiff: Security fix CVE-2015-8781
CVE-2015-8781 libtiff: out-of-bounds writes for invalid images

(From OE-Core rev: 29c80024bdb67477dae47d8fb903feda2efe75d4)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Derek Straka
049b7db30c bind: CVE-2015-8704 and CVE-2015-8705
CVE-2015-8704:
Allows remote authenticated users to cause a denial of service via a malformed Address Prefix List record

CVE-2015-8705:
When debug logging is enabled, allows remote attackers to cause a denial of service or have possibly unspecified impact via OPT data or ECS option

[YOCTO #8966]

References:
https://kb.isc.org/article/AA-01346/0/BIND-9.10.3-P3-Release-Notes.html
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8704
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8705

(From OE-Core rev: 78ceabeb2df55194f16324d21ba97e81121f996b)

Signed-off-by: Derek Straka <derek@asterius.io>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:09 +00:00
Mariano Lopez
d632a923dc rpmresolve.c: Fix unfreed pointers that keep DB opened
There are some unfreed rpmmi pointers in the printDepList()
function; this happens when the package has null as
the requirement.

This patch fixes these unfreed pointers and adds small
changes to keep consistency with some variables.

[YOCTO #8028]

(From OE-Core master rev: da7aa183f94adc1d0fff5bb81e827c584f9938ec)

(From OE-Core rev: 409f19280983b8100a27a773cefbff187cca737a)

Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Armin Kuster
5b993ed429 openssh: CVE-2016-1907
Fixing this issue requires three commits:
https://anongit.mindrot.org/openssh.git/commit/?id=ed4ce82dbfa8a3a3c8ea6fa0db113c71e234416c
https://anongit.mindrot.org/openssh.git/commit/?id=f98a09cacff7baad8748c9aa217afd155a4d493f
https://anongit.mindrot.org/openssh.git/commit/?id=2fecfd486bdba9f51b3a789277bb0733ca36e1c0

(From OE-Core master rev: a42229df424552955c0ac62da1063461f97f5938)

(From OE-Core rev: 50f46e40fa2d1d126294874765f90ed5bdee0f15)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Armin Kuster
27ee5b4f0e glibc: CVE-2015-8776
It was found that out-of-range time values passed to the strftime function may
cause it to crash, leading to a denial of service or, potentially, the disclosure
of information.

(From OE-Core rev: b9bc001ee834e4f8f756a2eaf2671aac3324b0ee)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Armin Kuster
a4134af78b glibc: CVE-2015-9761
A stack overflow vulnerability was found in nan* functions that could cause
applications which process long strings with the nan function to crash or,
potentially, execute arbitrary code.

(From OE-Core rev: fd3da8178c8c06b549dbc19ecec40e98ab934d49)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Armin Kuster
e10ec6f3be glibc: CVE-2015-8779
A stack overflow vulnerability in the catopen function was found, causing
applications which pass long strings to the catopen function to crash or,
potentially, execute arbitrary code.

(From OE-Core rev: af20e323932caba8883c91dac610e1ba2b3d4ab5)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Armin Kuster
a5a965d409 glibc: CVE-2015-8777.patch
The process_envvars function in elf/rtld.c in the GNU C Library (aka glibc or
libc6) before 2.23 allows local users to bypass a pointer-guarding protection
mechanism via a zero value of the LD_POINTER_GUARD environment variable.

(From OE-Core rev: 22570ba08d7c6157aec58764c73b1134405b0252)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-30 12:13:08 +00:00
Ed Bartosh
2fb7ee2628 bitbake: toaster: make runbuilds loop
This avoids having a loop in shell code and initializing
heavy Django init machinery every second.

Ignore exceptions to prevent exiting the loop.

(Bitbake rev: e04da15556ca0936de652b8c085e4199e5551457)

(Bitbake rev: 0e9d8d63ddb35d181d4e470585d1e4a4c646cd00)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: brian avery <avery.brian@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ed Bartosh <eduard.bartosh@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-25 16:29:16 +00:00
Richard Purdie
b9ad87b18f nativesdk-buildtools-perl-dummy: Bump PR
Recent changes to this recipe caused automated PR increments
to break, regressing package feeds. The only way to recover
is to bump PR, so do this centrally to fix anyone affected.

(From OE-Core rev: dacdb499d31cb2e80cca33cba9d599c8ee983dc4)

(From OE-Core rev: 8ce8f62b22b1e20db0f62d7bd8246738147d5f2e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-21 16:21:35 +00:00
Paul Eggleton
0a1c63ad6b nativesdk-buildtools-perl-dummy: properly set PACKAGE_ARCH
Turns out I did a silly thing in OE-Core revision
9b1831cf4a2940dca1d23f14dff460ff5a50a520 and forgot to remove the
explicit setting of PACKAGE_ARCH outside of the anonymous python
function; the original bug was apparently fixed but the functionality of
allarch.bbclass was being disabled because it was able to see that
PACKAGE_ARCH was not set to "all" - which was what I was trying to
ensure.

(From OE-Core rev: a25ab5449825315d4f51b31a634fe6cd8f908526)

(From OE-Core rev: afd527d365c58e622983b77a1a7ed57f59ef7b32)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-21 16:21:35 +00:00
Paul Eggleton
d4b400e1c7 nativesdk-buildtools-perl-dummy: fix rebuilding when SDKMACHINE changes
This recipe produces an empty dummy package (in order to satisfy
dependencies on perl so we don't have perl within buildtools-tarball).
Because we were inheriting nativesdk here the recipe was being rebuilt,
but having forced PACKAGE_ARCH to a particular value, the packages for
each architecture were stepping on each other. Since the packages are
empty they can in fact be allarch (even though they won't actually go
into the "all" package feed). It turns out that inheriting nativesdk
wasn't actually necessary either, so drop that.

Fixes [YOCTO #8509].

(From OE-Core rev: 9b1831cf4a2940dca1d23f14dff460ff5a50a520)

(From OE-Core rev: 66694fe312cf0668d08e42246332ce085a4d6372)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-21 16:21:35 +00:00
Richard Purdie
8c8c4ede3f Revert "gstreamer1.0-plugins-good.inc: add gudev back to PACKAGECONFIG"
This reverts commit 5c90b561930aac1783485d91579d313932273e92.

The original change was intentional so back out 'fixes'.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-20 17:10:32 +00:00
Richard Purdie
b83220257a Revert "gstreamer: Deal with merge conflict which breaks systemd builds"
This reverts commit bc458ae9586b45b11b6908eadb31e94d892e698f.

The original change was intentional so back out 'fixes'.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-20 17:10:32 +00:00
Richard Purdie
dd0ba9ea4a build-appliance-image: Update to jethro head revision
(From OE-Core rev: 716d3140c150bb3d99210e74da91904efc84c907)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-17 14:33:04 +00:00
Richard Purdie
325d205769 gstreamer: Deal with merge conflict which breaks systemd builds
In jethro the dependency is "udev"; the change to libgudev happened
in master after the release, and this was a mistake made during
backporting of the gstreamer fixes.

(From OE-Core rev: bc458ae9586b45b11b6908eadb31e94d892e698f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-17 14:32:37 +00:00
Richard Purdie
53b114b55f build-appliance-image: Update to jethro head revision
(From OE-Core rev: bc1d59a075bfd1b0dca7a19553cc7970b7460b38)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 22:28:11 +00:00
Richard Purdie
02be35d1ad poky.conf: Bump version for 2.0.1 jethro release
(From meta-yocto rev: d5f3f25fab4e7076ea5dee2ad3669525dec78567)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 22:27:23 +00:00
Ed Bartosh
f5551f85aa ref-manual: Updated the list of supported image types.
The list in the IMAGE_TYPES variable description has been
updated to add and remove several image types.

(From yocto-docs rev: b598590074d41b0eedc8466b325632caeed52e3b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:22 +00:00
Ed Bartosh
aa179aeede dev-manual: Added three new wic option descriptions.
* --part-type
* --use-uuid
* --uuid

(From yocto-docs rev: 79790dd454c13780e045c2afd1eef51180a8b251)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:22 +00:00
Ed Bartosh
20007c87b2 dev-manual: Added the --overhead-factor wic option description.
(From yocto-docs rev: 346f68486d86292337923e89fbd7e8b2ccd4814b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:22 +00:00
Ed Bartosh
2dd7f469f5 dev-manual: Added the --extra-space wic option description.
(From yocto-docs rev: cd44efe920352f8a59c5c66cf4bd09ac80a2a5c2)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:22 +00:00
Ed Bartosh
81cc737056 dev-manual: Added wic --notable option description.
(From yocto-docs rev: 473914d9100c201474c7e0d6c954cf01ee3afa11)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:21 +00:00
Ed Bartosh
2b1dce5a3c dev-manual: Updated the --source wic command-line option for partition size details.

(From yocto-docs rev: b268ad2f252114a09c1d57884fb051b90ad082b1)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 16:31:21 +00:00
Jianxun Zhang
a6f52930a6 kernel/kernel-arch: Explicitly mapping between i386/x86_64 and x86 for kernel ARCH
For a bare-bones kernel recipe which specifies a 32-bit x86 target,
a 64-bit .config will be generated by the do_configure task when
building 32-bit qemux86, once all of these conditions are true:

* the arch of the host is x86_64
* the kernel source tree used in the build has commit ffee0de41, which
  actually chooses the i386 or x86_64 defconfig by asking the host when
  ARCH is "x86" (arch/x86/Makefile)
* the bare-bones kernel recipe inherits directly from kernel without
  other special treatment.

The build will fail because of the mismatched kernel architecture.

The patch passes ARCH as i386 or x86_64 explicitly to the configure
task to avoid this host contamination. The kernel artifact handling is also
changed so that it can map i386 and x86_64 back to arch/x86 when
needed.

(From OE-Core rev: 6ffcfc0bc08bcbe81e17ceeb7094f09cc9214b94)

Signed-off-by: Jianxun Zhang <jianxun.zhang@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
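
A hedged Python sketch of the kind of mapping the kernel-arch commit above describes: choose the kernel ARCH explicitly from the target architecture so arch/x86/Makefile never guesses from the 64-bit build host, and map both values back to the arch/x86 source directory when needed. The function names and table are illustrative, not the exact bbclass code.

    # Illustrative mapping from an OE target architecture to the ARCH value
    # handed to the kernel configure step; i386 and x86_64 stay distinct so a
    # 32-bit target never inherits the 64-bit host default.
    def map_kernel_arch(target_arch):
        table = {"i386": "i386", "i486": "i386", "i586": "i386",
                 "i686": "i386", "x86_64": "x86_64"}
        return table.get(target_arch, target_arch)

    # Both values still live under arch/x86 in the kernel source tree.
    def map_kernel_srcdir(kernel_arch):
        return "x86" if kernel_arch in ("i386", "x86_64") else kernel_arch

    assert map_kernel_arch("i586") == "i386"
    assert map_kernel_srcdir(map_kernel_arch("x86_64")) == "x86"
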
Alexander Kanavin
e79a538a54 openssh: update to 7.1p2
This fixes a number of security issues.

(From OE-Core rev: b31fc9b167e5ca3115a0d0169126d63f2dbd3824)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
Paul Eggleton
b171076f46 devtool: reset: do clean for multiple recipes at once with -a
We need to run the clean for all recipes that are being reset before we
start deleting things from the workspace; if we don't, recipes providing
dependencies may be missing when we come to clean a recipe later (since
we don't and couldn't practically reset them in dependency order). This
also improves performance since we incur the startup time for the
clean just once rather than for every recipe.

(From OE-Core master rev: c10a2de75a99410eb5338dd6da0e0b0e32bae6f5)

(From OE-Core rev: d64a5794098e9ca715a70daa704f571ba97e9912)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
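
A minimal Python sketch of the ordering argument in the devtool reset change above: clean every recipe first, in one batched pass, and only then remove workspace entries, so dependencies are still present when each clean runs. The callback names are hypothetical.

    def reset_all(recipes, clean_recipes, remove_from_workspace):
        # clean_recipes and remove_from_workspace are hypothetical callbacks.
        # Clean everything in a single pass while every workspace recipe (and
        # any dependency it provides) is still in place; one batched call also
        # pays the bitbake startup cost once instead of once per recipe.
        clean_recipes(list(recipes))        # e.g. one "bitbake -c clean r1 r2 ..." run
        # Only after all cleans have run do we delete the workspace entries.
        for recipe in recipes:
            remove_from_workspace(recipe)
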
Paul Eggleton
255115f6e4 devtool: sdk-update: fix error checking
Running "raise" with no arguments here is invalid, we're not in
exception handling context. Rather than also adding code to catch the
exception I just moved the check out to the parent function from which
we can just exit.

(From OE-Core master rev: 0164dc66467739b357ab22bf9b8c0845f3eff4a4)

(From OE-Core rev: d9c5653f994e0f366c9154a2a988175a9f8e3130)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
Paul Eggleton
3f691055c5 devtool: sdk-update: fix metadata update step
* Clone the correct path - we need .git on the end
* Pull from the specified path instead of expecting a remote to be set
  up in the repo already (it isn't by default)

(From OE-Core master rev: 1a60ee8bd21e156022c928f12bb296ab5caaa766)

(From OE-Core rev: a0e1ff92b189681df5cf106dc924e76bb05caf31)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
Paul Eggleton
5ba94af1e6 devtool: sdk-update: fix not using updateserver config file option
We read the updateserver setting from the config file but we never
actually used that value - the code then went on to use only the value
supplied on the command line.

Fix courtesy of Dmitry Rozhkov <dmitry.rozhkov@intel.com>

(From OE-Core master rev: 1c85237803038fba539d5b03bf4de39d99380684)

(From OE-Core rev: 3940fe87f944bd2067a96b1b6a8c1dc646569690)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
Paul Eggleton
d03d145410 classes/populate_sdk_ext: disable signature warnings
The user of the extensible SDK doesn't need to see these.

(From OE-Core master rev: 7045fabf73d4eef9c023edb9e0a8b8d1d3f04680)

(From OE-Core rev: f89d5dc8e980e1ac48357f49158632689582d7fb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:41 +00:00
Paul Eggleton
00ff950d3c classes/populate_sdk_ext: fix cascading from preparation failure
During extensible SDK installation, if the build system preparation step
fails we try to put something at the end of the environment setup script
to show an error when it is sourced, in case the user doesn't realise
that the partially-installed SDK is broken. However, an apostrophe in
the message (actually a single quote) appears to terminate the string
and therefore breaks the command. Drop it to avoid that.

(From OE-Core master rev: 21e591d182e24c399ae010a8eff9b89947061a46)

(From OE-Core rev: 91326ede91ff7b820ec60ec642927cc223cae81f)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
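
A short Python sketch of the quoting failure described in the commit above: an apostrophe inside a single-quoted shell string terminates the quote, so the rest of the message turns into stray shell syntax. shlex.quote is shown here as a general-purpose alternative; the actual fix above simply drops the apostrophe. The message text is illustrative.

    import shlex
    import subprocess

    msg = "SDK preparation failed, don't source this script"   # illustrative message

    # Naive single quoting: the apostrophe in "don't" ends the quoted string,
    # leaving an unterminated quote behind it.
    broken = "echo '%s'" % msg
    print(broken)      # echo 'SDK preparation failed, don't source this script'

    # shlex.quote escapes the message so the shell reads it back as one word.
    safe = "echo %s" % shlex.quote(msg)
    subprocess.run(safe, shell=True, check=True)
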
Paul Eggleton
22446c6f44 scripts/oe-publish-sdk: add missing call to git update-server-info
We need to call git update-server-info here on the created repository or
we can't share it over plain http as we need to be able to for the
update process to function as currently implemented.

(From OE-Core master rev: 3ab40bf9d5f19d91e45f7bae77f037b2544e889b)

(From OE-Core rev: 2b3c7c6fc52a0fb66e31796ca7daacd19afbf75f)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Ed Bartosh
8597a616f3 devtool: use cp instead of shutil.copytree
Copy layers with 'cp -a' instead of calling shutil.copytree, as
copytree fails to copy broken symlinks.

A more Pythonic fix would be to use copytree with the 'ignore' parameter,
but this could slow down copying complex directory structures.

[YOCTO #8825]

(From OE-Core master rev: e5b841420b9fdd33829f7665a62cd06a3017f7e6)

(From OE-Core rev: fa0424ee742a6b331f1c6462eb69fecba6dc7f86)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
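
A small Python sketch of the behaviour behind the devtool change above: shutil.copytree dereferences symlinks by default, so a dangling link makes the copy fail, while 'cp -a' carries the link across untouched. The paths are created on the fly, so the snippet is self-contained.

    import os
    import shutil
    import subprocess
    import tempfile

    src = tempfile.mkdtemp()
    os.symlink("/nonexistent/target", os.path.join(src, "dangling"))

    # Default copytree tries to follow the broken link and reports an error.
    try:
        shutil.copytree(src, src + "-copytree")
    except shutil.Error as exc:
        print("copytree failed:", exc)

    # 'cp -a' preserves the symlink as-is, broken or not.
    subprocess.check_call(["cp", "-a", src, src + "-cp"])
    print(os.path.islink(os.path.join(src + "-cp", "dangling")))   # True
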
Paul Eggleton
95cc641ec3 buildhistory: fix not recording SDK information
After OE-Core revision baa4e43a29e45df17eaa3456acc179b08d571db6 we stopped
recording the SDK contents in buildhistory. This was due to the
SDK_POSTPROCESS_COMMAND variable being set with = in
populate_sdk_base.bbclass which overwrote any value set with += in
buildhistory.bbclass; to fix it, use _append in buildhistory.bbclass
instead.

Fixes [YOCTO #8839].

(From OE-Core master rev: 11d1aa82ef4a00051e0a50a87a1efed1c50c73b5)

(From OE-Core rev: 36d4b0903890bc793608759b3351a5de4229de11)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Paul Eggleton
84d48acb01 recipetool: create: fix error when extracting source to a specified directory
Having fetched the source and unpacked it to a temporary directory, we
then move part of it to the destination directory, or if the source is at
the top level we move the whole temporary directory, but in the latter
case we were later attempting to delete the temporary directory which no
longer existed. Clear out the variable so that doesn't happen.

(From OE-Core master rev: 91714a52e91cddba5a16c73cf5765d1f47f7856c)

(From OE-Core rev: 8b7644fa4cd72b7f80d2aaa3bfcd2efed2402d37)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Paul Eggleton
4369329b76 recipetool: create: detect when specified URL returns a web page
If the user specifies a URL that just returns a web page, then it's
probably incorrect (or broken); attempt to detect this and show an error
if it's the case.

(From OE-Core master rev: 83b1245b2638eb5d314fe663d33cd52a776a34a7)

(From OE-Core rev: cf61eff7bbc9afa0eeb1fd481f1d4b75429a1c24)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
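
A hedged Python sketch of one way to implement the check described in the recipetool commit above: after the fetch, peek at the start of the downloaded file and treat an HTML document as a probable mistake. This illustrates the idea only; the function name and exact heuristic are not the recipetool code.

    import sys

    def looks_like_web_page(localpath):
        # A source archive or tarball should not begin with an HTML document.
        with open(localpath, "rb") as f:
            head = f.read(512).lstrip().lower()
        return head.startswith(b"<!doctype html") or head.startswith(b"<html")

    # Hypothetical usage once the fetcher has produced 'localpath':
    # if looks_like_web_page(localpath):
    #     sys.exit("Error: the specified URL returned a web page, not source code")
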
Paul Eggleton
4c3191f9ab recipetool: create: prevent attempting to unpack entire DL_DIR
If you specify a URL ending in /, BitBake's fetcher returns a localpath
of ${DL_DIR}, and if you then try to unpack that it will attempt to copy
the entire DL_DIR contents to the destination - which at least on my
system filled my entire /tmp. Obviously we should fix the fetcher, but
at least detect and stop that from happening here for now.

(From OE-Core master rev: 7e63a672517518644a37ce006e05b5494c29cf6e)

(From OE-Core rev: 623e59b103c1edf3211384d26cc0c83cfd424587)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
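
A small Python sketch of the guard described in the commit above: if the fetcher's localpath turns out to be DL_DIR itself (as happens for a URL ending in /), stop before unpacking instead of copying every download. The variable and function names are illustrative.

    import os
    import sys

    def check_unpack_source(localpath, dl_dir):
        # A URL ending in "/" can make the fetcher hand back DL_DIR itself;
        # unpacking that would copy the whole downloads directory around.
        if os.path.abspath(localpath) == os.path.abspath(dl_dir):
            sys.exit("Error: the URL resolves to DL_DIR itself; "
                     "point it at an actual archive or repository")

    # check_unpack_source(localpath, dl_dir)   # hypothetical call site
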
Paul Eggleton
caca77eb17 recipetool: create: fix do_install handling for makefile-only software
In my testing here it appears make -qn returns an error (exit code 2)
whereas make -n doesn't; I can't immediately tell why based on the
documentation. We don't actually care for it to be quiet since we're
capturing the output, so let's just leave -q off and have this work
properly as a result.

(From OE-Core master rev: 30c4cd9efdac400d713dff645f23f2627277d75a)

(From OE-Core rev: d76191cef76c6c4416a5e635a9424192e16c1090)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Paul Eggleton
383159ef64 recipetool: create: avoid traceback on fetch error
If a fetch error occurs, the fetcher already prints a reasonable error -
we don't need the traceback as well, so catch that and exit if it
occurs.

(From OE-Core master rev: c2cc5abe34169eae92067d97ce1e747e7c1413f5)

(From OE-Core rev: b2706b5b311d456e7da5acf02e25f3f8650c50e5)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Paul Eggleton
be40baa5a0 recipetool: create: handle https://....git URLs
When you grab a URL for a github repository you'll almost certainly find
it in https://github.com/path/to/repository.git format; but bitbake's
fetcher can't handle that because it'll see https:// at the start and
assume it should use wget to fetch it. If the URL starts with http:// or
https:// and the path part ends with .git then assume it's a git
repository and adjust it accordingly.

(From OE-Core master rev: bdbc4cf41d30eddb8a9ed882dedcc1670ce8fdd6)

(From OE-Core rev: 9d41e993a95a7b60f1ed5f8e9ca887fdf393233c)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
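
A minimal Python sketch of the URL adjustment described in the recipetool commit above: an http or https URL whose path ends in .git is rewritten into BitBake's git fetcher form, carrying the real transport in a protocol= parameter. It mirrors the idea rather than reproducing the exact recipetool code.

    import re

    def adjust_git_url(fetchuri):
        # "https://host/path/repo.git" is a git repository, but the https://
        # prefix would otherwise select the wget fetcher.
        m = re.match(r"(https?)://(.*\.git)$", fetchuri)
        if m:
            scheme, rest = m.groups()
            return "git://%s;protocol=%s" % (rest, scheme)
        return fetchuri

    print(adjust_git_url("https://github.com/path/to/repository.git"))
    # -> git://github.com/path/to/repository.git;protocol=https
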
Paul Eggleton
a897bfdbdc devtool: sdk-update: fix traceback without update server set
If the SDK update server hasn't been set in the config (when building
the extensible SDK this would be set via SDK_UPDATE_URL) and it wasn't
specified on the command line then we were failing with a traceback
because we didn't pass the default value properly - None is interpreted
as no default, meaning raise an exception if no such option exists.

Additionally we don't need the try...except anymore either because with
a proper default value, NoSectionError is caught as well.

(From OE-Core master rev: 9763c1b83362f8445ed6dff2804dd7d282861f79)

(From OE-Core rev: b2696869c1428e8ef2a198d2432121ddc2e2034c)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:40 +00:00
Paul Eggleton
9c4b61e919 classes/populate_sdk_ext: error out of install if buildtools install fails
If the installation of buildtools fails then we should fail the entire
installation instead of blindly continuing on.

(From OE-Core master rev: 34bb63e6c72fb862e0ef0d2b26e1bfddaf7ddb99)

(From OE-Core rev: 696979ef39fbd85fa74cfb4a0cbee49b045e2d92)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 15:51:39 +00:00
Robert Yang
4c07dd2172 gstreamer1.0-plugins-good.inc: add gudev back to PACKAGECONFIG
Commit 66e32244aed8d33f1b49fbe78179f2442545c730 wrongly removed gudev from
PACKAGECONFIG; now add it back.

(From OE-Core rev: 5c90b561930aac1783485d91579d313932273e92)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-15 11:19:09 +00:00
Saul Wold
83b72d8d1f linux-yocto: Update Genericx86* BSP to 4.1.15 kernel
(From meta-yocto rev: ccd390f15d9d9b9f975a9e0a784e84d69d9d6f4d)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:29 +00:00
Ross Burton
44639bd817 libaio: don't disable linking to the system libraries
For some reason that I don't understand (a decade-old attempt at optimisation?)
libaio disables linkage to the system libraries.  Enabling fortify means linking
to the system libraries, so remove the existing addition of -lc for x86 (the
problem also happens on at least PPC) and just link to the system libraries on
all platforms.

Also remove the sed of src/Makefile as the build not respecting LDFLAGS has been
fixed upstream.

(From OE-Core rev: f435ac9db0581d8313a38d586b00c2b3de419298)

(From OE-Core rev: 901af5a00338fd8f1ace939123484ea91c090a7a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Bruce Ashfield
a0be9bd862 linux-yocto/4.1: update to v4.1.15
Updating the 4.1 kernel repo to the latest 4.1.x stable.

(From OE-Core rev: 1df3a79cf454754e6be6c1ffc91ba8310a880616)

(From OE-Core rev: 1896042df8db8ec21e41d45c2640360242fb0aee)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
53f0290658 libxml2: security fix CVE-2015-5312
(From OE-Core rev: 8546fada29f2c8ec0111a15fe50d90d3f2518d52)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
f4b0c49145 libxml2: security fix CVE-2015-8242
(From OE-Core rev: d392edafa1d73cace437f45bfbc147de9fc4cf8b)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
fb409c9d17 libxml2: security fix CVE-2015-7500
Includes a dependency fix for security issue CVE-2015-7500.

(From OE-Core rev: 2febaf28b165dadc23eeb7f16391e72d4184b0a7)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
55d097a106 libxml2: security fix CVE-2015-7499
includes:
CVE-2015-7499-1
CVE-2015-7499-2

(From OE-Core rev: 51aedd5307b92b63d97b63bd9911eda67ee6fde8)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
8e6b2d6823 libxml2: security fix CVE-2015-7497
(From OE-Core rev: c1d69a59a693dabf4b48619fdc12ce0f148a2386)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
332eb1dcce libxml2: security fix CVE-2015-7498
(From OE-Core rev: cece10f44c9cceddab17adf1a1debc4b14e50a8d)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
cbc4e832d1 libxml2: security fix CVE-2015-8035
(From OE-Core rev: 1266b6269cbafbb529579d92334785a833c22fc1)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
c4b71e1a6a libxml2: security fix CVE-2015-7942
includes:
CVE-2015-7942
CVE-2015-7942-2

(From OE-Core rev: 66c7e97f8687c1b656c322282ee7cdc200945616)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:28 +00:00
Armin Kuster
fdea03df12 libxml2: security fix CVE-2015-8317
(From OE-Core rev: 42086e309dfce3caa05e88681875f5f78cf5f095)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:27 +00:00
Armin Kuster
6fc1109f5d libxml2: security fix CVE-2015-7941
includes:
CVE-2015-7941-1
CVE-2015-7941-2

(From OE-Core rev: 48af957147a091550c089423e3a65bac6596c41e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:27 +00:00
Armin Kuster
9eb4ce0a81 openssl: fix for CVE-2015-3195
(From OE-Core rev: 85841412db0b1e22c53e62a839d03f7672b07b64)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:27 +00:00
Armin Kuster
6880f826c3 openssl: fix for CVE-2015-3194
(From OE-Core rev: ce9f78296101772655809036e21009acec78da24)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:27 +00:00
Armin Kuster
7dcaa840ff openssl: fix for CVE-2015-3193
(From OE-Core rev: 4d9006b1217ee7e97108f36db19aebd93e1d9850)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-14 15:18:27 +00:00
Hongxu Jia
435139b2a9 logrotate: do not move binary logrotate to /usr/bin
In oe-core commit a46d3646a3e1781be4423b508ea63996b3cfca8a
...
Author: Fahad Usman <fahad_usman@mentor.com>
Date:   Tue Aug 26 13:16:48 2014 +0500

    logrotate: obey our flags

    Needed to quiet GNU_HASH warnings, and some minor fixes.
...
it explicitly moved logrotate to /usr/bin without any reason,
which goes against the original Linux location /usr/sbin.

So partly revert the above commit so that logrotate is
kept in its original place, /usr/sbin.

(From OE-Core master rev: 0007436b486fd0bea9e6ef60bf57603e7cfce54b)

(From OE-Core rev: c0a13c410393ce51a2a55e36a0913c0136058bdc)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:31 +00:00
Andre McCurdy
5f49c0a248 cairo: fix license for cairo-script-interpreter
Without an explicit license, cairo-script-interpreter inherits
the default LICENSE and isn't packaged in builds which blacklist
GPLv3.

(From OE-Core master rev: cb8f84218b065fed88a8c36f3c78065e8ab726bf)

(From OE-Core rev: 6d0cf8ebde4eaa2c868dac8d0dac498c4210ec05)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:31 +00:00
Mark Hatle
a29ec8108e glibc: Fix ld.so / prelink interface for ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA
A bug in glibc 2.22's ld.so interface for the prelink support causes
the displayed values to be incorrect.  The included patch fixes this
issue.

   Clear ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA for prelink

   prelink runs ld.so with the environment variable LD_TRACE_PRELINKING
   set to dump the relocation type class from _dl_debug_bindings.  prelink
   has the following relocation type classes:

   where ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has a conflict with
   RTYPE_CLASS_TLS.

   Since prelink doesn't use ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA, we
   should clear the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA bit when the
   DL_DEBUG_PRELINK bit is set.

(From OE-Core master rev: 12c86bdcc60c54e587a896b0dceb8bb6cc9ff7e3)

(From OE-Core rev: 73919830f88f2d28da973e72fbdfaab591a5af69)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:31 +00:00
Mark Hatle
b1e980f33b gcc: Update default Power GCC settings to use secure-plt
The gcc default, bss-plt, will cause errors when using the prelinker.  All
other distributions that I am aware of are using the secure-plt.  For an
explanation of the differences, see the gcc docs:

  Current PowerPC GCC accepts a `-msecure-plt' option that generates code
  capable of using a newer PLT and GOT layout that has the security
  advantage of no executable section ever needing to be writable and no
  writable section ever being executable. PowerPC ld will generate this
  layout, including stubs to access the PLT, if all input files (including
  startup and static libraries) were compiled with `-msecure-plt'.
  `--bss-plt' forces the old BSS PLT (and GOT layout) which can give
  slightly better performance.

The security of the new PLT and ability to run the prelinker outweigh
any performance penalty.

The secure-plt is enabled by default.  The old bss-plt can be enabled by
selecting 'bssplt' in the DISTRO_FEATURES.
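
For reference, one way a user could switch back from a build directory (a
hedged sketch; it assumes the distro config honours the 'bssplt' feature as
described above):

  # Re-enable the old BSS PLT layout via the 'bssplt' distro feature
  # (run from the build directory; appends to the local configuration)
  echo 'DISTRO_FEATURES_append = " bssplt"' >> conf/local.conf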

(From OE-Core master rev: 70c55aada1101a5c687cdaa79f370fa4530b39d9)

(From OE-Core rev: 44adc575be5d9b9ad0d87e143467aeeadde2fe89)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:31 +00:00
Mark Hatle
ed8269010c prelink: Fix various prelink issues on IA32, ARM, and MIPS.
Fix the following issues:

IA32 / ARM - Resync to glibc-2.22, fix a mismatch w/ glibc's ld.so
MIPS - Ignore the new SHT_MIPS_ABIFLAGS
ARM - Fix missing ARM IFUNC support chunk

Also, the upstream prelink project no longer has a 'trunk' directory.

(From OE-Core master rev: c725328f2ab5c9b220c552ed37c0d24b098a218d)

(From OE-Core rev: de7f25e9d67b150db4780bb82ef9481982e81312)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:30 +00:00
Jens Rehsack
9a620dada4 autotools: Allow recipe-individual configure scripts
OpenJDK-8 has its configure script at common/autotools, which breaks the
assumption that ${S}/configure is what gets regenerated by autoreconf,
intltoolize or the like.

Also, other configure mechanisms can be supported in a similar way (see how
pkgsrc manages different ones ...)

(From OE-Core master rev: fe506eddb0790e37ac1e50f37fa2e32ad81d5493)

(From OE-Core rev: 809df21d8a8cc4ab860a84ccd7b2e51105df68ee)

Signed-off-by: Jens Rehsack <sno@netbsd.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:30 +00:00
Fang Jia
f8280717e4 toolchain-scripts.bbclass: unset command_not_found_handle
On Ubuntu systems, when sourcing the environment setup script from an
exported SDK and running a bogus Linux command (for example "asd"), a core
dump of python is usually generated.

Unset the command_not_found_handle to fix it.
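
A minimal sketch of the fix as it would appear in the generated environment
setup script (command_not_found_handle is the standard bash hook; everything
else about the script is assumed):

  # command_not_found_handle is the bash hook Ubuntu points at a Python
  # helper; unsetting it stops typos from invoking the SDK's python
  unset command_not_found_handle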

(From OE-Core master rev: 473ccbebb426df757adb8955eaa5e191d88180d1)

(From OE-Core rev: fe622c4508d2c87f7bd7c15c6391c8e1319fd3b6)

Signed-off-by: Fang Jia <fang.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:30 +00:00
Paul Eggleton
49858bdc02 devtool: upgrade: fetch remote repository before checking out new revision
If we're upgrading a recipe that fetches from git, and we've simply
fetched a tarball of the repo instead of directly from the upstream repo
(this can happen if you have PREMIRRORS set up as in poky with a core recipe,
e.g. kernelshark) then we won't have any new revisions, and the checkout
will fail with "fatal: reference is not a tree: <hash>". To avoid this,
do a "git fetch" before checking out the new revision.

(From OE-Core master rev: c4daebf3fe797a8063dcbc2ab229be2fbedc8134)

(From OE-Core rev: 2c8afd6aae775ab10dd30eb890fc410739048d79)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:30 +00:00
Paul Eggleton
d2134528a6 devtool: upgrade: remove erroneous error when not renaming recipe
If we're upgrading a git recipe, the recipe file usually won't need
renaming; for some unknown reason we were throwing an error here, which
isn't correct.

(From OE-Core master rev: 656348dff9bc9dd1cafc8fff11e5e374e3667f0f)

(From OE-Core rev: 9816c0a2ad2c1011e298d734576b531de9947740)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:30 +00:00
Paul Eggleton
fec97f6fa2 devtool: upgrade: fix updating PV and SRCREV
This code was clearly never tested. Fix the following issues:
* Actually set SRCREV if it's been specified
* Enable history tracking and reparse so that we handle if variables are
  set in an inc file next to the recipe
* Use a more accurate check for PV being in the recipe which will work
  if it's in an inc file next to the recipe

(From OE-Core master rev: 8b8f04226ebf464fa61c05ca7af7c6cbda392339)

(From OE-Core rev: 105a7c90dac6f43b7c3d1de92827db2db8419112)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Paul Eggleton
3b4f65968e devtool: upgrade: fix removing other recipes from workspace on reset
If you did a "devtool add" followed by "devtool upgrade" and then did
a "devtool reset" on the recipe you upgraded, the first recipe would
also be deleted from the workspace - this was because we were
erroneously adding the entire "recipes" subdirectory and its contents to
be tracked for removal on reset. Remove the unnecessary call to
os.path.dirname() that caused this.

(From OE-Core master rev: 65354e066f87df7d3138adceb22d6a05d1685904)

(From OE-Core rev: c44d41b0dec7457c4347a00b21d8b5bd24a9b887)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Tzu-Jung Lee
61a7de097a devtool: include do_patch in SRCTREECOVEREDTASKS
The external kernel source has already been patched during the
construction of the git repository. Include the do_patch task in
SRCTREECOVEREDTASKS.
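
To confirm the resulting value for an externalsrc/devtool-managed recipe,
something like the following can be used (the target name is just an
example):

  # Inspect the final value for a recipe managed via externalsrc/devtool
  bitbake -e virtual/kernel | grep '^SRCTREECOVEREDTASKS='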

(From OE-Core master rev: 0731c5a9e98f7b7f6e5ada9bbb99acb3f5884516)

(From OE-Core rev: e82466ebd9c8b9277255680d5efdd76eabf125b1)

Signed-off-by: Tzu-Jung Lee <roylee17@currantlabs.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Paul Eggleton
82c0072033 toolchain-shar-extract.sh: do not allow $ in paths for ext SDK
If you put a $ character in the path, SDK installation fails during the
preparation stage, so add this to the disallowed characters.

Fixes [YOCTO #8625].

(From OE-Core master rev: 654f4785f719552f4e78e14a5a901c07d00ce68d)

(From OE-Core rev: d7bcdb33a675fbdd30596d62961be52aa98c9e05)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Paul Eggleton
f181e72cb8 scripts/gen-lockedsig-cache: improve output
* Print some status when running
* When incorrect number of arguments specified, print usage text

(From OE-Core master rev: ac38d245878b618ddf56f9a68834d344500e45a6)

(From OE-Core rev: 5c5953cbc44c7532650cb9e3c877fa86c9d0f242)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Paul Eggleton
4b5d4ca1c9 toolchain-shar-extract.sh: proper fix for additional env setup scripts
buildtools-tarball uses a custom env setup script, which isn't named the
same as the default; thus unfortunately OE-Core revision
a36469c97c9cb335de1e95dea5141038f337df95 broke installation of
buildtools-tarball. Revert that and implement a more robust mechanism.

(From OE-Core master rev: 00e081b81ba8118959b724269ba9d18d42aba8a4)

(From OE-Core rev: feefaceb8a2bce8129aba82d4d93e725656ee075)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Jean-Francois Dagenais
d2ea8f1041 toolchain-shar-relocate: don't assume last state of env_setup_script is good
In the case where many environment-setup-* files exist, the wrong
filename might be the last one assigned to env_setup_script, which leads to
incorrect behaviour when initializing native_sysroot.

The scenario I had was that our custom meta-toolchain-*.bb, which
inherits populate_sdk, defined another environment-setup-* file to dump
variable information for qt-creator. The file is named like so in order
for the sdk shell script to pick it up and fix the SDK paths in the
file. Since it (coincidentally) alphabetically comes after ...-core2, it
was last set in env_setup_script and the grep OECORE_NATIVE_SYSROOT
would simply be blank. The apparent symptom was "...relocate_sdk.py:
Argument list too long" since the find command would not be searching in
the right path.
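
A simplified sketch of the failure mode (the loop shape is illustrative, not
the script's exact code):

  # After a shell for-loop the variable keeps its *last* value, so with
  # several environment-setup-* files the wrong one can win:
  for env_setup_script in environment-setup-*; do
      :   # the real script does its checks here
  done
  # $env_setup_script now holds whichever file sorted last (e.g. the
  # qt-creator helper rather than the ...-core2 one)
  grep OECORE_NATIVE_SYSROOT "$env_setup_script"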

(From OE-Core master rev: a36469c97c9cb335de1e95dea5141038f337df95)

(From OE-Core rev: 2f04a9285cfabdb053dafacd17320f847ac6343f)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Mark Hatle
02ef437608 populate_sdk_ext.bbclass: Be more permissive on the name of the buildtools
We want to support different names for the buildtools tarball.  The
name may not always be of the default oe-core format.

For instance, at Wind River we define the buildtools name to be:

${SDK_ARCH}-buildtools-nativesdk-standalone-${DISTRO_VERSION}

because the standard SDK_NAME has additional information that is not
relevant to the buildtools tarball.

(From OE-Core master rev: b49c6f179b06a8b97106aa4c95f2cdb3c4dc0920)

(From OE-Core rev: ed92440d19e5948aa64c95fcf30b989c5e6efdb9)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:29 +00:00
Paul Eggleton
3653b17aea classes/populate_sdk_ext: fail if SDK_ARCH != BUILD_ARCH
The extensible SDK relies upon uninative, and with the way that
uninative works, the build system architecture must be the same as the
SDK architecture or the extensible SDK won't be usable. At some point in the
future hopefully we can remove this limitation, but until then it's
disingenuous to allow this to build, so add a check to ensure
SDK_ARCH == BUILD_ARCH and fail if it isn't.

(From OE-Core master rev: 9e30e849eda3b0a0c54d3f7ed0102760fdaef06c)

(From OE-Core rev: 1042d020d5d1b6af3f32e5fe29562d1dce765f0a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
8879571d11 classes/populate_sdk_ext: tweak reporting of workspace exclusion
If you have a local workspace layer enabled when building the
extensible SDK, we explicitly exclude that from the SDK (mostly because
the SDK has its own for the user to use). Adjust the message we print
notifying the user of this so it's clear that we're excluding it from
the SDK, and scale it back from a warning to a note printed with
bb.plain().

(From OE-Core master rev: 90f46f74a088a7b965d2205eceb9eff6f276dd38)

(From OE-Core rev: dbacd35c0db2e9f4b9b2a20ffa6bcc5f78432d8a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
eeda3c66a2 classes/populate_sdk_ext: make it clear when SDK installation has failed
When SDK preparation fails:

* Insert an ERROR: in front of the error message
* Add an error message to the environment setup script

Hopefully this should make it more obvious when this happens.

Fixes [YOCTO #8658].

(From OE-Core master rev: 105df569b3b1982005c2edb37f4690f9ba6bde35)

(From OE-Core rev: 98215b9513212b7002d072afa763347520544ee0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
dee9fbe044 classes/populate_sdk_ext: tidy up preparation log file writing
Use a variable for the log file which includes the full path; this is
not only neater but avoids us writing the first part (the output of
oe-init-build-env) to a file in another directory since we are
changing directory as part of this subshell.

(From OE-Core master rev: 001af71752a9e9aab460cbd49ed049e1eb726295)

(From OE-Core rev: dded5f93d5650ebe5eb661a5cec698b1fa82e1ba)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
d001d46d17 classes/license: fix intermittent license collection warning
Fixes the following warning sometimes appearing during image builds:

WARNING: The license listed ABC was not in the licenses collected for recipe xyz

The files being looked for here (by code that runs during do_rootfs)
are written out by the do_populate_lic task for each recipe. However,
there was no explicit dependency between do_rootfs and all of the
do_populate_lic tasks to ensure they had run - only an implicit link via
do_build - so it is possible that sometimes they had not run, depending
on how the tasks were scheduled. Add an explicit set of dependencies to
fix this.

(From OE-Core master rev: ef7dc532e800d9b170246550cbc8703adf624beb)

(From OE-Core rev: f521d8d2d1ea495383f54e5e7c2754dde007f7eb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
777451ca43 classes/metadata_scm: fix git errors showing up on non-git repositories
Fixes the following error showing up for layers that aren't a git repo
(or aren't parented by one):

fatal: Not a git repository (or any of the parent directories): .git

This was because we weren't intercepting stderr. We might as well just
use bb.process.run() here which does that and returns stdout and stderr
separately.
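
The symptom is easy to reproduce from a shell (a sketch; the exact git
invocation used by the class may differ):

  cd /tmp                                 # any directory that is not a git repo
  rev=$(git rev-parse HEAD)               # stderr leaks: "fatal: Not a git repository ..."
  rev=$(git rev-parse HEAD 2>/dev/null)   # stderr intercepted, no noise on the console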

(This was a regression that came in with OE-Core revision
3aac11076e).

Fixes [YOCTO #8661].

(From OE-Core master rev: f533c1bf4c6edbecc67f9e2c62fd475d64668e86)

(From OE-Core rev: 8968ede9c8cdcd2cbf13bd5bba95883082189908)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:28 +00:00
Paul Eggleton
cb0ca7264d oeqa/selftest/layerappend: fix test if build directory is not inside COREBASE
Fix test_layer_appends to work when build directory is not inside
COREBASE.

Fixes [YOCTO #8639].

(From OE-Core master rev: 0f146e77655d153d3f9a59e489265450f08c6ad7)

(From OE-Core rev: e353b303e271368426e71810bb75173ca6f53455)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Paul Eggleton
8970ad60f5 oeqa/selftest/devtool: fix test if build directory is not inside COREBASE
Fix test_devtool_update_recipe_git to work when build directory is not
inside COREBASE.

Fixes [YOCTO #8639].

(From OE-Core master rev: 0225888207f82e5f1d9e3dffb7c342a10169aea3)

(From OE-Core rev: 16250994516ff907e18e71158aeb15e4d637de63)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Paul Eggleton
4f7fdd0a59 classes/distrodata: split SRC_URI properly before determining type
We weren't splitting SRC_URI values containing multiple URIs here; this
didn't cause any errors except when a trailing ; was left on a URI, in
which case the next URI was considered part of the parameter, which
didn't contain a = and therefore was considered invalid.

We only care about the first URI in SRC_URI in this context (since
that's the upstream URI by convention) so split it as we should and take
the first item.

Fixes [YOCTO #8645].

(From OE-Core master rev: 8e75b7e7d54e5638b42b9e7f90f2c6c17e62033f)

(From OE-Core rev: a28eba9fb03720c805eae02c3d0aebf9294e300b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Randy Witt
3b7df55075 uninative.bbclass: Choose the correct loader based on BUILD_ARCH
Previously UNINATIVE_LOADER was always ld-linux-x86-64.so.2. That is
incorrect when the host is 32-bit.

This change also switches to using ?= so the user can override
UNINATIVE_LOADER if so desired.
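
With ?= in place, a user override along these lines in local.conf takes
precedence (the path is purely illustrative):

  # Point uninative at a specific loader (path is an example only)
  echo 'UNINATIVE_LOADER = "/opt/uninative/lib/ld-linux-x86-64.so.2"' >> conf/local.conf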

[YOCTO #8124]

(From OE-Core master rev: b78fa0bcadd54bb29b6f1bb3a9308d4c454bf4e2)

(From OE-Core rev: b901a3057ff511f4c8bc730b37b967a93995de2f)

Signed-off-by: Randy Witt <randy.e.witt@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Ross Burton
f3d7c3f385 openssl: sanity check that the bignum module is present
The crypto_use_bigint_in_x86-64_perl patch uses the "bigint" module to
transparently support 64-bit integers on 32-bit hosts.  Whilst bigint (part of
bignum) is a core Perl module, not all distributions install it (notably
Fedora 23).

As the error message when bignum isn't installed is obscure, add a task to check
that it is available and alert the user if it isn't.
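
A quick host-side check of the same requirement (a sketch of what such a
sanity task essentially boils down to):

  # Succeeds silently when Perl's bigint/bignum support is installed;
  # fails with "Can't locate bigint.pm" otherwise (e.g. on Fedora 23)
  perl -Mbigint -e 'exit 0' || echo "Perl bignum/bigint module is missing"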

[ YOCTO #8562 ]

(From OE-Core master rev: 2f9a2fbc46aa435a0a7f7662bb62029ac714f25a)

(From OE-Core rev: 7aab4744a329f5fd1aca221950ef629e9f92b456)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Li Xin
96b1b5c127 glibc: Backported a patch to fix glibc's bug(18589)
Also fix LSB NG cases:
 * /tset/ANSI.os/locale/setlocale/T.setlocale 1 2 4 5 15
 * /tset/ANSI.os/string/strcoll_X/T.strcoll_X 1
 * /tset/LI18NUX2K.L1/base/wcscoll/T.wcscoll 1
 * /tset/LI18NUX2K.L1/utils/localedef/T.localedef 7
 * /tset/LI18NUX2K.L1/utils/sort/T.sort 1 3 17 19 33 35
 * /tset/LI18NUX2K.L1/utils/comm/T.comm 1 2
 * /tset/LI18NUX2K.L1/utils/ls-fh/T.ls-fh 2

This patch is backported from
https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=6c84109cfa26f35c3dfed3acb97d347361bd5849

(From OE-Core master rev: e88fe8f4c0ea70fb271d3a11e1a3bfcac4c92626)

(From OE-Core rev: 36c50bbe6592040e984af989e9841f0d38b8a1d1)

Signed-off-by: Li Xin <lixin.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:27 +00:00
Andre McCurdy
7aecb577e0 directfb.inc: force bfd linker for armv7a
Workaround for linker errors seen with armv7a + gold:

 | ../arm-rdk-linux-gnueabi-libtool  --tag=CC   --mode=link arm-rdk-linux-gnueabi-gcc  -march=armv7-a -mfloat-abi=hard -mtune=cortex-a15 --sysroot=.../build/tmp/sysroots/eos -I.../build/tmp/sysroots/eos/usr/include/freetype2 -I.../build/tmp/sysroots/eos/usr/include/libpng16 -Wall -Wstrict-prototypes -Wmissing-prototypes -Wno-strict-aliasing -Werror-implicit-function-declaration -O3 -g2 -ffast-math -pipe -O2 -pipe -g -feliminate-unused-debug-types -D_GNU_SOURCE  -std=gnu99 -Werror-implicit-function-declaration  -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -o directfb-csource directfb-csource.o -lpng16 -ldl -lrt -lpthread
 | arm-rdk-linux-gnueabi-libtool: link: arm-rdk-linux-gnueabi-gcc -march=armv7-a -mfloat-abi=hard -mtune=cortex-a15 --sysroot=.../build/tmp/sysroots/eos -I.../build/tmp/sysroots/eos/usr/include/freetype2 -I.../build/tmp/sysroots/eos/usr/include/libpng16 -Wall -Wstrict-prototypes -Wmissing-prototypes -Wno-strict-aliasing -Werror-implicit-function-declaration -O3 -g2 -ffast-math -pipe -O2 -pipe -g -feliminate-unused-debug-types -D_GNU_SOURCE -std=gnu99 -Werror-implicit-function-declaration -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -o directfb-csource directfb-csource.o  .../build/tmp/sysroots/eos/usr/lib/libpng16.so -lz -lm -ldl -lrt -lpthread
 | .../build/tmp/sysroots/x86_64-linux/usr/bin/arm-rdk-linux-gnueabi/../../libexec/arm-rdk-linux-gnueabi/gcc/arm-rdk-linux-gnueabi/5.2.0/ld: error: directfb-csource.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
 | collect2: error: ld returned 1 exit status

(From OE-Core rev: 0f0f16d3955f1428d1691a4edfe48cf00defed21)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
75ca2c8682 texinfo: don't create dependency on INHERIT variable
* we don't want the do_package signature depending on the INHERIT variable
* e.g. just adding own-mirrors causes texinfo to rebuild:
  # bitbake-diffsigs BUILD/sstate-diff/*/*/texinfo/*do_package.sig*
  basehash changed from 015df2fd8e396cc1e15622dbac843301 to 9f1d06c4f238c70a99ccb6d8da348b6a
  Variable INHERIT value changed from
  ' rm_work blacklist blacklist report-error ${PACKAGE_CLASSES} ${USER_CLASSES} ${INHERIT_DISTRO} ${INHERIT_BLACKLIST} sanity'
  to
  ' rm_work own-mirrors blacklist blacklist report-error ${PACKAGE_CLASSES} ${USER_CLASSES} ${INHERIT_DISTRO} ${INHERIT_BLACKLIST} sanity'

(From OE-Core rev: 9cee82c8267f8bc0cb5fa4c7313f9682edf1ce2d)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
02c7b3f271 package_manager.py: define info_dir and status_file when OPKGLIBDIR isn't the default
* without this the do_rootfs task doesn't respect OPKGLIBDIR, and the
  info and status files are created in a different directory than opkg on
  the target expects
* people who modify OPKGLIBDIR need to make sure that the opkg.conf included
  in the opkg package also sets the info_dir and status_file options (see
  the sketch below)
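
A hedged sketch of the matching on-target opkg.conf entries, assuming
OPKGLIBDIR was changed to /usr/lib:

  # Keep opkg's on-target view in sync with what do_rootfs produced
  echo 'option info_dir    /usr/lib/opkg/info'   >> /etc/opkg/opkg.conf
  echo 'option status_file /usr/lib/opkg/status' >> /etc/opkg/opkg.conf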

(From OE-Core rev: 48a6d618d4b39058bf04a6cb0d8c076ae5da4013)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Ross Burton
003c94f7d9 libsdl2: require GLES when building Wayland support
The Wayland support requires GLES2 to be enabled as otherwise the EGL support
code in SDL2 isn't enabled.

| In file included from .../SDL2-2.0.3/src/video/wayland/SDL_waylandvideo.c:34:0:
| .../SDL2-2.0.3/src/video/wayland/SDL_waylandvideo.c: In function 'Wayland_CreateDevice':
| .../SDL2-2.0.3/src/video/wayland/SDL_waylandopengles.h:38:38: error: 'SDL_EGL_GetSwapInterval' undeclared (first use in this function)
|  #define Wayland_GLES_GetSwapInterval SDL_EGL_GetSwapInterval

Solve this by adding gles2 to the Wayland PACKAGECONFIG option.
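
For older recipes, a user-level workaround could look like this in local.conf
(hedged; it assumes libsdl2 exposes both wayland and gles2 as PACKAGECONFIG
options, as this change implies):

  echo 'PACKAGECONFIG_append_pn-libsdl2 = " wayland gles2"' >> conf/local.conf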

(From OE-Core rev: 0f7f15ed02ec0f7b08b9ef62f6eca6c0c1e5a73f)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
ad6db0121f gst-plugins-bad: add PACKAGECONFIGs for voamrwbenc, voaacenc, resindvd
* allows them to be easily enabled and fixes:
WARNING: QA Issue: gstreamer1.0-plugins-bad: Files/directories were installed but not shipped in any package:
  /usr/share/gstreamer-1.0
  /usr/share/gstreamer-1.0/presets
  /usr/share/gstreamer-1.0/presets/GstVoAmrwbEnc.prs
Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
gstreamer1.0-plugins-bad: 3 installed and not shipped files. [installed-vs-shipped]

(From OE-Core rev: 7d45881da23dca70334400f556ed198126190cea)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
f0d87fea69 gstreamer1.0-plugins-good: fix PACKAGECONFIG for gudev and add one for v4l2 and libv4l2
* WARN: gstreamer1.0-plugins-good: gstreamer1.0-plugins-good-video4linux2 rdepends on libcap, but it isn't a build dependency?
  WARN: gstreamer1.0-plugins-good: gstreamer1.0-plugins-good-video4linux2 rdepends on libgudev, but it isn't a build dependency?
  WARN: gstreamer1.0-plugins-good: gstreamer1.0-plugins-good-video4linux2 rdepends on libudev, but it isn't a build dependency?
  WARN: gstreamer1.0-plugins-good: gstreamer1.0-plugins-good-video4linux2 rdepends on zlib, but it isn't a build dependency?

(From OE-Core rev: 66e32244aed8d33f1b49fbe78179f2442545c730)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
35f34a61b3 gstreamer1.0-plugins-bad: fix dependencies for uvch264 PACKAGECONFIG
* ERROR: gstreamer1.0-plugins-bad: gstreamer1.0-plugins-bad-uvch264 package isn't created when building with minimal dependencies?
* ERROR: gstreamer1.0-plugins-bad: gstreamer1.0-plugins-bad-uvch264-dev package isn't created when building with minimal dependencies?

* it's because it should depend on libgudev, not udev:
  configure: *** for plug-ins: uvch264 ***
  checking linux/uvcvideo.h usability... yes
  checking linux/uvcvideo.h presence... yes
  checking for linux/uvcvideo.h... yes
  checking for GST_VIDEO... yes
  checking for G_UDEV... no
  checking for LIBUSB... yes

(From OE-Core rev: 470f5ae7d9a7283a40f9dacdcc86f3b3b36fb572)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:26 +00:00
Martin Jansa
3b77e205c0 gstreamer1.0-plugins-{base,good}: update PACKAGECONFIGs
* there are new libavc1394, libiec61883, libraw1394, cdparanoia recipes in meta-multimedia

(From OE-Core rev: 9b21563448c2616792bfc411a8f2b9bb48e38a78)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:25 +00:00
Martin Jansa
e2d441275d libunwind: fix build for qemuarm
(From OE-Core rev: 481eab06645c633eba98de9f8e8632ce7a11c41b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:25 +00:00
Martin Jansa
ef69078072 guile, mailx, gcc, opensp, gstreamer1.0-libav, libunwind: disable thumb where it fails for qemuarm
(From OE-Core rev: 0d1ea096cde4a145b0bb6efaa8fac03de74848d1)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:25 +00:00
Martin Jansa
4700e404f3 icu: force arm mode
* otherwise it triggers the following ICE (a workaround sketch follows the log below):
ERROR: Function failed: do_compile (log file is located at /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/temp/log.do_compile.21570)
ERROR: Logfile of failure stored in: /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/temp/log.do_compile.21570
Log data follows:
| DEBUG: SITE files ['endian-little', 'bit-32', 'arm-common', 'common-linux', 'common-glibc', 'arm-linux', 'arm-linux-gnueabi', 'common']
| DEBUG: Executing shell function do_compile
| NOTE: make
| Note: rebuild with "make VERBOSE=1 " to show all compiler parameters.
| make[0]: Making `all' in `stubdata'
| make[1]: Entering directory '/OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/build/stubdata'
| make[1]: Nothing to be done for 'all'.
| make[1]: Leaving directory '/OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/build/stubdata'
| make[0]: Making `all' in `common'
| make[1]: Entering directory '/OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/build/common'
|    arm-oe-linux-gnueabi-gcc    ...  /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/common/ubidiwrt.c
| /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/common/ubidiwrt.c: In function 'ubidi_writeReordered_53':
| /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/common/ubidiwrt.c:643:1: internal compiler error: in patch_jump_insn, at cfgrtl.c:1275
|  }
|  ^
| Please submit a full bug report,
| with preprocessed source if appropriate.
| See <http://gcc.gnu.org/bugs.html> for instructions.
| *** Failed compilation command follows: ----------------------------------------------------------
| arm-oe-linux-gnueabi-gcc -march=armv4t -mthumb -mthumb-interwork -mtune=arm920t --sysroot=/OE/build/shr-core/tmp-eglibc/sysroots/om-gta02 -D_REENTRANT -DU_HAVE_ELF_H=1 -DU_HAVE_ATOMIC=1 -I/OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/common -DDEFAULT_ICU_PLUGINS="/usr/lib/icu"  -DU_ATTRIBUTE_DEPRECATED= -DU_COMMON_IMPLEMENTATION -O2 -pipe -g -feliminate-unused-debug-types -std=c99 -Wall -pedantic -Wshadow -Wpointer-arith -Wmissing-prototypes -Wwrite-strings -c -DPIC -fPIC -o ubidiwrt.o /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/common/ubidiwrt.c
| --- ( rebuild with "make VERBOSE=1 all" to show all parameters ) --------
| /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/icu/source/config/mh-linux:44: recipe for target 'ubidiwrt.o' failed
| make[1]: *** [ubidiwrt.o] Error 1
| make[1]: Leaving directory '/OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/build/common'
| Makefile:141: recipe for target 'all-recursive' failed
| make: *** [all-recursive] Error 2
| ERROR: oe_runmake failed
| WARNING: /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/temp/run.do_compile.21570:1 exit 1 from
|   exit 1
| ERROR: Function failed: do_compile (log file is located at /OE/build/shr-core/tmp-eglibc/work/arm920tt-oe-linux-gnueabi/icu/53.1-r0/temp/log.do_compile.21570)
NOTE: recipe icu-53.1-r0: task do_compile: Failed
ERROR: Task 6803 (/OE/build/shr-core/openembedded-core/meta/recipes-support/icu/icu_53.1.bb, do_compile) failed with exit code '1'
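
A minimal workaround sketch (it assumes the standard OE ARM_INSTRUCTION_SET
mechanism is what this change uses; the layer path is hypothetical):

  # Build icu in ARM (not Thumb) mode to dodge the ICE shown above
  echo 'ARM_INSTRUCTION_SET = "arm"' >> meta-mylayer/recipes-support/icu/icu_%.bbappend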

(From OE-Core rev: 07ec50eb553a1ac8a7780223d68f83bf9c79d4d5)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:25 +00:00
Khem Raj
743ee049b8 libxcb: Add a workaround for gcc5 bug on mips
This fixes build failure for libxcb on mips

(From OE-Core master rev: cad52140997e86c6fee4938369dfce21767f1a63)

(From OE-Core rev: 175397f8ca2e9d311965ebe040b253830a98e409)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-12 08:42:25 +00:00
Christopher Larson
8a3deca4a4 bitbake: fetch: use orig localpath when calling orig method
When a mirror tarball is fetched, the original fetch method is called, which
unpacks the mirror tarball. After the original method is called, it checks the
localpath of the mirror tarball rather than the clone path, which isn't ideal,
particularly if the mirror tarball was removed due to being out of date. We
know the original fetch method will do what it needs to do to get its content
in the form it needs from the mirror tarball, so we can use its localpath
instead.

(Bitbake rev: 022fe4481dc80121abb04e8a2b357722bc806475)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-08 12:10:32 +00:00
Leonardo Sandoval
0073b234d7 yocto-bsp: Typo on the file extension
By mistake, the file initially had the wrong extension, so change it to the
correct one.

(From meta-yocto master rev: 32c2278b8fe93429d4cfa097eefccd20157cd3b8)

(From meta-yocto rev: 4bc43893cc437e4278f7332b4486a196a7d0315d)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:55 +00:00
Scott Rifenbark
71dbbcd0c8 bsp-guide: Updated the license statement.
Changed the license statement to not be "non-commercial".

(From yocto-docs rev: 42124666b6ba2f5673807bdfc40624b79c5870de)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:55 +00:00
Anibal Limon
41f1026849 dev-manual: Correction to the KVM stuff in the runqemu commands.
Applied this patch from Anibal to correct an earlier patch.

(From yocto-docs rev: 27df743fd55735addb9d2ab1164b07381908c98a)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:55 +00:00
Scott Rifenbark
38e3c6e6dd mega-manual: Added four new figures for GUI example.
Forgot to add these to the mega-manual figures folder so they
were not being found when the mega-manual was made.  This is
an issue with the tarball for jethro but will be corrected in
the HTML published versions in the jethro branch.

(From yocto-docs rev: e1c9ef040ea1540f6ba84a1b40c60394cd64443f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Scott Rifenbark
b99ec284c4 poky.ent: Fixed POKYVERSION variable.
Turns out this variable was accidentally incremented to "15.0.0"
during the release.  I did this because of skipping the YP 1.9
release.  The variable got wrapped into the tarball as the incorrect
"15.0.0".  This could be issues for anyone starting with a set
of manuals generated from the tarball release.  I updated the value
in the yocto-docs jethro branch and rebuilt the dev-manual where the
error was seven times.  Also rebuilt the mega-manual. Both corrected
versions are available on the website under the 2.0 set of manuals.

(From yocto-docs rev: 90e9495baddae9fc5a0e79410e10eaaa72f86e76)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Scott Rifenbark
c670dc77fe yocto-project-qs, ref-manual, poky.ent: CentOS Package updates
Fixes [YOCTO #8696]

Turns out the 'dnf' command is not yet supported for CentOS
as it is for Fedora, so I changed the 'dnf' command back to
'yum'.  Also, there were some essential packages that needed
to be added to CentOS.  Finally, there was a slight
inconsistency between the Fedora list of essential packages and the
ones for supporting Graphics.  I had a redundant listing of
one of the packages.  I took that out of the Graphics area and
left it only in the essentials area.

(From yocto-docs rev: b9f7bcd796d33e95a1e5da9c1af167ef8cfe9f1b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Anibal Limon
b968190e84 dev-manual: Updated runqemu command options list
Since the 2.0 release, KVM mode does not require VHOST
enablement, and a new option was added to support the
old mode.  Updated the list of runqemu command options.

(From yocto-docs rev: 2a0d7affc34ce6d018e81940106e6fe2848780ac)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Scott Rifenbark
1278753c37 toaster-manual: Removed SDKMACHINE from the json file example.
(From yocto-docs rev: ea20ff8361fe72c701b085ee82f0702ad66baa7d)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Scott Rifenbark
7b25b70884 ref-manual: Updated list of supported distros.
(From yocto-docs rev: 863367fd38df2b2c80edba27b8483fda82c4e119)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Scott Rifenbark
d9423fbd54 ref-manual: Updated the GCC 5 migration section for 2.0
Added another link to Josh's porting guide.

(From yocto-docs rev: 12161bbbf75485589275b5d60ed84ed4849c5e3d)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-01-07 12:13:54 +00:00
Paul Eggleton
347347ad78 bitbake: lib/bb/utils: improve edit_bblayers_conf() handling of bblayers.conf formatting
Make the following improvements to edit_bblayers_conf():

* Support ~ in BBLAYERS entries
* Handle where BBLAYERS items are added over multiple lines with +=
  instead of one single long item

Also add some comments documenting the function arguments and return
values as well as a set of bitbake-selftest tests.

(This function is used by the bitbake-layers add, remove and
layerindex-fetch subcommands, as well as devtool when adding the
workspace layer).

(Bitbake master rev: e9a0858023c7671e30cc8ebb08496304b7f26b31)

(Bitbake rev: fca41cf073469493e9dada377fc42d4b084c45c9)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-14 23:16:17 +00:00
Paul Eggleton
5935783f21 bitbake: lib/bb/utils: fix error in edit_metadata() when deleting first line
If you tried to delete the variable on the first line passed to
edit_metadata() this failed because the logic for trimming extra blank
lines didn't expect the list to be empty at that point - fix that bad
assumption.

(Bitbake master rev: 8bce6fefdc5c046b916588962a2b429c0f648133)

(Bitbake rev: 3fbf3f8211183ecb18938f2fc9acaa400766d9f0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-14 23:16:17 +00:00
Li Zhou
7fdad70111 rpcbind: Security Advisory - rpcbind - CVE-2015-7236
rpcbind: Fix memory corruption in PMAP_CALLIT code

Use-after-free vulnerability in xprt_set_caller in rpcb_svc_com.c in
rpcbind 0.2.1 and earlier allows remote attackers to cause a denial of
service (daemon crash) via crafted packets, involving a PMAP_CALLIT
code.

The patch comes from
<http://www.openwall.com/lists/oss-security/2015/09/18/7>, and it hasn't
been merged into rpcbind upstream yet.

(From OE-Core master rev: cc4f62f3627f3804907e8ff9c68d9321979df32b)

(From OE-Core rev: 224bcc2ead676600bcd9e290ed23d9b2ed2f481e)

Signed-off-by: Li Zhou <li.zhou@windriver.com>
Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Wenzong Fan
0cb2fa5f73 subversion: fix CVE-2015-3187
The svn_repos_trace_node_locations function in Apache Subversion before
1.7.21 and 1.8.x before 1.8.14, when path-based authorization is used,
allows remote authenticated users to obtain sensitive path information
by reading the history of a node that has been moved from a hidden path.

Patch is from:
http://subversion.apache.org/security/CVE-2015-3187-advisory.txt

(From OE-Core master rev: 6da25614edcad30fdb4bea8ff47b81ff81cdaed2)

(From OE-Core rev: e1e277bf51c6f00268358f6bf8623261b1b9bc22)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Wenzong Fan
5b52e9b086 subversion: fix CVE-2015-3184
mod_authz_svn in Apache Subversion 1.7.x before 1.7.21 and 1.8.x before
1.8.14, when using Apache httpd 2.4.x, does not properly restrict
anonymous access, which allows remote anonymous users to read hidden
files via the path name.

Patch is from:
http://subversion.apache.org/security/CVE-2015-3184-advisory.txt

(From OE-Core master rev: 29eb921ed074d86fa8d5b205a313eb3177473a63)

(From OE-Core rev: 7af7a3e692a6cd0d92768024efe32bfa7d83bc8f)

Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Bhuvanchandra DV
59bdde4327 linux-firmware: rtl8192cx: Add latest available firmware
Add latest available firmware binaries for RTL8192CX chipsets.
These new firmware files were released in 2012, have been used
by the mainline kernel as the preferred firmware since 3.13, and were
even backported to stable branches.

(master rev: 2dc67b53d1b7c056bbbff2f90ad16ed214b57609)

(From OE-Core rev: 3671e20cb31f0a5c11939f3c5ba2d088db08e705)

Signed-off-by: Bhuvanchandra DV <bhuvanchandra.dv@toradex.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Ng, Mei Yeen
8ad2bcca49 init-install-efi: fix script for gummiboot loader
After running the gummiboot loader install option, the installed target
storage device's root=PARTUUID boot parameter is empty, causing a boot failure.
This issue is only observed with gummiboot and not with the GRUB loader.

This fix assigns the root UUID of the rootfs partition for the gummiboot loader.

[YOCTO #8709]
(From OE-Core rev: 0b9f31452a65d1a8d8392b4ba9c335bd32860a6a)

Signed-off-by: Ng, Mei Yeen <mei.yeen.ng@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Ng, Mei Yeen
c3087bd977 init-install-efi: fix script for eMMC installation
Running the install option from the bootloader to install an image to eMMC
fails with the error:
Formatting /dev/mmcblk01 to vfat...
mkfs.fat 3.0.28 (2015-05-16)
/dev/mmcblk01: No such file or directory

This issue impacts both the grub and gummiboot install options to an eMMC device.
The installation failure is due to the following:
[1] Unable to partition the eMMC device because the partition prefix 'p' is
not appended; the condition check failed due to the extra /dev/ prepended
to the target device name.
[2] The partition UUIDs for the boot, root and swap partitions are not
captured for eMMC.

This fix updates the condition check and changes the variables so that the
boot, root and swap partition UUIDs are referenced correctly.
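
A minimal sketch of the device-name handling involved (illustrative shell,
not the script's exact code):

  # eMMC block devices name partitions with a 'p' separator
  # (/dev/mmcblk0p1), unlike e.g. /dev/sda1, so the prefix must be added
  device=mmcblk0
  case "$device" in
      *mmcblk*) part_prefix="p" ;;
      *)        part_prefix=""  ;;
  esac
  bootfs=/dev/${device}${part_prefix}1
  rootfs=/dev/${device}${part_prefix}2
  swap=/dev/${device}${part_prefix}3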

[YOCTO #8710]
(master rev: a7d081c3db776c8b0734942df6bf96f811f15bd3)

(From OE-Core rev: 1be316beb5c2b1e32f11ab8ec5dee68f64defb2d)

Signed-off-by: Ng, Mei Yeen <mei.yeen.ng@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:15 +00:00
Jussi Kukkonen
d2bf9fb2ca pulseaudio: Fix HDMI profile selection
On systems with two cards, the correct output profile does not get
selected automatically even in the simple case where there is one
available profile. This scenario is typical at least with HDMI audio
(which is on a separate card).

Fixes [YOCTO #8448]

(From OE-Core rev: 7d26b5f7fad5f5200f73e2a2c11874d8ccf34c59)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Mike Crowe
0556c58bff allarch: Force TARGET_*FLAGS variable values
TARGET_CPPFLAGS, TARGET_CFLAGS, TARGET_CXXFLAGS and TARGET_LDFLAGS may
differ between MACHINEs. Since they are exported they affect task hashes
even if unused which leads to multiple variants of allarch packages
existing in sstate and bouncing in the sysroot when switching between
MACHINEs.

allarch packages shouldn't be using these variables anyway, so let's
ensure they have a fixed value in order to avoid this problem.

(Compare with 05a70ac30b37cab0952f1b9df501993a9dec70da and
14f4d016fef9d660da1e7e91aec4a0e807de59ab.)

(From OE-Core rev: 16482cf042e129e8f429bdcea9c0c9addb0e8a0b)

Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Maxin B. John
e683dac7ab libsndfile: fix CVE-2014-9756
Fix divide by zero bug (CVE-2014-9756)

(From OE-Core master rev: f47cf07ab9d00ed7eddc8e867138481f7bd2bb7d)

(From OE-Core rev: 353f6d9530e9545aee5c77de348abeee9002f046)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Armin Kuster
092757ec5b libxslt: CVE-2015-7995
This is being given a High rating, so please consider it for
all 1.1.28 versions.

A type confusion error within the libxslt "xsltStylePreCompute()"
function in preproc.c can lead to a DoS. Confirmed in version 1.1.28,
other versions may also be affected.

(From OE-Core master rev: 0f89bbab6588a1171259801fa879516740030acb)

(From OE-Core rev: bc8b7401fa18f6a987041d7f93a1fa3512f8513c)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Ross Burton
dab55553b2 unzip: rename patch to reflect CVE fix
(From OE-Core rev: e3d2974348bd830ec2fcf84ea08cbf38abbc0327)

(master rev: 78e05984b1)

(From OE-Core rev: 97b247a88024083ce145f9e64ac9c9a182d02d3e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Ross Burton
1753d4a5da readline: rename patch to contain CVE reference
To help automated scanning of CVEs, put the CVE ID in the filename.

(From OE-Core master rev: 211bce4f23230c7898cccdb73b582420f830f977)

(From OE-Core rev: 6821bb42febfc5f939896b0ffbc1c00b15b9329e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Ross Burton
9dd3422bc6 libarchive: rename patch to reflect CVE
This patch is a CVE fix, so rename it to help CVE detection tools identify it as
such.

(From OE-Core master rev: 3fd05ce1f709cbbd8fdeb1dbfdffbd39922eca6e)

(From OE-Core rev: 2cc8c8066193f851ea0ed3912dee287c2d1c5257)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Mark Hatle
1401976a02 binutils: Fix octeon3 disassembly patch
The structure has apparently changed, and there was a missing
setting.  This corrects a segfault when disassembling code.

(From OE-Core master rev: 2e8f1ffe3a8d7740b0ac68eefbba3fe28f7ba6d4)

(From OE-Core rev: 6a6f5446303a9b0b858d153137244a5a101520ce)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
Alejandro del Castillo
a54a0dba10 opkg: add cache filename length fixes
(From OE-Core master rev: 8e53500a7c05204fc63759f456639545a022e82b)

(From OE-Core rev: 71ad09cfe9c43a113295c95a0fb0899d44f2bb7e)

Signed-off-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2015-12-08 10:27:14 +00:00
2899 changed files with 128993 additions and 93354 deletions

.gitignore

@@ -1,7 +1,7 @@
*.pyc
*.pyo
/*.patch
/build*/
build*/
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
@@ -14,7 +14,6 @@ hob-image-*.bb
*.orig
*.rej
*~
!meta-poky
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported


@@ -1,2 +1,2 @@
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
TEMPLATECONF=${TEMPLATECONF:-meta-yocto/conf}

README

@@ -42,7 +42,7 @@ documentation:
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
Mailing list: yocto@yoctoproject.org
meta-poky, meta-yocto-bsp:
meta-yocto(-bsp):
Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
Mailing list: poky@yoctoproject.org


@@ -8,5 +8,3 @@ Foundation and individual contributors.
* Twitter Bootstrap (including Glyphicons), redistributed under the Apache License 2.0.
* jQuery is redistributed under the MIT license.
* QUnit is redistributed under the MIT license.


@@ -35,7 +35,7 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
__version__ = "1.31.0"
__version__ = "1.28.0"
if __name__ == "__main__":
if __version__ != bb.__version__:


@@ -95,7 +95,7 @@ def find_compare_task(bbhandler, pn, taskname):
# Recurse into signature comparison
output = bb.siggen.compare_sigfiles(latestfiles[0], latestfiles[1], recursecb)
if output:
print('\n'.join(output))
print '\n'.join(output)
sys.exit(0)
@@ -130,9 +130,9 @@ else:
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying sigdata/siginfo files')
sys.exit(1)
if output:
print('\n'.join(output))
print '\n'.join(output)


@@ -57,9 +57,9 @@ else:
except IOError as e:
logger.error(str(e))
sys.exit(1)
except (pickle.UnpicklingError, EOFError):
except cPickle.UnpicklingError, EOFError:
logger.error('Invalid signature data - ensure you are specifying a sigdata/siginfo file')
sys.exit(1)
if output:
print('\n'.join(output))
print '\n'.join(output)


@@ -654,7 +654,7 @@ build results (as the layer priority order has effectively changed).
logger.plain(' Skipping layer config file %s' % f1full )
continue
else:
logger.warning('Overwriting file %s', fdest)
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
for append in self.bbhandler.cooker.collection.get_file_appends(f1full):
@@ -790,8 +790,8 @@ Lists recipes with the bbappends that apply to them as subitems.
if best_filename:
if best_filename in missing:
logger.warning('%s: missing append for preferred version',
best_filename)
logger.warn('%s: missing append for preferred version',
best_filename)
return True
else:
return False
@@ -1068,5 +1068,5 @@ if __name__ == "__main__":
except Exception:
ret = 1
import traceback
traceback.print_exc()
traceback.print_exc(5)
sys.exit(ret)


@@ -50,6 +50,6 @@ if __name__ == "__main__":
except Exception:
ret = 1
import traceback
traceback.print_exc()
traceback.print_exc(5)
sys.exit(ret)


@@ -18,7 +18,7 @@ if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
sys.exit(1)
profiling = False
if sys.argv[1].startswith("decafbadbad"):
if sys.argv[1] == "decafbadbad":
profiling = True
try:
import cProfile as profile
@@ -159,8 +159,7 @@ def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdat
pipeout = os.fdopen(pipeout, 'wb', 0)
pid = os.fork()
except OSError as e:
logger.critical("fork failed: %d (%s)" % (e.errno, e.strerror))
sys.exit(1)
bb.msg.fatal("RunQueue", "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
def child():
@@ -203,8 +202,6 @@ def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdat
the_data = bb.cache.Cache.loadDataFull(fn, appends, data)
the_data.setVar('BB_TASKHASH', workerdata["runq_hash"][task])
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN", True), taskname.replace("do_", "")))
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -299,16 +296,12 @@ class BitbakeWorker(object):
signal.signal(signal.SIGTERM, self.sigterm_exception)
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, self.sigterm_exception)
if "beef" in sys.argv[1]:
bb.utils.set_process_name("Worker (Fakeroot)")
else:
bb.utils.set_process_name("Worker")
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Worker received SIGTERM, shutting down...")
bb.warn("Worker recieved SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Worker received SIGHUP, shutting down...")
bb.warn("Worker recieved SIGHUP, shutting down...")
self.handle_finishnow(None)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.kill(os.getpid(), signal.SIGTERM)
@@ -365,7 +358,7 @@ class BitbakeWorker(object):
def handle_ping(self, _):
workerlog_write("Handling ping\n")
logger.warning("Pong from bitbake-worker!")
logger.warn("Pong from bitbake-worker!")
def handle_quit(self, data):
workerlog_write("Handling quit\n")

View File

@@ -462,7 +462,7 @@ def main():
state_group = 2
for key in bb.data.keys(documentation):
data = documentation.getVarFlag(key, "doc", False)
data = documentation.getVarFlag(key, "doc")
if not data:
continue

View File

@@ -119,4 +119,4 @@ if __name__ == '__main__':
gtk.main()
except Exception:
import traceback
traceback.print_exc()
traceback.print_exc(3)

View File

@@ -1,8 +1,5 @@
#!/bin/echo ERROR: This script needs to be sourced. Please run as .
# toaster - shell script to start Toaster
# Copyright (C) 2013-2015 Intel Corp.
#!/bin/sh
# (c) 2013 Intel Corp.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -15,26 +12,32 @@
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
HELP="
Usage: source toaster start|stop [webport=<address:port>] [noweb]
Optional arguments:
[noweb] Setup the environment for building with toaster but don't start the development server
[webport] Set the development server (default: localhost:8000)
"
# This script can be run in two modes.
# When used with "source", from a build directory,
# it enables toaster event logging and starts the bitbake resident server.
# use as: source toaster [start|stop] [noweb] [noui]
# When it is called as a stand-alone script, it starts just the
# web server, and the building shall be done through the web interface.
# As script, it will not return to the command prompt. Stop with Ctrl-C.
# Helper function to kill a background toaster development server
webserverKillAll()
{
local pidfile
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
for pidfile in ${BUILDDIR}/.toastermain.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM -$pid 2>/dev/null
sleep 1
# Kill processes if they are still running - may happen
# in interactive shells
# Kill processes if they are still running - may happen in interactive shells
ps fux | grep "python.*manage.py runserver" | awk '{print $2}' | xargs kill
done
rm ${pidfile}
@@ -52,15 +55,35 @@ webserverStartAll()
retval=0
# you can always add a superuser later via
# ../bitbake/lib/toaster/manage.py createsuperuser --username=<ME>
$MANAGE migrate --noinput || retval=1
# python bitbake/lib/toaster/manage.py python manage.py createsuperuser --username=<ME>
python $BBBASEDIR/lib/toaster/manage.py syncdb --noinput || retval=1
python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=2
if [ $retval -eq 1 ]; then
echo "Failed migrations, aborting system start" 1>&2
echo "Failed db sync, aborting system start" 1>&2
return $retval
fi
$MANAGE checksettings --traceback || retval=1
python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=1
if [ $retval -eq 1 ]; then
printf "\nError on orm migration, rolling back...\n"
python $BBBASEDIR/lib/toaster/manage.py migrate orm 0001_initial --fake
return $retval
fi
python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol || retval=1
if [ $retval -eq 1 ]; then
printf "\nError on bldcontrol migration, rolling back...\n"
python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol 0001_initial --fake
return $retval
fi
if [ "$TOASTER_MANAGED" = '1' ]; then
python $BBBASEDIR/lib/toaster/manage.py checksettings --traceback || retval=1
fi
if [ $retval -eq 1 ]; then
printf "\nError while checking settings; aborting\n"
@@ -69,9 +92,7 @@ webserverStartAll()
echo "Starting webserver..."
$MANAGE runserver "$ADDR_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
python $BBBASEDIR/lib/toaster/manage.py runserver "0.0.0.0:$WEB_PORT" </dev/null >>${BUILDDIR}/toaster_web.log 2>&1 & echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
@@ -79,13 +100,22 @@ webserverStartAll()
retval=1
rm "${BUILDDIR}/.toastermain.pid"
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
echo "Webserver address: http://0.0.0.0:$WEB_PORT/"
fi
return $retval
}
# Helper functions to add a special configuration file
addtoConfiguration()
{
file=$1
shift
echo "#Created by toaster start script" > ${BUILDDIR}/conf/$file
for var in "$@"; do echo $var >> ${BUILDDIR}/conf/$file; done
}
INSTOPSYSTEM=0
# define the stop command
@@ -98,34 +128,45 @@ stop_system()
kill `cat ${BUILDDIR}/.toasterui.pid` 2>/dev/null
rm ${BUILDDIR}/.toasterui.pid
fi
BBSERVER=0.0.0.0:-1 bitbake -m
unset BBSERVER
webserverKillAll
# unset exported variables
unset DATABASE_URL
unset TOASTER_CONF
unset TOASTER_DIR
unset BITBAKE_UI
unset BBBASEDIR
# force stop any misbehaving bitbake server
lsof bitbake.lock | awk '{print $2}' | grep "[0-9]\+" | xargs -n1 -r kill
trap - SIGHUP
#trap - SIGCHLD
INSTOPSYSTEM=0
}
check_pidbyfile() {
[ -e $1 ] && kill -0 `cat $1` 2>/dev/null
}
notify_chldexit() {
if [ $NOTOASTERUI -eq 0 ]; then
check_pidbyfile ${BUILDDIR}/.toasterui.pid && return
stop_system
fi
}
verify_prereq() {
# Verify Django version
reqfile=$(python -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
exp=$exp'import sys,django;version=django.get_version().split(".");'
exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
if ! sed -n "$exp" $reqfile | python - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
echo "Please install with pip install -r $reqfile"
# Verify prerequisites
if ! echo "import django; print (1,) == django.VERSION[0:1] and django.VERSION[1:2][0] in (6,)" | python 2>/dev/null | grep True >/dev/null; then
printf "This program needs Django 1.6. Please install with\n\npip install django==1.6\n"
return 2
fi
if ! echo "import south; print reduce(lambda x, y: 2 if x==2 else 0 if x == 0 else y, map(lambda x: 1+cmp(x[1]-x[0],0), zip([0,8,4], map(int,south.__version__.split(\".\"))))) > 0" | python 2>/dev/null | grep True >/dev/null; then
printf "This program needs South 0.8.4. Please install with\n\npip install south==0.8.4\n"
return 2
fi
return 0
}
# read command line parameters
if [ -n "$BASH_SOURCE" ] ; then
TOASTER=${BASH_SOURCE}
@@ -135,42 +176,33 @@ else
TOASTER=$0
fi
export BBBASEDIR=`dirname $TOASTER`/..
MANAGE=$BBBASEDIR/lib/toaster/manage.py
OEROOT=`dirname $TOASTER`/../..
[ `basename \"$0\"` = `basename \"${TOASTER}\"` ] && TOASTER_MANAGED=1
BBBASEDIR=`dirname $TOASTER`/..
RUNNING=0
NOTOASTERUI=0
WEBSERVER=1
TOASTER_BRBE=""
if [ "$WEB_PORT" = "" ]; then
WEB_PORT="8000"
fi
# this is the configuraton file we are using for toaster
# we are using the same logic that oe-setup-builddir uses
# (based on TEMPLATECONF and .templateconf) to determine
# which toasterconf.json to use.
# note: There are a number of relative path assumptions
# in the local layers that currently make using an arbitrary
# toasterconf.json difficult.
. $OEROOT/.templateconf
if [ -n "$TEMPLATECONF" ]; then
if [ ! -d "$TEMPLATECONF" ]; then
# Allow TEMPLATECONF=meta-xyz/conf as a shortcut
if [ -d "$OEROOT/$TEMPLATECONF" ]; then
TEMPLATECONF="$OEROOT/$TEMPLATECONF"
fi
if [ ! -d "$TEMPLATECONF" ]; then
echo >&2 "Error: '$TEMPLATECONF' must be a directory containing toasterconf.json"
return 1
fi
fi
fi
# note default is assuming yocto. Override this if you are
# running in a pure OE environment and use the toasterconf.json
# in meta/conf/toasterconf.json
# note: for future there are a number of relative path assumptions
# in the local layers that currently prevent using an arbitrary
# toasterconf.json
if [ "$TOASTER_CONF" = "" ]; then
TOASTER_CONF="$TEMPLATECONF/toasterconf.json"
export TOASTER_CONF=$(python -c "import os; print(os.path.realpath('$TOASTER_CONF'))")
TOASTER_CONF="$(dirname $TOASTER)/../../meta-yocto/conf/toasterconf.json"
export TOASTER_CONF=$(python -c "import os; print os.path.realpath('$TOASTER_CONF')")
fi
if [ ! -f $TOASTER_CONF ]; then
echo "$TOASTER_CONF configuration file not found. Set TOASTER_CONF to specify file or fix .templateconf"
return 1
echo "$TOASTER_CONF configuration file not found. set TOASTER_CONF to specify a path"
[ "$TOASTER_MANAGED" = '1' ] && exit 1 || return 1
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
@@ -180,44 +212,102 @@ fi
# make sure that the toaster.sqlite file doesn't default to `pwd` like it currently does.
export TOASTER_DIR=`pwd`
WEBSERVER=1
ADDR_PORT="localhost:8000"
unset CMD
NOBROWSER=0
for param in $*; do
case $param in
noui )
NOTOASTERUI=1
;;
noweb )
WEBSERVER=0
;;
start )
CMD=$param
nobrowser )
NOBROWSER=1
;;
stop )
CMD=$param
brbe=* )
TOASTER_BRBE=$'\n'"TOASTER_BRBE=\""${param#*=}"\""
;;
webport=*)
ADDR_PORT="${param#*=}"
# Split the addr:port string
ADDR=`echo $ADDR_PORT | cut -f 1 -d ':'`
PORT=`echo $ADDR_PORT | cut -f 2 -d ':'`
# If only a port has been speified then set address to localhost.
if [ $ADDR = $PORT ] ; then
ADDR_PORT="localhost:$PORT"
fi
;;
*)
echo "$HELP"
return 1
;;
WEB_PORT="${param#*=}"
esac
done
if [ `basename \"$0\"` = `basename \"${TOASTER}\"` ]; then
echo "Error: This script needs to be sourced. Please run as . $TOASTER"
if [ "$TOASTER_MANAGED" = '1' ]; then
# We are called as standalone. We refuse to run in a build environment - we need the interactive mode for that.
# Start just the web server, point the web browser to the interface, and start any Django services.
if ! verify_prereq; then
echo "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2
exit 1
fi
if [ -n "$BUILDDIR" ]; then
printf "Error: It looks like you sourced oe-init-build-env. Toaster cannot start in build mode from an oe-core build environment.\n You should be starting Toaster from a new terminal window.\n" 1>&2
exit 1
fi
# Define a fake builddir where only the pid files are actually created. No real builds will take place here.
BUILDDIR=/tmp/toaster_$$
if [ -d "$BUILDDIR" ]; then
echo "Previous toaster run directory $BUILDDIR found, cowardly refusing to start. Please remove the directory when that toaster instance is over" 2>&1
exit 1
fi
mkdir -p "$BUILDDIR"
RUNNING=1
trap_ctrlc() {
echo "** Stopping system"
webserverKillAll
RUNNING=0
}
do_cleanup() {
find "$BUILDDIR" -type f | xargs rm
rmdir "$BUILDDIR"
}
cleanup() {
if grep -ir error "$BUILDDIR" >/dev/null; then
if grep -irn "That port is already in use" "$BUILDDIR"; then
echo "You can use the \"webport=PORTNUMBER\" parameter to start Toaster on a different port (port $WEB_PORT is already in use)"
do_cleanup
else
printf "\nErrors found in the Toaster log files present in '$BUILDDIR'. Directory will not be cleaned.\n Please review the errors and notify toaster@yoctoproject.org or submit a bug https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Toaster"
fi
else
echo "No errors found, removing the run directory '$BUILDDIR'"
do_cleanup
fi
}
export TOASTER_MANAGED=1
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed to start the web server, stopping" 1>&2
cleanup
exit 1
fi
if [ $WEBSERVER -gt 0 ] && [ $NOBROWSER -eq 0 ] ; then
echo "Starting browser..."
xdg-open http://127.0.0.1:$WEB_PORT/ >/dev/null 2>&1 &
fi
trap trap_ctrlc 2
echo "Toaster is now running. You can stop it with Ctrl-C"
while [ $RUNNING -gt 0 ]; do
python $BBBASEDIR/lib/toaster/manage.py runbuilds 2>&1 | tee -a "$BUILDDIR/toaster.log"
sleep 1
done
cleanup
echo "**** Exit"
exit 0
fi
if ! verify_prereq; then
echo "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2
return 1
fi
verify_prereq || return 1
# We make sure we're running in the current shell and in a good environment
if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
@@ -225,68 +315,84 @@ if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
return 2
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
# note: for future. in order to make this an arbitrary directory, we need to
# make sure that the toaster.sqlite file doesn't default to `pwd`
# like it currently does.
export TOASTER_DIR=`dirname $BUILDDIR`
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
if [ -n "$BBSERVER" ]; then
echo " Toaster is already running. Exiting..."
return 1
fi
elif [ "$CMD" = "" ]; then
echo "No command specified"
echo "$HELP"
return 1
if [ "$1" = 'start' ] || [ "$1" = 'stop' ]; then
CMD="$1"
else
if [ -z "$BBSERVER" ]; then
CMD="start"
else
CMD="stop"
fi
fi
echo "The system will $CMD."
# Make sure it's safe to run by checking bitbake lock
lock=1
if [ -e $BUILDDIR/bitbake.lock ]; then
python -c "import fcntl; fcntl.flock(open(\"$BUILDDIR/bitbake.lock\"), fcntl.LOCK_EX|fcntl.LOCK_NB)" 2>/dev/null || lock=0
fi
if [ ${CMD} = 'start' ] && [ $lock -eq 0 ]; then
echo "Error: bitbake lock state error. File locks show that the system is on." 1>&2
echo "Please wait for the current build to finish, stop and then start the system again." 1>&2
return 3
fi
if [ ${CMD} = 'start' ] && [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
echo "Warning: bitbake appears to be dead, but the Toaster web server is running. Something fishy is going on." 1>&2
echo "Cleaning up the web server to start from a clean slate."
webserverKillAll
fi
# Execute the commands
case $CMD in
start )
# check if addr:port is not in use
if [ "$CMD" == 'start' ]; then
if [ $WEBSERVER -gt 0 ]; then
$MANAGE checksocket "$ADDR_PORT" || return 1
fi
fi
# kill Toaster web server if it's alive
if [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
echo "Warning: bitbake appears to be dead, but the Toaster web server is running." 1>&2
echo " Something fishy is going on." 1>&2
echo "Cleaning up the web server to start from a clean slate."
webserverKillAll
fi
# Create configuration file
conf=${BUILDDIR}/conf/local.conf
line='INHERIT+="toaster buildhistory"'
grep -q "$line" $conf || echo $line >> $conf
start_success=1
addtoConfiguration toaster.conf "INHERIT+=\"toaster buildhistory\"" $TOASTER_BRBE
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
export DATABASE_URL=`$MANAGE get-dburl`
$MANAGE runbuilds & echo $! >${BUILDDIR}/.runbuilds.pid
# set fail safe stop system on terminal exit
unset BBSERVER
PREREAD=""
if [ -e ${BUILDDIR}/conf/toaster-pre.conf ]; then
rm ${BUILDDIR}/conf/toaster-pre.conf
fi
bitbake $PREREAD --postread conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:0
if [ $? -ne 0 ]; then
start_success=0
echo "Bitbake server start failed"
else
export BBSERVER=0.0.0.0:-1
if [ $NOTOASTERUI -eq 0 ]; then # we start the TOASTERUI only if not inhibited
bitbake --observe-only -u toasterui >>${BUILDDIR}/toaster_ui.log 2>&1 & echo $! >${BUILDDIR}/.toasterui.pid
fi
fi
if [ $start_success -eq 1 ]; then
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
return 0
else
# failed start, do stop
stop_system
echo "Failed ${CMD}."
return 1
fi
# stop system on terminal exit
set -o monitor
trap stop_system SIGHUP
echo "Successful ${CMD}."
return 0
#trap notify_chldexit SIGCHLD
;;
stop )
stop_system
echo "Successful ${CMD}."
;;
esac
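The rewritten toaster script above is meant to be sourced from an initialized build directory rather than executed directly. A usage sketch following the HELP text in the hunk; the relative path to the script and the build directory location are assumptions about a typical checkout, and <target> stands for any buildable recipe:

    $ cd <builddir>                                    # an existing BitBake build directory (assumed layout)
    $ . ../bitbake/bin/toaster start webport=0.0.0.0:8400
    $ bitbake <target>                                 # monitor the build in the Toaster web interface
    $ . ../bitbake/bin/toaster stop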

View File

@@ -36,7 +36,7 @@ def main(argv=None):
Get the mapping for the target recipe.
"""
if len(argv) != 1:
print("Error, need one argument!", file=sys.stderr)
print >>sys.stderr, "Error, need one argument!"
return 2
cachefile = argv[0]
@@ -56,7 +56,7 @@ def main(argv=None):
continue
# 1.0 is the default version for a no PV recipe.
if "pv" in val.__dict__:
if val.__dict__.has_key("pv"):
pv = val.pv
else:
pv = "1.0"
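The hunks above are further Python 3 portability fixes: `print >>sys.stderr, ...` becomes the `print()` function with a `file=` argument, and the removed `dict.has_key()` method becomes an `in` membership test. A standalone illustration (the dictionary stands in for the recipe-info object used in the script):

    import sys

    val = {"pv": "1.0"}

    # print is a function in Python 3; stderr is selected with file=
    print("Error, need one argument!", file=sys.stderr)

    # has_key() no longer exists in Python 3; "in" works on both 2 and 3
    if "pv" in val:
        pv = val["pv"]
    else:
        pv = "1.0"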

View File

@@ -42,9 +42,9 @@
<literallayout class='monospaced'>
$ grep processor /proc/cpuinfo
</literallayout>
This command returns the number of processors, which takes into
account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely
This command returns the number of processors, which takes into
account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely
shows eight processors, which is the value you would then assign to
<filename>BB_NUMBER_THREADS</filename>.
</para>
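The paragraph above derives BB_NUMBER_THREADS from the processor count reported by /proc/cpuinfo. A short sketch for the eight-thread host described in the text; the -c flag simply counts the matching lines, and writing the value into conf/local.conf assumes a configuration file the build already reads:

    $ grep -c processor /proc/cpuinfo
    8
    $ echo 'BB_NUMBER_THREADS = "8"' >> conf/local.conf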
@@ -116,35 +116,16 @@
Prior to parsing configuration files, Bitbake looks
at certain variables, including:
<itemizedlist>
<listitem><para>
<link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link>
</para></listitem>
<listitem><para>
<link linkend='var-BB_ORIGENV'><filename>BB_ORIGENV</filename></link>
</para></listitem>
<listitem><para><link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link></para></listitem>
<listitem><para><link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link></para></listitem>
<listitem><para><link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link></para></listitem>
<listitem><para>
<link linkend='var-BITBAKE_UI'><filename>BITBAKE_UI</filename></link>
</para></listitem>
</itemizedlist>
The first four variables in this list relate to how BitBake treats shell
environment variables during task execution.
By default, BitBake cleans the environment variables and provides tight
control over the shell execution environment.
However, through the use of these first four variables, you can
apply your control regarding the
environment variables allowed to be used by BitBake in the shell
during execution of tasks.
See the
"<link linkend='passing-information-into-the-build-task-environment'>Passing Information Into the Build Task Environment</link>"
section and the information about these variables in the
variable glossary for more information on how they work and
on how to use them.
You can find information on how to pass environment variables into the BitBake
execution environment in the
"<link linkend='passing-information-into-the-build-task-environment'>Passing Information Into the Build Task Environment</link>" section.
</para>
<para>

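The hunk above reshuffles the list of variables BitBake examines before parsing (BB_ENV_WHITELIST, BB_ENV_EXTRAWHITE, BB_PRESERVE_ENV, BB_ORIGENV). A brief sketch of how the whitelist variables are typically used from the external shell; MYVAR is a placeholder, and showing BB_PRESERVE_ENV set to "1" assumes any non-empty value enables it:

    # allow one extra external environment variable into the datastore
    export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE MYVAR"

    # or (rarely) keep the entire external environment instead of cleaning it
    export BB_PRESERVE_ENV="1"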
View File

@@ -47,6 +47,7 @@
-rw-rw-r--. 1 wmat wmat 849 Nov 26 04:55 HEADER
drwxrwxr-x. 5 wmat wmat 4096 Jan 31 13:44 lib
-rw-rw-r--. 1 wmat wmat 195 Nov 26 04:55 MANIFEST.in
-rwxrwxr-x. 1 wmat wmat 3195 Jan 31 11:57 setup.py
-rw-rw-r--. 1 wmat wmat 2887 Nov 26 04:55 TODO
</literallayout>
</para>
@@ -134,7 +135,7 @@
<ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
</para></listitem>
<listitem><para>
<ulink url="http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
<ulink url="https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
</para></listitem>
</itemizedlist>
</note>
@@ -269,7 +270,7 @@
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename>,
see
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
@@ -354,7 +355,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
supporting.
For more information on the <filename>base.bbclass</filename> file,
you can look at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
</para></listitem>
<listitem><para><emphasis>Run Bitbake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
@@ -376,7 +377,7 @@ ERROR: Unable to parse base: ParseError in configuration INHERITs: Could not inh
Thus, this example creates and uses a layer called "mylayer".
<note>
You can find additional information on adding a layer at
<ulink url='http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
<ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
</note>
</para>
<para>Minimally, you need a recipe file and a layer configuration

View File

@@ -471,7 +471,7 @@
Following is the usage and syntax for BitBake:
<literallayout class='monospaced'>
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
Usage: bitbake [options] [recipename/target ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
@@ -529,11 +529,9 @@
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
-u UI, --ui=UI The user interface to use (depexp, goggle, hob, knotty
or ncurses - default knotty).
-u UI, --ui=UI The user interface to use (e.g. knotty, hob, depexp).
-t SERVERTYPE, --servertype=SERVERTYPE
Choose which server type to use (process or xmlrpc -
default process).
Choose which server to use, process or xmlrpc.
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
@@ -548,10 +546,6 @@
-m, --kill-server Terminate the remote server.
--observe-only Connect to a server as an observing-only client.
--status-only Check the status of the remote bitbake server.
-w WRITEEVENTLOG, --write-log=WRITEEVENTLOG
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
</literallayout>
</para>
</section>
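The hunk above updates the bitbake --help output, including the newer recipename/target and recipe:do_task target forms. An invocation sketch matching that usage line; mylayer-recipe is a placeholder recipe name:

    $ bitbake mylayer-recipe                 # run the default 'build' task for one recipe
    $ bitbake mylayer-recipe:do_compile      # the recipe:do_task form from the usage line above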

View File

@@ -379,12 +379,6 @@
You can use <filename>OVERRIDES</filename> to conditionally select
a specific version of a variable and to conditionally
append or prepend the value of a variable.
<note>
Overrides can only use lower-case characters.
Additionally, underscores are not permitted in override names
as they are used to separate overrides from each other and
from the variable name.
</note>
<itemizedlist>
<listitem><para><emphasis>Selecting a Variable:</emphasis>
The <filename>OVERRIDES</filename> variable is
@@ -609,13 +603,8 @@
BitBake uses the
<link linkend='var-BBPATH'><filename>BBPATH</filename></link>
variable to locate needed include and class files.
Additionally, BitBake searches the current directory for
<filename>include</filename> and <filename>require</filename>
directives.
<note>
The <filename>BBPATH</filename> variable is analogous to
the environment variable <filename>PATH</filename>.
</note>
The <filename>BBPATH</filename> variable is analogous to
the environment variable <filename>PATH</filename>.
</para>
<para>
@@ -663,51 +652,6 @@
after the "inherit" statement.
</note>
</para>
<para>
If necessary, it is possible to inherit a class
conditionally by using
a variable expression after the <filename>inherit</filename>
statement.
Here is an example:
<literallayout class='monospaced'>
inherit ${VARNAME}
</literallayout>
If <filename>VARNAME</filename> is going to be set, it needs
to be set before the <filename>inherit</filename> statement
is parsed.
One way to achieve a conditional inherit in this case is to use
overrides:
<literallayout class='monospaced'>
VARIABLE = ""
VARIABLE_someoverride = "myclass"
</literallayout>
</para>
<para>
Another method is by using anonymous Python.
Here is an example:
<literallayout class='monospaced'>
python () {
if condition == value:
d.setVar('VARIABLE', 'myclass')
else:
d.setVar('VARIABLE', '')
}
</literallayout>
</para>
<para>
Alternatively, you could use an in-line Python expression
in the following form:
<literallayout class='monospaced'>
inherit ${@'classname' if condition else ''}
inherit ${@functionname(params)}
</literallayout>
In all cases, if the expression evaluates to an empty
string, the statement does not trigger a syntax error
because it becomes a no-op.
</para>
</section>
<section id='include-directive'>
@@ -815,16 +759,6 @@
<filename>INHERIT</filename> to inherit a class effectively
inherits the class globally (i.e. for all recipes).
</note>
If you want to use the directive to inherit
multiple classes, you can provide them on the same line in the
<filename>local.conf</filename> file.
Use spaces to separate the classes.
The following example shows how to inherit both the
<filename>autotools</filename> and <filename>pkgconfig</filename>
classes:
<literallayout class='monospaced'>
inherit autotools pkgconfig
</literallayout>
</para>
</section>
</section>
@@ -906,18 +840,6 @@
is a global variable and is always automatically
available.
</para>
<note>
Variable expressions (e.g. <filename>${X}</filename>) are no
longer expanded within Python functions.
This behavior is intentional in order to allow you to freely
set variable values to expandable expressions without having
them expanded prematurely.
If you do wish to expand a variable within a Python function,
use <filename>d.getVar("X", True)</filename>.
Or, for more complicated expressions, use
<filename>d.expand()</filename>.
</note>
</section>
<section id='python-functions'>
@@ -1185,18 +1107,10 @@
<title>Passing Information Into the Build Task Environment</title>
<para>
When running a task, BitBake tightly controls the shell execution
When running a task, BitBake tightly controls the execution
environment of the build tasks to make
sure unwanted contamination from the build machine cannot
influence the build.
<note>
By default, BitBake cleans the environment to include only those
things exported or listed in its whitelist to ensure that the build
environment is reproducible and consistent.
You can prevent this "cleaning" by setting the
<link linkend='var-BB_PRESERVE_ENV'><filename>BB_PRESERVE_ENV</filename></link>
variable.
</note>
Consequently, if you do want something to get passed into the
build task environment, you must take these two steps:
<orderedlist>
@@ -1204,16 +1118,14 @@
Tell BitBake to load what you want from the environment
into the datastore.
You can do so through the
<link linkend='var-BB_ENV_WHITELIST'><filename>BB_ENV_WHITELIST</filename></link>
and
<link linkend='var-BB_ENV_EXTRAWHITE'><filename>BB_ENV_EXTRAWHITE</filename></link>
variables.
variable.
For example, assume you want to prevent the build system from
accessing your <filename>$HOME/.ccache</filename>
directory.
The following command "whitelists" the environment variable
<filename>CCACHE_DIR</filename> causing BitBack to allow that
variable into the datastore:
The following command tells BitBake to load
<filename>CCACHE_DIR</filename> from the environment into
the datastore:
<literallayout class='monospaced'>
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE CCACHE_DIR"
</literallayout></para></listitem>
@@ -1262,6 +1174,12 @@
The previous example returns <filename>BAR</filename> from the original
execution environment.
</para>
<para>
By default, BitBake cleans the environment to include only those
things exported or listed in its whitelist to ensure that the build
environment is reproducible and consistent.
</para>
</section>
</section>
@@ -1316,8 +1234,6 @@
</para></listitem>
<listitem><para><emphasis>dirs:</emphasis>
Directories that should be created before the task runs.
The last directory listed will be used as the work directory
for the task.
</para></listitem>
<listitem><para><emphasis>lockfiles:</emphasis>
Specifies one or more lockfiles to lock while the task
@@ -1849,10 +1765,6 @@
<entry align="left"><filename>d.delVarFlags("X")</filename></entry>
<entry align="left">Deletes all the flags for the variable "X".</entry>
</row>
<row>
<entry align="left"><filename>d.expand(expression)</filename></entry>
<entry align="left">Expands variable references in the specified string expression.</entry>
</row>
</tbody>
</tgroup>
</informaltable>
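Several of the notes trimmed above concern OVERRIDES and conditional inheritance. For reference, a minimal OVERRIDES sketch in the underscore syntax this manual version documents; TESTVAR and the override names are placeholders:

    OVERRIDES = "architecture:os:machine"
    TESTVAR = "default value"
    TESTVAR_machine = "machine-specific value"   # selected because "machine" appears in OVERRIDES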

View File

@@ -1142,14 +1142,14 @@
to "hide" these <filename>.bb</filename> and
<filename>.bbappend</filename> files.
BitBake ignores any recipe or recipe append files that
match any of the expressions.
match the expression.
It is as if BitBake does not see them at all.
Consequently, matching files are not parsed or otherwise
used by BitBake.</para>
<para>
The values you provide are passed to Python's regular
The value you provide is passed to Python's regular
expression compiler.
The expressions are compared against the full paths to
The expression is compared against the full paths to
the files.
For complete syntax information, see Python's
documentation at
@@ -1165,16 +1165,18 @@
BBMASK = "meta-ti/recipes-misc/"
</literallayout>
If you want to mask out multiple directories or recipes,
you can specify multiple regular expression fragments.
use the vertical bar to separate the regular expression
fragments.
This next example masks out multiple directories and
individual recipes:
<literallayout class='monospaced'>
BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
BBMASK += "/meta-oe/recipes-support/"
BBMASK += "/meta-foo/.*/openldap"
BBMASK += "opencv.*\.bbappend"
BBMASK += "lzma"
BBMASK = "meta-ti/recipes-misc/|meta-ti/recipes-ti/packagegroup/"
BBMASK .= "|.*meta-oe/recipes-support/"
BBMASK .= "|.*openldap"
BBMASK .= "|.*opencv"
BBMASK .= "|.*lzma"
</literallayout>
Notice how the vertical bar is used to append the fragments.
<note>
When specifying a directory name, use the trailing
slash character to ensure you match just that directory

View File

@@ -56,7 +56,7 @@
-->
<copyright>
<year>2004-2016</year>
<year>2004-2015</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>

View File

@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.31.0"
__version__ = "1.28.0"
import sys
if sys.version_info < (2, 7, 3):
@@ -70,8 +70,6 @@ logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)
mainlogger = logging.getLogger("BitBake.Main")
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
@@ -81,26 +79,26 @@ sys.modules['bb.fetch'] = sys.modules['bb.fetch2']
# Messaging convenience functions
def plain(*args):
mainlogger.plain(''.join(args))
logger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, basestring):
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
mainlogger.debug(lvl, ''.join(args))
logger.debug(lvl, ''.join(args))
def note(*args):
mainlogger.info(''.join(args))
logger.info(''.join(args))
def warn(*args):
mainlogger.warning(''.join(args))
logger.warn(''.join(args))
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
logger.error(''.join(args), extra=kwargs)
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
logger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
def deprecated(func, name=None, advice=""):

View File

@@ -61,13 +61,8 @@ def reset_cache():
# in all namespaces, hence we add them to __builtins__.
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
builtins['bb'] = bb
builtins['os'] = os
__builtins__['bb'] = bb
__builtins__['os'] = os
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
@@ -161,18 +156,13 @@ class LogTee(object):
def flush(self):
self.outfile.flush()
#
# pythonexception allows the python exceptions generated to be raised
# as the real exceptions (not FuncFailed) and without a backtrace at the
# origin of the failure.
#
def exec_func(func, d, dirs = None, pythonexception=False):
def exec_func(func, d, dirs = None):
"""Execute a BB 'function'"""
body = d.getVar(func, False)
if not body:
if body is None:
logger.warning("Function %s doesn't exist", func)
logger.warn("Function %s doesn't exist", func)
return
flags = d.getVarFlags(func)
@@ -234,18 +224,22 @@ def exec_func(func, d, dirs = None, pythonexception=False):
with bb.utils.fileslocked(lockfiles):
if ispython:
exec_func_python(func, d, runfile, cwd=adir, pythonexception=pythonexception)
exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)
_functionfmt = """
def {function}(d):
{body}
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
code = _functionfmt.format(function=func)
bbfile = d.getVar('FILE', True)
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
bb.data.emit_func_python(func, script, d)
@@ -260,18 +254,11 @@ def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
bb.debug(2, "Executing python function %s" % func)
try:
text = "def %s(d):\n%s" % (func, d.getVar(func, False))
fn = d.getVarFlag(func, "filename", False)
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated", pythonexception=pythonexception)
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
except (bb.parse.SkipRecipe, bb.build.FuncFailed):
raise
except:
if pythonexception:
raise
raise FuncFailed(func, None)
finally:
bb.debug(2, "Python function %s finished" % func)
@@ -290,8 +277,9 @@ bb_exit_handler() {
case $ret in
0) ;;
*) case $BASH_VERSION in
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from '$BASH_COMMAND'";;
"") echo "WARNING: exit code $ret from a shell command.";;
*) echo "WARNING: ${BASH_SOURCE[0]}:${BASH_LINENO[0]} exit $ret from
\"$BASH_COMMAND\"";;
esac
exit $ret
esac
@@ -331,7 +319,7 @@ exit $ret
os.chmod(runfile, 0775)
cmd = runfile
if d.getVarFlag(func, 'fakeroot', False):
if d.getVarFlag(func, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
@@ -406,7 +394,7 @@ def _exec_task(fn, task, d, quieterr):
Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
if not d.getVarFlag(task, 'task'):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -545,7 +533,7 @@ def _exec_task(fn, task, d, quieterr):
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
return 0
@@ -553,7 +541,7 @@ def _exec_task(fn, task, d, quieterr):
def exec_task(fn, task, d, profile = False):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors", False) is not None:
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
if profile:
@@ -594,10 +582,10 @@ def stamp_internal(taskname, d, file_name, baseonly=False):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP', True)
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
@@ -628,10 +616,10 @@ def stamp_cleanmask_internal(taskname, d, file_name):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stampclean[file_name]
stamp = d.stamp_base_clean[file_name].get(taskflagname) or d.stampclean[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMPCLEAN', True)
stamp = d.getVarFlag(taskflagname, 'stamp-base-clean', True) or d.getVar('STAMPCLEAN', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
@@ -758,7 +746,7 @@ def addtask(task, before, after, d):
bbtasks.append(task)
d.setVar('__BBTASKS', bbtasks)
existing = d.getVarFlag(task, "deps", False) or []
existing = d.getVarFlag(task, "deps") or []
if after is not None:
# set up deps for function
for entry in after.split():
@@ -768,7 +756,7 @@ def addtask(task, before, after, d):
if before is not None:
# set up things that depend on this func
for entry in before.split():
existing = d.getVarFlag(entry, "deps", False) or []
existing = d.getVarFlag(entry, "deps") or []
if task not in existing:
d.setVarFlag(entry, "deps", [task] + existing)
@@ -783,7 +771,7 @@ def deltask(task, d):
d.delVarFlag(task, 'deps')
for bbtask in d.getVar('__BBTASKS', False) or []:
deps = d.getVarFlag(bbtask, 'deps', False) or []
deps = d.getVarFlag(bbtask, 'deps') or []
if task in deps:
deps.remove(task)
d.setVarFlag(bbtask, 'deps', deps)

View File

@@ -43,7 +43,7 @@ except ImportError:
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "149"
__cache_version__ = "148"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -85,8 +85,8 @@ class RecipeInfoCommon(object):
return out_dict
@classmethod
def getvar(cls, var, metadata, expand = True):
return metadata.getVar(var, expand) or ''
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
class CoreRecipeInfo(RecipeInfoCommon):
@@ -99,7 +99,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('BB_DONT_CACHE', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
@@ -129,6 +129,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_base_clean = self.flaglist('stamp-base-clean', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -140,11 +142,10 @@ class CoreRecipeInfo(RecipeInfoCommon):
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
self.inherits = self.getvar('__inherit_cache', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)
@classmethod
def init_cacheData(cls, cachedata):
@@ -157,6 +158,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.stamp = {}
cachedata.stampclean = {}
cachedata.stamp_base = {}
cachedata.stamp_base_clean = {}
cachedata.stamp_extrainfo = {}
cachedata.file_checksums = {}
cachedata.fn_provides = {}
@@ -180,7 +183,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
cachedata.extradepsfunc = {}
def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
@@ -190,6 +192,8 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stampclean[fn] = self.stampclean
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_base_clean[fn] = self.stamp_base_clean
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
cachedata.file_checksums[fn] = self.file_checksums
@@ -216,8 +220,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
rprovides += self.rprovides_pkg[package]
for rprovide in rprovides:
if fn not in cachedata.rproviders[rprovide]:
cachedata.rproviders[rprovide].append(fn)
cachedata.rproviders[rprovide].append(fn)
for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)
@@ -248,7 +251,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.extradepsfunc[fn] = self.extradepsfunc
@@ -339,7 +341,7 @@ class Cache(object):
value = pickled.load()
except Exception:
break
if key in self.depends_cache:
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
@@ -755,14 +757,13 @@ class MultiProcessCache(object):
self.cachedata = self.create_cachedata()
self.cachedata_extras = self.create_cachedata()
def init_cache(self, d, cache_file_name=None):
def init_cache(self, d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return
bb.utils.mkdirhier(cachedir)
self.cachefile = os.path.join(cachedir,
cache_file_name or self.__class__.cache_file_name)
self.cachefile = os.path.join(cachedir, self.__class__.cache_file_name)
logger.debug(1, "Using cache in '%s'", self.cachefile)
glf = bb.utils.lockfile(self.cachefile + ".lock")
@@ -786,7 +787,7 @@ class MultiProcessCache(object):
data = [{}]
return data
def save_extras(self):
def save_extras(self, d):
if not self.cachefile:
return
@@ -816,7 +817,7 @@ class MultiProcessCache(object):
if h not in dest[j]:
dest[j][h] = source[j][h]
def save_merge(self):
def save_merge(self, d):
if not self.cachefile:
return

View File

@@ -15,8 +15,6 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import glob
import operator
import os
import stat
import bb.utils
@@ -90,50 +88,3 @@ class FileChecksumCache(MultiProcessCache):
dest[0][h] = source[0][h]
else:
dest[0][h] = source[0][h]
def get_checksums(self, filelist, pn):
"""Get checksums for a list of files"""
def checksum_file(f):
try:
checksum = self.get_checksum(f)
except OSError as e:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
def checksum_dir(pth):
# Handle directories recursively
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue
pth = pth.split(":")[0]
if '*' in pth:
# Handle globs
for f in glob.glob(pth):
if os.path.isdir(f):
if not os.path.islink(f):
checksums.extend(checksum_dir(f))
else:
checksum = checksum_file(f)
checksums.append((f, checksum))
elif os.path.isdir(pth):
if not os.path.islink(pth):
checksums.extend(checksum_dir(pth))
else:
checksum = checksum_file(pth)
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums

View File

@@ -28,10 +28,6 @@ def check_indent(codestr):
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
if codestr[0] == "\n":
# Since we're adding a line, we need to remove one line of any empty padding
# to ensure line numbers are correct
codestr = codestr[1:]
return "if 1:\n" + codestr
return codestr
@@ -148,10 +144,6 @@ class CodeParserCache(MultiProcessCache):
return cacheline
def init_cache(self, d):
# Check if we already have the caches
if self.pythoncache:
return
MultiProcessCache.init_cache(self, d)
# cachedata gets re-assigned in the parent
@@ -167,11 +159,11 @@ codeparsercache = CodeParserCache()
def parser_cache_init(d):
codeparsercache.init_cache(d)
def parser_cache_save():
codeparsercache.save_extras()
def parser_cache_save(d):
codeparsercache.save_extras(d)
def parser_cache_savemerge():
codeparsercache.save_merge()
def parser_cache_savemerge(d):
codeparsercache.save_merge(d)
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
@@ -221,17 +213,6 @@ class PythonParser():
self.references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
self.execs |= parser.execs
for varname in parser.contains:
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
@@ -254,7 +235,6 @@ class PythonParser():
break
def __init__(self, name, log):
self.name = name
self.var_execs = set()
self.contains = {}
self.execs = set()
@@ -264,7 +244,7 @@ class PythonParser():
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
def parse_python(self, node, lineno=0, filename="<string>"):
def parse_python(self, node):
if not node or not node.strip():
return
@@ -286,9 +266,7 @@ class PythonParser():
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
for n in ast.walk(code):

View File

@@ -279,7 +279,6 @@ class CommandsSync:
mask = params[3]
return bb.event.set_UIHmask(handlerNum, llevel, debug_domains, mask)
setEventMask.needconfig = False
setEventMask.readonly = True
def setFeatures(self, command, params):
"""

View File

@@ -67,14 +67,6 @@ class CollectionError(bb.BBHandledException):
class state:
initial, parsing, running, shutdown, forceshutdown, stopped, error = range(7)
@classmethod
def get_name(cls, code):
for name in dir(cls):
value = getattr(cls, name)
if type(value) == type(cls.initial) and value == code:
return name
raise ValueError("Invalid status code: %s" % code)
class SkippedPackage:
def __init__(self, info = None, reason = None):
@@ -93,7 +85,7 @@ class SkippedPackage:
class CookerFeatures(object):
_feature_list = [HOB_EXTRA_CACHES, BASEDATASTORE_TRACKING, SEND_SANITYEVENTS] = range(3)
_feature_list = [HOB_EXTRA_CACHES, SEND_DEPENDS_TREE, BASEDATASTORE_TRACKING, SEND_SANITYEVENTS] = range(4)
def __init__(self):
self._features=set()
@@ -144,10 +136,6 @@ class BBCooker:
self.watcher.bbwatchedfiles = []
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
# If being called by something like tinfoil, we need to clean cached data
# which may now be invalid
bb.parse.__mtime_cache = {}
bb.parse.BBHandler.cached_statements = {}
self.initConfigurationData()
@@ -201,13 +189,13 @@ class BBCooker:
def config_notifications(self, event):
if not event.pathname in self.configwatcher.bbwatchedfiles:
return
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
if not event.path in self.inotify_modified_files:
self.inotify_modified_files.append(event.path)
self.baseconfig_valid = False
def notifications(self, event):
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
if not event.path in self.inotify_modified_files:
self.inotify_modified_files.append(event.path)
self.parsecache_valid = False
def add_filewatch(self, deps, watcher=None):
@@ -246,9 +234,9 @@ class BBCooker:
def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
bb.warn("Cooker received SIGTERM, shutting down...")
bb.warn("Cooker recieved SIGTERM, shutting down...")
elif signum == signal.SIGHUP:
bb.warn("Cooker received SIGHUP, shutting down...")
bb.warn("Cooker recieved SIGHUP, shutting down...")
self.state = state.forceshutdown
def setFeatures(self, features):
@@ -360,9 +348,15 @@ class BBCooker:
# set our handler's event processor
event = EventWriter(self) # self is the cooker here
# set up cooker features for this mock UI handler
# we need to write the dependency tree in the log
self.featureset.setFeature(CookerFeatures.SEND_DEPENDS_TREE)
# register the log file writer as UI Handler
bb.event.register_UIHhandler(EventLogWriteHandler())
#
# Copy of the data store which has been expanded.
# Used for firing events and accessing variables where expansion needs to be accounted for
@@ -650,8 +644,8 @@ class BBCooker:
# emit the metadata which isnt valid shell
data.expandKeys(envdata)
for e in envdata.keys():
if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
if data.getVarFlag( e, 'python', envdata ):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, True))
def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
@@ -723,15 +717,8 @@ class BBCooker:
depend_tree["packages"] = {}
depend_tree["rdepends-pkg"] = {}
depend_tree["rrecs-pkg"] = {}
depend_tree['providermap'] = {}
depend_tree["layer-priorities"] = self.recipecache.bbfile_config_priorities
for name, fn in taskdata.get_providermap().iteritems():
pn = self.recipecache.pkg_fn[fn]
if name != pn:
version = "%s:%s-%s" % self.recipecache.pkg_pepvpr[fn]
depend_tree['providermap'][name] = (pn, version)
for task in xrange(len(rq.rqdata.runq_fnid)):
taskname = rq.rqdata.runq_task[task]
fnid = rq.rqdata.runq_fnid[task]
@@ -906,7 +893,7 @@ class BBCooker:
logger.info("PN build list saved to 'pn-buildlist'")
for pn in depgraph["depends"]:
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s" [style=solid]' % (pn, depend), file=depends_file)
print('"%s" -> "%s"' % (pn, depend), file=depends_file)
for pn in depgraph["rdepends-pn"]:
for rdepend in depgraph["rdepends-pn"][pn]:
print('"%s" -> "%s" [style=dashed]' % (pn, rdepend), file=depends_file)
@@ -924,13 +911,13 @@ class BBCooker:
else:
print('"%s" [label="%s(%s) %s\\n%s"]' % (package, package, pn, version, fn), file=depends_file)
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s" [style=solid]' % (package, depend), file=depends_file)
print('"%s" -> "%s"' % (package, depend), file=depends_file)
for package in depgraph["rdepends-pkg"]:
for rdepend in depgraph["rdepends-pkg"][package]:
print('"%s" -> "%s" [style=dashed]' % (package, rdepend), file=depends_file)
for package in depgraph["rrecs-pkg"]:
for rdepend in depgraph["rrecs-pkg"][package]:
print('"%s" -> "%s" [style=dotted]' % (package, rdepend), file=depends_file)
print('"%s" -> "%s" [style=dashed]' % (package, rdepend), file=depends_file)
print("}", file=depends_file)
logger.info("Package dependencies saved to 'package-depends.dot'")
@@ -1032,21 +1019,22 @@ class BBCooker:
def findFilesMatchingInDir(self, filepattern, directory):
"""
Searches for files containing the substring 'filepattern' which are children of
Searches for files matching the regex 'pattern' which are children of
'directory' in each BBPATH. i.e. to find all rootfs package classes available
to BitBake one could call findFilesMatchingInDir(self, 'rootfs_', 'classes')
or to find all machine configuration files one could call:
findFilesMatchingInDir(self, '.conf', 'conf/machine')
findFilesMatchingInDir(self, 'conf/machines', 'conf')
"""
matches = []
p = re.compile(re.escape(filepattern))
bbpaths = self.data.getVar('BBPATH', True).split(':')
for path in bbpaths:
dirpath = os.path.join(path, directory)
if os.path.exists(dirpath):
for root, dirs, files in os.walk(dirpath):
for f in files:
if filepattern in f:
if p.search(f):
matches.append(f)
if matches:
@@ -1084,7 +1072,7 @@ class BBCooker:
for pfn in self.recipecache.pkg_fn:
inherits = self.recipecache.inherits.get(pfn, None)
if inherits and klass in inherits:
if inherits and inherits.count(klass) > 0:
pkg_list.append(self.recipecache.pkg_fn[pfn])
return pkg_list
@@ -1107,6 +1095,28 @@ class BBCooker:
tree = self.generatePkgDepTreeData(pkgs, 'build')
bb.event.fire(bb.event.TargetsTreeGenerated(tree), self.data)
def buildWorldTargetList(self):
"""
Build package list for "bitbake world"
"""
parselog.debug(1, "collating packages for \"world\"")
for f in self.recipecache.possible_world:
terminal = True
pn = self.recipecache.pkg_fn[f]
for p in self.recipecache.pn_provides[pn]:
if p.startswith('virtual/'):
parselog.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in self.recipecache.providers[p]:
if self.recipecache.pkg_fn[pf] != pn:
parselog.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:
self.recipecache.world_target.add(pn)
def interactiveMode( self ):
"""Drop off into a shell"""
try:
@@ -1343,7 +1353,7 @@ class BBCooker:
failures += len(exc.args)
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand(str(exc))
self.command.finishAsyncCommand()
return False
if not retval:
@@ -1379,7 +1389,7 @@ class BBCooker:
failures += len(exc.args)
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand(str(exc))
self.command.finishAsyncCommand()
return False
if not retval:
@@ -1427,21 +1437,14 @@ class BBCooker:
dump = {}
for k in self.data.keys():
try:
expand = True
flags = self.data.getVarFlags(k)
if flags and "func" in flags and "python" in flags:
expand = False
v = self.data.getVar(k, expand)
v = self.data.getVar(k, True)
if not k.startswith("__") and not isinstance(v, bb.data_smart.DataSmart):
dump[k] = {
'v' : v ,
'history' : self.data.varhistory.variable(k),
}
for d in flaglist:
if flags and d in flags:
dump[k][d] = flags[d]
else:
dump[k][d] = None
dump[k][d] = self.data.getVarFlag(k, d)
except Exception as e:
print(e)
return dump
@@ -1503,8 +1506,6 @@ class BBCooker:
# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
if p in bb.parse.BBHandler.cached_statements:
del bb.parse.BBHandler.cached_statements[p]
self.inotify_modified_files = []
if not self.baseconfig_valid:
@@ -1576,7 +1577,7 @@ class BBCooker:
parselog.warn("Explicit target \"%s\" is in ASSUME_PROVIDED, ignoring" % pkg)
if 'world' in pkgs_to_build:
bb.providers.buildWorldTargetList(self.recipecache)
self.buildWorldTargetList()
pkgs_to_build.remove('world')
for t in self.recipecache.world_target:
pkgs_to_build.append(t)
@@ -1642,9 +1643,6 @@ class BBCooker:
else:
self.state = state.shutdown
if self.parser:
self.parser.shutdown(clean=not force, force=force)
def finishcommand(self):
self.state = state.initial
@@ -1762,32 +1760,18 @@ class CookerCollectFiles(object):
globbed = glob.glob(f)
if not globbed and os.path.exists(f):
globbed = [f]
# glob gives files in order on disk. Sort to be deterministic.
for g in sorted(globbed):
for g in globbed:
if g not in newfiles:
newfiles.append(g)
bbmask = config.getVar('BBMASK', True)
if bbmask:
# First validate the individual regular expressions and ignore any
# that do not compile
bbmasks = []
for mask in bbmask.split():
try:
re.compile(mask)
bbmasks.append(mask)
except sre_constants.error:
collectlog.critical("BBMASK contains an invalid regular expression, ignoring: %s" % mask)
# Then validate the combined regular expressions. This should never
# fail, but better safe than sorry...
bbmask = "|".join(bbmasks)
try:
bbmask_compiled = re.compile(bbmask)
except sre_constants.error:
collectlog.critical("BBMASK is not a valid regular expression, ignoring: %s" % bbmask)
bbmask = None
collectlog.critical("BBMASK is not a valid regular expression, ignoring.")
return list(newfiles), 0
bbfiles = []
bbappend = []
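
The BBMASK handling above validates each mask on its own before joining them into one expression; a minimal standalone sketch of that two-stage validation (with an invented mask string, and re.error used in place of sre_constants.error, which is the same exception class) could look like:

import re

def compile_bbmask(bbmask):
    # First drop any individual expression that does not compile,
    # then compile the surviving expressions as a single alternation.
    bbmasks = []
    for mask in bbmask.split():
        try:
            re.compile(mask)
            bbmasks.append(mask)
        except re.error:
            print("ignoring invalid BBMASK entry: %s" % mask)
    if not bbmasks:
        return None
    try:
        return re.compile("|".join(bbmasks))
    except re.error:
        print("combined BBMASK did not compile, ignoring")
        return None

# e.g. compile_bbmask("meta-foo/recipes-bad/ [broken(") drops the second,
# invalid entry and returns a regex built from the first one only.
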
@@ -1964,17 +1948,9 @@ class Parser(multiprocessing.Process):
def parse(self, filename, appends, caches_array):
try:
# Record the filename we're parsing into any events generated
def parse_filter(self, record):
record.taskpid = bb.event.worker_pid
record.fn = filename
return True
# Reset our environment and handlers to the original settings
bb.utils.set_context(self.context.copy())
bb.event.set_class_handlers(self.handlers.copy())
bb.event.LogHandler.filter = parse_filter
return True, bb.cache.Cache.parse(filename, appends, self.cfg, caches_array)
except Exception as exc:
tb = sys.exc_info()[2]
@@ -2005,6 +1981,8 @@ class CookerParser(object):
self.total = len(filelist)
self.current = 0
self.num_processes = int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS", True) or
multiprocessing.cpu_count())
self.process_names = []
self.bb_cache = bb.cache.Cache(self.cfgdata, self.cfghash, cooker.caches_array)
@@ -2019,9 +1997,6 @@ class CookerParser(object):
self.toparse = self.total - len(self.fromcache)
self.progress_chunk = max(self.toparse / 100, 1)
self.num_processes = min(int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS", True) or
multiprocessing.cpu_count()), len(self.willparse))
self.start()
self.haveshutdown = False
@@ -2032,9 +2007,8 @@ class CookerParser(object):
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
def init():
Parser.cfg = self.cfgdata
bb.utils.set_process_name(multiprocessing.current_process().name)
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, args=(self.cfgdata,), exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, args=(self.cfgdata,), exitpriority=1)
self.feeder_quit = multiprocessing.Queue(maxsize=1)
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
@@ -2066,7 +2040,7 @@ class CookerParser(object):
bb.event.fire(event, self.cfgdata)
self.feeder_quit.put(None)
for process in self.processes:
self.parser_quit.put(None)
self.jobs.put(None)
else:
self.feeder_quit.put('cancel')
@@ -2087,8 +2061,8 @@ class CookerParser(object):
sync = threading.Thread(target=self.bb_cache.sync)
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge()
bb.fetch.fetcher_parse_done()
bb.codeparser.parser_cache_savemerge(self.cooker.data)
bb.fetch.fetcher_parse_done(self.cooker.data)
if self.cooker.configuration.profile:
profiles = []
for i in self.process_names:
@@ -2151,11 +2125,16 @@ class CookerParser(object):
logger.error('ExpansionError during parsing %s: %s', value.recipe, str(exc))
self.shutdown(clean=False)
return False
except SyntaxError as exc:
self.error += 1
logger.error('Unable to parse %s', exc.recipe)
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
etype, value, tb = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s' % value.recipe,
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
else:
# Most likely, an exception occurred during raising an exception


@@ -137,7 +137,6 @@ class CookerConfiguration(object):
self.force = False
self.profile = False
self.nosetscene = False
self.setsceneonly = False
self.invalidate_stamp = False
self.dump_signatures = []
self.dry_run = False
@@ -182,7 +181,7 @@ def catch_parse_error(func):
parselog.critical(traceback.format_exc())
parselog.critical("Unable to parse %s: %s" % (fn, exc))
sys.exit(1)
except bb.data_smart.ExpansionError as exc:
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
import traceback
bbdir = os.path.dirname(__file__) + os.sep
@@ -192,10 +191,7 @@ def catch_parse_error(func):
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
if not fn.startswith(bbdir):
break
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
sys.exit(1)
except bb.parse.ParseError as exc:
parselog.critical(str(exc))
parselog.critical("Unable to parse %s", fn, exc_info=(exc_class, exc, tb))
sys.exit(1)
return wrapped
@@ -293,8 +289,6 @@ class CookerDataBuilder(object):
parselog.debug(2, "Adding layer %s", layer)
if 'HOME' in approved and '~' in layer:
layer = os.path.expanduser(layer)
if layer.endswith('/'):
layer = layer.rstrip('/')
data.setVar('LAYERDIR', layer)
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
data.expandVarref('LAYERDIR')
@@ -323,9 +317,7 @@ class CookerDataBuilder(object):
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask", True) or "").split())
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)


@@ -178,8 +178,8 @@ def createDaemon(function, logfile):
# os.dup2(0, 2) # standard error (2)
si = open('/dev/null', 'r')
so = open(logfile, 'w')
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so


@@ -107,7 +107,7 @@ def setVarFlag(var, flag, flagvalue, d):
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag, False)
return d.getVarFlag(var, flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
@@ -182,12 +182,12 @@ def inheritFromOS(d, savedenv, permitted):
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
func = d.getVarFlag(var, "func", False)
if d.getVarFlag(var, 'python', False) and func:
if d.getVarFlag(var, "python"):
return False
export = d.getVarFlag(var, "export", False)
unexport = d.getVarFlag(var, "unexport", False)
export = d.getVarFlag(var, "export")
unexport = d.getVarFlag(var, "unexport")
func = d.getVarFlag(var, "func")
if not all and not export and not unexport and not func:
return False
@@ -227,7 +227,6 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if func:
# NOTE: should probably check for unbalanced {} within the var
val = val.rstrip('\n')
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
return 1
@@ -245,7 +244,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func", False))
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
@@ -254,8 +253,8 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export', False) and
not d.getVarFlag(key, 'unexport', False))
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
def exported_vars(d):
for key in exported_keys(d):
@@ -270,7 +269,7 @@ def exported_vars(d):
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func", False))
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False)
@@ -284,7 +283,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
if d.getVarFlag(dep, "func") and not d.getVarFlag(dep, "python"):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
@@ -298,7 +297,7 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
def write_func(func, o, call = False):
body = d.getVar(func, False)
body = d.getVar(func, True)
if not body.startswith("def"):
body = _functionfmt.format(function=func, body=body)
@@ -308,7 +307,7 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
write_func(func, o, True)
pp = bb.codeparser.PythonParser(func, logger)
pp.parse_python(d.getVar(func, False))
pp.parse_python(d.getVar(func, True))
newdeps = pp.execs
newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())
seen = set()
@@ -317,10 +316,10 @@ def emit_func_python(func, o=sys.__stdout__, d = init()):
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func", False) and d.getVarFlag(dep, "python", False):
if d.getVarFlag(dep, "func") and d.getVarFlag(dep, "python"):
write_func(dep, o)
pp = bb.codeparser.PythonParser(dep, logger)
pp.parse_python(d.getVar(dep, False))
pp.parse_python(d.getVar(dep, True))
newdeps |= pp.execs
newdeps |= set((d.getVarFlag(dep, "vardeps", True) or "").split())
newdeps -= seen
@@ -339,7 +338,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "vardepvalueexclude", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "vardepvalueexclude", "postfuncs", "prefuncs"]) or {}
vardeps = varflags.get("vardeps")
value = d.getVar(key, False)
@@ -362,27 +361,27 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
if value and "\t" in value:
logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
if parsedvar.value and "\t" in parsedvar.value:
logger.warn("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE", True)))
parser.parse_python(parsedvar.value)
deps = deps | parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
else:
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
if vardeps is None:
parser.log.flush()
if "prefuncs" in varflags:
deps = deps | set(varflags["prefuncs"].split())
if "postfuncs" in varflags:
deps = deps | set(varflags["postfuncs"].split())
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
else:
parser = d.expandWithRefs(value, key)
deps |= parser.references
@@ -407,8 +406,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= set((vardeps or "").split())
deps -= set(varflags.get("vardepsexclude", "").split())
except Exception as e:
bb.warn("Exception during build_dependencies for %s" % key)
raise
raise bb.data_smart.ExpansionError(key, None, e)
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
@@ -416,7 +414,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
def generate_dependencies(d):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
varflagsexcl = d.getVar('BB_SIGNATURE_EXCLUDE_FLAGS', True)
deps = {}


@@ -40,7 +40,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t ]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def infer_caller_details(loginfo, parent = False, varval = True):
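
The two expansion regular expressions above distinguish plain ${VAR} references from inline ${@...} Python expressions; a quick standalone check (using the variant that also excludes ':' and a space, and an invented FILES-style value) behaves as follows:

import re

expand_var_regexp = re.compile(r"\${[^{}@\n\t :]+}")
expand_python_regexp = re.compile(r"\${@.+?}")

value = 'install -d ${D}${bindir} # then ${@"debug" if 1 else "release"}'
print(expand_var_regexp.findall(value))     # ['${D}', '${bindir}']
print(expand_python_regexp.findall(value))  # ['${@"debug" if 1 else "release"}']
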
@@ -147,7 +147,7 @@ class DataContext(dict):
def __missing__(self, key):
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func', False):
if value is None or self.metadata.getVarFlag(key, 'func'):
raise KeyError(key)
else:
return value
@@ -384,12 +384,7 @@ class DataSmart(MutableMapping):
olds = s
try:
s = __expand_var_regexp__.sub(varparse.var_sub, s)
try:
s = __expand_python_regexp__.sub(varparse.python_sub, s)
except SyntaxError as e:
# Likely unmatched brackets, just don't expand the expression
if e.msg != "EOL while scanning string literal":
raise
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
except ExpansionError:
@@ -480,7 +475,7 @@ class DataSmart(MutableMapping):
base = match.group('base')
keyword = match.group("keyword")
override = match.group('add')
l = self.getVarFlag(base, keyword, False) or []
l = self.getVarFlag(base, keyword) or []
l.append([value, override])
self.setVarFlag(base, keyword, l, ignore=True)
# And cause that to be recorded:
@@ -552,7 +547,7 @@ class DataSmart(MutableMapping):
# aka pay the cookie monster
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
while override:
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
if [var, override] not in self.overridedata[shortvar]:
@@ -566,7 +561,7 @@ class DataSmart(MutableMapping):
if len(shortvar) == 0:
override = None
def getVar(self, var, expand, noweakdefault=False, parsing=False):
def getVar(self, var, expand=False, noweakdefault=False, parsing=False):
return self.getVarFlag(var, "_content", expand, noweakdefault, parsing)
def renameVar(self, key, newkey, **loginfo):
@@ -582,11 +577,11 @@ class DataSmart(MutableMapping):
self.setVar(newkey, val, ignore=True, parsing=True)
for i in (__setvar_keyword__):
src = self.getVarFlag(key, i, False)
src = self.getVarFlag(key, i)
if src is None:
continue
dest = self.getVarFlag(newkey, i, False) or []
dest = self.getVarFlag(newkey, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest, ignore=True)
@@ -626,7 +621,7 @@ class DataSmart(MutableMapping):
if '_' in var:
override = var[var.rfind('_')+1:]
shortvar = var[:var.rfind('_')]
while override and override.islower():
while override:
try:
if shortvar in self.overridedata:
# Force CoW by recreating the list first
@@ -663,7 +658,7 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"] = set()
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand, noweakdefault=False, parsing=False):
def getVarFlag(self, var, flag, expand=False, noweakdefault=False, parsing=False):
local_var = self._findVar(var)
value = None
if flag == "_content" and var in self.overridedata and not parsing:
@@ -692,7 +687,7 @@ class DataSmart(MutableMapping):
match = active[a]
del active[a]
if match:
value = self.getVar(match, False)
value = self.getVar(match)
if local_var is not None and value is None:
if flag in local_var:
@@ -917,7 +912,7 @@ class DataSmart(MutableMapping):
yield k
def __len__(self):
return len(frozenset(iter(self)))
return len(frozenset(self))
def __getitem__(self, item):
value = self.getVar(item, False)
@@ -962,7 +957,7 @@ class DataSmart(MutableMapping):
if key == "__BBANONFUNCS":
for i in bb_list:
value = d.getVar(i, False) or ""
value = d.getVar(i, True) or ""
data.update({i:value})
data_str = str([(k, data[k]) for k in sorted(data.keys())])


@@ -31,7 +31,6 @@ except ImportError:
import logging
import atexit
import traceback
import ast
import bb.utils
import bb.compat
import bb.exceptions
@@ -72,16 +71,11 @@ _catchall_handlers = {}
_eventfilter = None
_uiready = False
if hasattr(__builtins__, '__setitem__'):
builtins = __builtins__
else:
builtins = __builtins__.__dict__
def execute_handler(name, handler, event, d):
event.data = d
addedd = False
if 'd' not in builtins:
builtins['d'] = d
if 'd' not in __builtins__:
__builtins__['d'] = d
addedd = True
try:
ret = handler(event)
@@ -99,7 +93,7 @@ def execute_handler(name, handler, event, d):
finally:
del event.data
if addedd:
del builtins['d']
del __builtins__['d']
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
@@ -183,7 +177,7 @@ def fire_from_worker(event, d):
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler, mask=None, filename=None, lineno=None):
def register(name, handler, mask=None):
"""Register an Event handler"""
# already registered
@@ -195,15 +189,7 @@ def register(name, handler, mask=None, filename=None, lineno=None):
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
@@ -609,10 +595,7 @@ class LogHandler(logging.Handler):
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.exc_info = None
fire(record, None)


@@ -29,10 +29,11 @@ from __future__ import absolute_import
from __future__ import print_function
import os, re
import signal
import glob
import logging
import urllib
import urlparse
import collections
import operator
import bb.persist_data, bb.utils
import bb.checksum
from bb import data
@@ -298,7 +299,7 @@ class URI(object):
if self.query else '')
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
ret = {}
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim)]:
ret[k] = v
return ret
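
_param_str_split above is small enough to restate as a runnable standalone helper (dropping self, and feeding it an invented ';'-separated key=value parameter string as a usage example):

import collections

def param_str_split(string, elmdelim, kvdelim="="):
    # Split "k=v<delim>k2=v2" style strings, preserving insertion order.
    ret = collections.OrderedDict()
    for k, v in (x.split(kvdelim, 1) for x in string.split(elmdelim)):
        ret[k] = v
    return ret

print(param_str_split("protocol=https;branch=master;destsuffix=git", ";"))
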
@@ -328,7 +329,7 @@ class URI(object):
def path(self, path):
self._path = path
if not path or re.compile("^/").match(path):
if re.compile("^/").match(path):
self.relative = False
else:
self.relative = True
@@ -376,12 +377,9 @@ def decodeurl(url):
if locidx != -1 and type.lower() != 'file':
host = location[:locidx]
path = location[locidx:]
elif type.lower() == 'file':
else:
host = ""
path = location
else:
host = location
path = ""
if user:
m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
if m:
@@ -391,7 +389,7 @@ def decodeurl(url):
user = ''
pswd = ''
p = collections.OrderedDict()
p = {}
if parm:
for s in parm.split(';'):
if s:
@@ -517,13 +515,13 @@ def fetcher_init(d):
if hasattr(m, "init"):
m.init(d)
def fetcher_parse_save():
_checksum_cache.save_extras()
def fetcher_parse_save(d):
_checksum_cache.save_extras(d)
def fetcher_parse_done():
_checksum_cache.save_merge()
def fetcher_parse_done(d):
_checksum_cache.save_merge(d)
def fetcher_compare_revisions():
def fetcher_compare_revisions(d):
"""
Compare the revisions in the persistent cache with current values and
return true/false on whether they've changed.
@@ -576,10 +574,10 @@ def verify_checksum(ud, d, precomputed={}):
else:
sha256data = bb.utils.sha256_file(ud.localpath)
if ud.method.recommends_checksum(ud) and not ud.md5_expected and not ud.sha256_expected:
if ud.method.recommends_checksum(ud):
# If strict checking enabled and neither sum defined, raise error
strict = d.getVar("BB_STRICT_CHECKSUM", True) or "0"
if strict == "1":
if (strict == "1") and not (ud.md5_expected or ud.sha256_expected):
logger.error('No checksum specified for %s, please add at least one to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"' %
(ud.localpath, ud.md5_name, md5data,
@@ -587,22 +585,34 @@ def verify_checksum(ud, d, precomputed={}):
raise NoChecksumError('Missing SRC_URI checksum', ud.url)
# Log missing sums so user can more easily add them
logger.warning('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
logger.warning('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
if not ud.md5_expected:
logger.warn('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data)
if not ud.sha256_expected:
logger.warn('Missing sha256 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
ud.localpath, ud.sha256_name, sha256data)
md5mismatch = False
sha256mismatch = False
if ud.md5_expected != md5data:
md5mismatch = True
if ud.sha256_expected != sha256data:
sha256mismatch = True
# We want to alert the user if a checksum is defined in the recipe but
# it does not match.
msg = ""
mismatch = False
if ud.md5_expected and ud.md5_expected != md5data:
if md5mismatch and ud.md5_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'md5', md5data, ud.md5_expected)
mismatch = True;
if ud.sha256_expected and ud.sha256_expected != sha256data:
if sha256mismatch and ud.sha256_expected:
msg = msg + "\nFile: '%s' has %s checksum %s when %s was expected" % (ud.localpath, 'sha256', sha256data, ud.sha256_expected)
mismatch = True;
@@ -627,9 +637,6 @@ def verify_donestamp(ud, d, origud=None):
Returns True, if the donestamp exists and is valid, False otherwise. When
returning False, any existing done stamps are removed.
"""
if not ud.needdonestamp:
return True
if not os.path.exists(ud.donestamp):
return False
@@ -660,9 +667,9 @@ def verify_donestamp(ud, d, origud=None):
# files to those containing the checksums.
if not isinstance(e, EOFError):
# Ignore errors, they aren't fatal
logger.warning("Couldn't load checksums from donestamp %s: %s "
"(msg: %s)" % (ud.donestamp, type(e).__name__,
str(e)))
logger.warn("Couldn't load checksums from donestamp %s: %s "
"(msg: %s)" % (ud.donestamp, type(e).__name__,
str(e)))
try:
checksums = verify_checksum(ud, d, precomputed_checksums)
@@ -676,10 +683,9 @@ def verify_donestamp(ud, d, origud=None):
except ChecksumError as e:
# Checksums failed to verify, trigger re-download and remove the
# incorrect stamp file.
logger.warning("Checksum mismatch for local file %s\n"
"Cleaning and trying again." % ud.localpath)
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
logger.warn("Checksum mismatch for local file %s\n"
"Cleaning and trying again." % ud.localpath)
rename_bad_checksum(ud, e.checksum)
bb.utils.remove(ud.donestamp)
return False
@@ -689,9 +695,6 @@ def update_stamp(ud, d):
The donestamp is a stamp file indicating that the whole fetch is done;
this function updates the stamp after verifying the checksum
"""
if not ud.needdonestamp:
return
if os.path.exists(ud.donestamp):
# Touch the done stamp file to show active use of the download
try:
@@ -700,21 +703,11 @@ def update_stamp(ud, d):
# Errors aren't fatal here
pass
else:
try:
checksums = verify_checksum(ud, d)
# Store the checksums for later re-verification against the recipe
with open(ud.donestamp, "wb") as cachefile:
p = pickle.Pickler(cachefile, pickle.HIGHEST_PROTOCOL)
p.dump(checksums)
except ChecksumError as e:
# Checksums failed to verify, trigger re-download and remove the
# incorrect stamp file.
logger.warning("Checksum mismatch for local file %s\n"
"Cleaning and trying again." % ud.localpath)
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
bb.utils.remove(ud.donestamp)
raise
checksums = verify_checksum(ud, d)
# Store the checksums for later re-verification against the recipe
with open(ud.donestamp, "wb") as cachefile:
p = pickle.Pickler(cachefile, pickle.HIGHEST_PROTOCOL)
p.dump(checksums)
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
@@ -725,7 +718,7 @@ def subprocess_setup():
def get_autorev(d):
# in the autorev case only, do not cache the src rev
if d.getVar('BB_SRCREV_POLICY', True) != "cache":
d.setVar('BB_DONT_CACHE', '1')
d.setVar('__BB_DONT_CACHE', '1')
return "AUTOINC"
def get_srcrev(d, method_name='sortable_revision'):
@@ -808,15 +801,13 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None):
'GIT_SSL_CAINFO',
'GIT_SMART_HTTP',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
'SOCKS5_USER', 'SOCKS5_PASSWD',
'DBUS_SESSION_BUS_ADDRESS']
'SOCKS5_USER', 'SOCKS5_PASSWD']
if not cleanup:
cleanup = []
origenv = d.getVar("BB_ORIGENV", False)
for var in exportvars:
val = d.getVar(var, True) or (origenv and origenv.getVar(var, True))
val = d.getVar(var, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
@@ -929,10 +920,6 @@ def rename_bad_checksum(ud, suffix):
def try_mirror_url(fetch, origud, ud, ld, check = False):
# Return of None or a value means we're finished
# False means try another url
if ud.lockfile and ud.lockfile != origud.lockfile:
lf = bb.utils.lockfile(ud.lockfile)
try:
if check:
found = ud.method.checkstatus(fetch, ud, ld)
@@ -959,9 +946,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
if origud.mirrortarball and os.path.basename(ud.localpath) == os.path.basename(origud.mirrortarball) \
and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
# Create donestamp in old format to avoid triggering a re-download
if ud.donestamp:
bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
open(ud.donestamp, 'w').close()
bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
open(ud.donestamp, 'w').close()
dest = os.path.join(dldir, os.path.basename(ud.localpath))
if not os.path.exists(dest):
os.symlink(ud.localpath, dest)
@@ -985,10 +971,9 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
except bb.fetch2.BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Mirror checksum failure for url %s (original url: %s)\nCleaning and trying again." % (ud.url, origud.url))
logger.warning(str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
logger.warn("Mirror checksum failure for url %s (original url: %s)\nCleaning and trying again." % (ud.url, origud.url))
logger.warn(str(e))
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
@@ -999,10 +984,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
except UnboundLocalError:
pass
return False
finally:
if ud.lockfile and ud.lockfile != origud.lockfile:
bb.utils.unlockfile(lf)
def try_mirrors(fetch, d, origud, mirrors, check = False):
"""
@@ -1033,7 +1014,7 @@ def trusted_network(d, url):
return True
pkgname = d.expand(d.getVar('PN', False))
trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False)
trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname)
if not trusted_hosts:
trusted_hosts = d.getVar('BB_ALLOWED_NETWORKS', True)
@@ -1047,7 +1028,6 @@ def trusted_network(d, url):
if not network:
return True
network = network.split(':')[0]
network = network.lower()
for host in trusted_hosts.split(" "):
@@ -1140,7 +1120,48 @@ def get_file_checksums(filelist, pn):
it proceeds
"""
return _checksum_cache.get_checksums(filelist, pn)
def checksum_file(f):
try:
checksum = _checksum_cache.get_checksum(f)
except OSError as e:
bb.warn("Unable to get checksum for %s SRC_URI entry %s: %s" % (pn, os.path.basename(f), e))
return None
return checksum
def checksum_dir(pth):
# Handle directories recursively
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:
fullpth = os.path.join(root, name)
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
for pth in filelist.split():
exist = pth.split(":")[1]
if exist == "False":
continue
pth = pth.split(":")[0]
if '*' in pth:
# Handle globs
for f in glob.glob(pth):
if os.path.isdir(f):
checksums.extend(checksum_dir(f))
else:
checksum = checksum_file(f)
checksums.append((f, checksum))
elif os.path.isdir(pth):
checksums.extend(checksum_dir(pth))
else:
checksum = checksum_file(pth)
checksums.append((pth, checksum))
checksums.sort(key=operator.itemgetter(1))
return checksums
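
get_file_checksums above walks a space-separated list of "path:exists" entries, expanding globs and recursing into directories; a trimmed-down, hypothetical sketch of just the entry parsing (the paths are invented, and a plain print stands in for the checksum cache lookup) would be:

import glob
import os

filelist = "/srv/files/foo.patch:True /srv/files/gone.patch:False /srv/files/*.cfg:True"
for entry in filelist.split():
    pth, exist = entry.split(":")[0], entry.split(":")[1]
    if exist == "False":
        continue  # recorded as non-existent, nothing to checksum
    targets = glob.glob(pth) if '*' in pth else [pth]
    for f in targets:
        if os.path.isdir(f):
            print("would recurse into %s" % f)
        else:
            print("would checksum %s" % f)
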
class FetchData(object):
@@ -1150,7 +1171,6 @@ class FetchData(object):
def __init__(self, url, d, localonly = False):
# localpath is the location of a downloaded result. If not set, the file is local.
self.donestamp = None
self.needdonestamp = True
self.localfile = ""
self.localpath = None
self.lockfile = None
@@ -1177,13 +1197,13 @@ class FetchData(object):
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
self.md5_expected = None
else:
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name, True)
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name)
if self.sha256_name in self.parm:
self.sha256_expected = self.parm[self.sha256_name]
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
self.sha256_expected = None
else:
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name, True)
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name)
self.ignore_checksums = False
self.names = self.parm.get("name",'default').split(',')
@@ -1201,7 +1221,7 @@ class FetchData(object):
raise NonLocalMethod()
if self.parm.get("proto", None) and "protocol" not in self.parm:
logger.warning('Consider updating %s recipe to use "protocol" not "proto" in SRC_URI.', d.getVar('PN', True))
logger.warn('Consider updating %s recipe to use "protocol" not "proto" in SRC_URI.', d.getVar('PN', True))
self.parm["protocol"] = self.parm.get("proto", None)
if hasattr(self.method, "urldata_init"):
@@ -1215,20 +1235,13 @@ class FetchData(object):
self.localpath = self.method.localpath(self, d)
dldir = d.getVar("DL_DIR", True)
if not self.needdonestamp:
return
# Note: .done and .lock files should always be in DL_DIR whereas localpath may not be.
if self.localpath and self.localpath.startswith(dldir):
basepath = self.localpath
elif self.localpath:
basepath = dldir + os.sep + os.path.basename(self.localpath)
elif self.basepath or self.basename:
basepath = dldir + os.sep + (self.basepath or self.basename)
else:
bb.fatal("Can't determine lock path for url %s" % url)
basepath = dldir + os.sep + (self.basepath or self.basename)
self.donestamp = basepath + '.done'
self.lockfile = basepath + '.lock'
@@ -1342,11 +1355,6 @@ class FetchMethod(object):
iterate = False
file = urldata.localpath
# Localpath can't deal with 'dir/*' entries, so it converts them to '.',
# but this must be corrected back when copying local files
if urldata.basename == '*' and file.endswith('/.'):
file = '%s/%s' % (file.rstrip('/.'), urldata.path)
try:
unpack = bb.utils.to_boolean(urldata.parm.get('unpack'), True)
except ValueError as exc:
@@ -1399,40 +1407,51 @@ class FetchMethod(object):
cmd = 'rpm2cpio.sh %s | cpio -id' % (file)
elif file.endswith('.deb') or file.endswith('.ipk'):
cmd = 'ar -p %s data.tar.gz | zcat | tar --no-same-owner -xpf -' % file
elif file.endswith('.tar.7z'):
cmd = '7z x -so %s | tar xf - ' % file
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if 'subdir' in urldata.parm:
unpackdir = '%s/%s' % (rootdir, urldata.parm.get('subdir'))
bb.utils.mkdirhier(unpackdir)
else:
unpackdir = rootdir
if not unpack or not cmd:
# If file == dest, then avoid any copies, as we already put the file into dest!
dest = os.path.join(unpackdir, os.path.basename(file))
if file != dest and not (os.path.exists(dest) and os.path.samefile(file, dest)):
destdir = '.'
# For file:// entries all intermediate dirs in path must be created at destination
if urldata.type == "file":
# A trailing '/' causes copying to the wrong place
urlpath = urldata.path.rstrip('/')
# We want files placed relative to cwd, so no leading '/'
urlpath = urlpath.lstrip('/')
if urlpath.find("/") != -1:
destdir = urlpath.rsplit("/", 1)[0] + '/'
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
cmd = 'cp -fpPR %s %s' % (file, destdir)
dest = os.path.join(rootdir, os.path.basename(file))
if (file != dest) and not (os.path.exists(dest) and os.path.samefile(file, dest)):
if os.path.isdir(file):
# If for example we're asked to copy file://foo/bar, we need to unpack the result into foo/bar
basepath = getattr(urldata, "basepath", None)
destdir = "."
if basepath and basepath.endswith("/"):
basepath = basepath.rstrip("/")
elif basepath:
basepath = os.path.dirname(basepath)
if basepath and basepath.find("/") != -1:
destdir = basepath[:basepath.rfind('/')]
destdir = destdir.strip('/')
if destdir != "." and not os.access("%s/%s" % (rootdir, destdir), os.F_OK):
os.makedirs("%s/%s" % (rootdir, destdir))
cmd = 'cp -fpPR %s %s/%s/' % (file, rootdir, destdir)
#cmd = 'tar -cf - -C "%d" -ps . | tar -xf - -C "%s/%s/"' % (file, rootdir, destdir)
else:
# The "destdir" handling was specifically done for FILESPATH
# items. So, only do so for file:// entries.
if urldata.type == "file" and urldata.path.find("/") != -1:
destdir = urldata.path.rsplit("/", 1)[0]
if urldata.parm.get('subdir') != None:
destdir = urldata.parm.get('subdir') + "/" + destdir
else:
if urldata.parm.get('subdir') != None:
destdir = urldata.parm.get('subdir')
else:
destdir = "."
bb.utils.mkdirhier("%s/%s" % (rootdir, destdir))
cmd = 'cp -f %s %s/%s/' % (file, rootdir, destdir)
if not cmd:
return
# Change to unpackdir before executing command
# Change to subdir before executing command
save_cwd = os.getcwd();
os.chdir(unpackdir)
os.chdir(rootdir)
if 'subdir' in urldata.parm:
newdir = ("%s/%s" % (rootdir, urldata.parm.get('subdir')))
bb.utils.mkdirhier(newdir)
os.chdir(newdir)
path = data.getVar('PATH', True)
if path:
@@ -1559,8 +1578,7 @@ class Fetch(object):
m = ud.method
localpath = ""
if ud.lockfile:
lf = bb.utils.lockfile(ud.lockfile)
lf = bb.utils.lockfile(ud.lockfile)
try:
self.d.setVar("BB_NO_NETWORK", network)
@@ -1597,14 +1615,13 @@ class Fetch(object):
except BBFetchException as e:
if isinstance(e, ChecksumError):
logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.warn("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
logger.debug(1, str(e))
if os.path.exists(ud.localpath):
rename_bad_checksum(ud, e.checksum)
rename_bad_checksum(ud, e.checksum)
elif isinstance(e, NoChecksumError):
raise
else:
logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.warn('Failed to fetch URL %s, attempting MIRRORS if available' % u)
logger.debug(1, str(e))
firsterr = e
# Remove any incomplete fetch
@@ -1627,8 +1644,7 @@ class Fetch(object):
raise
finally:
if ud.lockfile:
bb.utils.unlockfile(lf)
bb.utils.unlockfile(lf)
def checkstatus(self, urls=None):
"""
@@ -1670,6 +1686,9 @@ class Fetch(object):
ud = self.ud[u]
ud.setup_localpath(self.d)
if self.d.expand(self.localpath) is None:
continue
if ud.lockfile:
lf = bb.utils.lockfile(ud.lockfile)
@@ -1756,7 +1775,6 @@ from . import hg
from . import osc
from . import repo
from . import clearcase
from . import npm
methods.append(local.Local())
methods.append(wget.Wget())
@@ -1773,4 +1791,3 @@ methods.append(hg.Hg())
methods.append(osc.Osc())
methods.append(repo.Repo())
methods.append(clearcase.ClearCase())
methods.append(npm.Npm())


@@ -70,7 +70,6 @@ import errno
import os
import re
import bb
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
@@ -262,7 +261,23 @@ class Git(FetchMethod):
if ud.bareclone:
cloneflags += " --mirror"
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
# Versions of git prior to 1.7.9.2 have issues where foo.git and foo get confused
# and you end up with some horrible union of the two when you attempt to clone it
# The least invasive workaround seems to be a symlink to the real directory to
# fool git into ignoring any .git version that may also be present.
#
# The issue is fixed in more recent versions of git, so we can drop this hack in the future
# when that version becomes common enough.
clonedir = ud.clonedir
if not ud.path.endswith(".git"):
indirectiondir = destdir[:-1] + ".indirectionsymlink"
if os.path.exists(indirectiondir):
os.remove(indirectiondir)
bb.utils.mkdirhier(os.path.dirname(indirectiondir))
os.symlink(ud.clonedir, indirectiondir)
clonedir = indirectiondir
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, clonedir, destdir), d)
os.chdir(destdir)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d)
@@ -274,7 +289,7 @@ class Git(FetchMethod):
branchname = ud.branches[ud.names[0]]
runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
ud.revisions[ud.names[0]]), d)
runfetchcmd("%s branch --set-upstream %s origin/%s" % (ud.basecmd, branchname, \
runfetchcmd("%s branch %s --set-upstream-to origin/%s" % (ud.basecmd, branchname, \
branchname), d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
@@ -364,7 +379,7 @@ class Git(FetchMethod):
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX', True) or "(?P<pver>([0-9][\.|_]?)+)")
tagregex = re.compile(d.getVar('GITTAGREGEX', True) or "(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
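
latest_versionstring above matches tag names against UPSTREAM_CHECK_GITTAGREGEX (previously GITTAGREGEX), falling back to the default pattern shown; applied to a few invented tag names, the default behaves like this:

import re

tagregex = re.compile(r"(?P<pver>([0-9][\.|_]?)+)")
for tag in ("v2.25.1", "release-1_4_0", "latest"):
    m = tagregex.search(tag)
    # Print the extracted version portion, if any.
    print("%s -> %s" % (tag, m.group("pver") if m else "no version found"))
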


@@ -45,7 +45,6 @@ class Local(FetchMethod):
ud.decodedurl = urllib.unquote(ud.url.split("://")[1].split(";")[0])
ud.basename = os.path.basename(ud.decodedurl)
ud.basepath = ud.decodedurl
ud.needdonestamp = False
return
def localpath(self, urldata, d):


@@ -1,284 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' NPM implementation
The NPM fetcher is used to retrieve files from the npmjs repository
Usage in the recipe:
SRC_URI = "npm://registry.npmjs.org/;name=${PN};version=${PV}"
Supported SRC_URI options are:
- name
- version
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};ver=${PV}
The fetcher triggers off the existence of ud.localpath. If that exists and has the ".done" stamp, it is assumed the fetch is good/done
"""
import os
import sys
import urllib
import json
import subprocess
import signal
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import ChecksumError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import UnpackError
from bb.fetch2 import ParameterError
from distutils import spawn
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
# SIGPIPE errors are known issues with gzip/bash
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
class Npm(FetchMethod):
"""Class to fetch urls via 'npm'"""
def init(self, d):
pass
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with npm
"""
return ud.type in ['npm']
def debug(self, msg):
logger.debug(1, "NpmFetch: %s", msg)
def clean(self, ud, d):
logger.debug(2, "Calling cleanup %s" % ud.pkgname)
bb.utils.remove(ud.localpath, False)
bb.utils.remove(ud.pkgdatadir, True)
bb.utils.remove(ud.fullmirror, False)
def urldata_init(self, ud, d):
"""
init NPM specific variable within url data
"""
if 'downloadfilename' in ud.parm:
ud.basename = ud.parm['downloadfilename']
else:
ud.basename = os.path.basename(ud.path)
# can't call it ud.name, otherwise the fetcher base class will start doing sha1 stuff
# TODO: find a way to get a sha1/sha256 manifest of the pkg & all deps
ud.pkgname = ud.parm.get("name", None)
if not ud.pkgname:
raise ParameterError("NPM fetcher requires a name parameter", ud.url)
ud.version = ud.parm.get("version", None)
if not ud.version:
raise ParameterError("NPM fetcher requires a version parameter", ud.url)
ud.bbnpmmanifest = "%s-%s.deps.json" % (ud.pkgname, ud.version)
ud.registry = "http://%s" % (ud.url.replace('npm://', '', 1).split(';'))[0]
prefixdir = "npm/%s" % ud.pkgname
ud.pkgdatadir = d.expand("${DL_DIR}/%s" % prefixdir)
if not os.path.exists(ud.pkgdatadir):
bb.utils.mkdirhier(ud.pkgdatadir)
ud.localpath = d.expand("${DL_DIR}/npm/%s" % ud.bbnpmmanifest)
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
self.basecmd += " --directory-prefix=%s " % prefixdir
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0")
ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)
def need_update(self, ud, d):
if os.path.exists(ud.localpath):
return False
return True
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command)
runfetchcmd(command, d, quiet)
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
logger.debug(2, "file to extract is %s" % file)
if file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
cmd = 'tar xz --strip 1 --no-same-owner --warning=no-unknown-keyword -f %s/%s' % (dldir, file)
else:
bb.fatal("NPM package %s downloaded not a tarball!" % file)
# Change to subdir before executing command
save_cwd = os.getcwd()
if not os.path.exists(destdir):
os.makedirs(destdir)
os.chdir(destdir)
path = d.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
if 'deps' not in data[pkg]:
return
for dep in data[pkg]['deps']:
self._unpackdep(ud, dep, data[pkg]['deps'], "%s/node_modules/%s" % (destdir, dep), dldir, d)
def unpack(self, ud, destdir, d):
dldir = d.getVar("DL_DIR", True)
depdumpfile = "%s-%s.deps.json" % (ud.pkgname, ud.version)
with open("%s/npm/%s" % (dldir, depdumpfile)) as datafile:
workobj = json.load(datafile)
dldir = "%s/%s" % (os.path.dirname(ud.localpath), ud.pkgname)
self._unpackdep(ud, ud.pkgname, workobj, "%s/npmpkg" % destdir, dldir, d)
def _parse_view(self, output):
'''
Parse the output of npm view --json; the last JSON result
is assumed to be the one that we're interested in.
'''
pdata = None
outdeps = {}
datalines = []
bracelevel = 0
for line in output.splitlines():
if bracelevel:
datalines.append(line)
elif '{' in line:
datalines = []
datalines.append(line)
bracelevel = bracelevel + line.count('{') - line.count('}')
if datalines:
pdata = json.loads('\n'.join(datalines))
return pdata
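
_parse_view above keeps only the last JSON object in the "npm view --json" output by tracking brace depth; a self-contained sketch of the same approach, run against a fabricated two-document output, looks like:

import json

def parse_last_json(output):
    # Keep the lines belonging to the last top-level JSON object seen,
    # tracking '{'/'}' nesting, and decode it at the end.
    pdata = None
    datalines = []
    bracelevel = 0
    for line in output.splitlines():
        if bracelevel:
            datalines.append(line)
        elif '{' in line:
            datalines = [line]
        bracelevel = bracelevel + line.count('{') - line.count('}')
    if datalines:
        pdata = json.loads('\n'.join(datalines))
    return pdata

sample = '{"name": "old"}\n{\n  "name": "new",\n  "dist": {"tarball": "new-1.0.tgz"}\n}'
print(parse_last_json(sample))  # the second ("new") object wins
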
def _getdependencies(self, pkg, data, version, d, ud, optional=False):
pkgfullname = pkg
if version != '*' and not '/' in version:
pkgfullname += "@'%s'" % version
logger.debug(2, "Calling getdeps on %s" % pkg)
fetchcmd = "npm view %s --json --registry %s" % (pkgfullname, ud.registry)
output = runfetchcmd(fetchcmd, d, True)
pdata = self._parse_view(output)
if not pdata:
raise FetchError("The command '%s' returned no output" % fetchcmd)
if optional:
pkg_os = pdata.get('os', None)
if pkg_os:
if not isinstance(pkg_os, list):
pkg_os = [pkg_os]
if 'linux' not in pkg_os or '!linux' in pkg_os:
logger.debug(2, "Skipping %s since it's incompatible with Linux" % pkg)
return
#logger.debug(2, "Output URL is %s - %s - %s" % (ud.basepath, ud.basename, ud.localfile))
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
depsfound = {}
optdepsfound = {}
data[pkg]['deps'] = {}
for dep in dependencies:
if dep in optionalDependencies:
optdepsfound[dep] = dependencies[dep]
else:
depsfound[dep] = dependencies[dep]
for dep, version in optdepsfound.iteritems():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud, optional=True)
for dep, version in depsfound.iteritems():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud)
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest):
logger.debug(2, "NPM shrinkwrap file is %s" % data)
outputurl = "invalid"
if ('resolved' not in data) or (not data['resolved'].startswith('http')):
# will be the case for ${PN}
fetchcmd = "npm view %s@%s dist.tarball --registry %s" % (pkg, version, ud.registry)
logger.debug(2, "Found this matching URL: %s" % str(fetchcmd))
outputurl = runfetchcmd(fetchcmd, d, True)
else:
outputurl = data['resolved']
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
manifest[pkg] = {}
manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
manifest[pkg]['deps'] = {}
if pkg in lockdown:
sha1_expected = lockdown[pkg][version]
sha1_data = bb.utils.sha1_file("npm/%s/%s" % (ud.pkgname, manifest[pkg]['tgz']))
if sha1_expected != sha1_data:
msg = "\nFile: '%s' has %s checksum %s when %s was expected" % (manifest[pkg]['tgz'], 'sha1', sha1_data, sha1_expected)
raise ChecksumError('Checksum mismatch!%s' % msg)
else:
logger.debug(2, "No lockdown data for %s@%s" % (pkg, version))
if 'dependencies' in data:
for obj in data['dependencies']:
logger.debug(2, "Found dep is %s" % str(obj))
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'])
def download(self, ud, d):
"""Fetch url"""
jsondepobj = {}
shrinkobj = {}
lockdown = {}
if not os.listdir(ud.pkgdatadir) and os.path.exists(ud.fullmirror):
dest = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(dest)
save_cwd = os.getcwd()
os.chdir(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d)
os.chdir(save_cwd)
return
shwrf = d.getVar('NPM_SHRINKWRAP', True)
logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
try:
with open(shwrf) as datafile:
shrinkobj = json.load(datafile)
except:
logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
lckdf = d.getVar('NPM_LOCKDOWN', True)
logger.debug(2, "NPM lockdown file is %s" % lckdf)
try:
with open(lckdf) as datafile:
lockdown = json.load(datafile)
except:
logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
if ('name' not in shrinkobj):
self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
else:
self._getshrinkeddependencies(ud.pkgname, shrinkobj, ud.version, d, ud, lockdown, jsondepobj)
with open(ud.localpath, 'w') as outfile:
json.dump(jsondepobj, outfile)
def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to a read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
save_cwd = os.getcwd()
os.chdir(d.getVar("DL_DIR", True))
logger.info("Creating tarball of npm data")
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
os.chdir(save_cwd)


@@ -34,13 +34,13 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(d.getVar('OSCDIR', True), ud.host)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
pv = data.getVar("PV", d, 0)
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
@@ -84,7 +84,7 @@ class Osc(FetchMethod):
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(d.getVar('OSCDIR', True), ud.path, ud.module), os.R_OK):
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
@@ -114,7 +114,7 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(d.getVar('OSCDIR', True), "oscrc")
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
@@ -123,8 +123,8 @@ class Osc(FetchMethod):
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR', True))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST", True))
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)


@@ -37,9 +37,7 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
from bb.utils import export_proxies
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
@@ -63,8 +61,6 @@ class Wget(FetchMethod):
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
if not ud.localfile:
ud.localfile = data.expand(urllib.unquote(ud.host + ud.path).replace("/", "."), d)
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
@@ -222,6 +218,22 @@ class Wget(FetchMethod):
return resp
def export_proxies(d):
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY']
exported = False
for v in variables:
if v in os.environ.keys():
exported = True
else:
v_proxy = d.getVar(v, True)
if v_proxy is not None:
os.environ[v] = v_proxy
exported = True
return exported
class HTTPMethodFallback(urllib2.BaseHandler):
"""
Fallback to GET if HEAD is not allowed (405 HTTP error)
@@ -381,7 +393,7 @@ class Wget(FetchMethod):
version = ['', '', '']
bb.debug(3, "VersionURL: %s" % (url))
soup = BeautifulSoup(self._fetch_index(url, ud, d), "html.parser", parse_only=SoupStrainer("a"))
soup = BeautifulSoup(self._fetch_index(url, ud, d))
if not soup:
bb.debug(3, "*** %s NO SOUP" % (url))
return ""
@@ -420,10 +432,10 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']
dirver_regex = re.compile("(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
dirver_regex = re.compile("(\D*)((\d+[\.\-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group('ver')
version_dir[1] = s.group(2)
else:
version_dir[1] = dirver
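
The dirver handling above moves to a named-group regex so the directory prefix and the version can be separated (the pfx = '/dir1/dir2/v', version '2.5' example further down relies on this); on a couple of invented directory names the named-group variant splits as follows:

import re

dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
for dirver in ("v2.5", "release-1.4.0"):
    s = dirver_regex.search(dirver)
    # Show the prefix and version portions the fetcher would work with.
    print("%s -> pfx=%r ver=%r" % (dirver, s.group('pfx'), s.group('ver')))
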
@@ -431,26 +443,16 @@ class Wget(FetchMethod):
ud.path.split(dirver)[0], ud.user, ud.pswd, {}])
bb.debug(3, "DirURL: %s, %s" % (dirs_uri, package))
soup = BeautifulSoup(self._fetch_index(dirs_uri, ud, d), "html.parser", parse_only=SoupStrainer("a"))
soup = BeautifulSoup(self._fetch_index(dirs_uri, ud, d))
if not soup:
return version[1]
for line in soup.find_all('a', href=True):
s = dirver_regex.search(line['href'].strip("/"))
if s:
sver = s.group('ver')
# When the prefix is part of the version directory we need to
# ensure that only the version directory is used, so remove any
# previous directories if they exist.
#
# Example: for pfx = '/dir1/dir2/v' and version = '2.5' the expected
# result is v2.5.
spfx = s.group('pfx').split('/')[-1]
version_dir_new = ['', sver, '']
version_dir_new = ['', s.group(2), '']
if self._vercmp(version_dir, version_dir_new) <= 0:
dirver_new = spfx + sver
dirver_new = s.group(1) + s.group(2)
path = ud.path.replace(dirver, dirver_new, True) \
.split(package)[0]
uri = bb.fetch.encodeurl([ud.type, ud.host, path,
@@ -504,7 +506,7 @@ class Wget(FetchMethod):
self.suffix_regex_comp = re.compile(psuffix_regex)
# compile the regex; it can be package-specific or a generic regex
pn_regex = d.getVar('UPSTREAM_CHECK_REGEX', True)
pn_regex = d.getVar('REGEX', True)
if pn_regex:
package_custom_regex_comp = re.compile(pn_regex)
else:
@@ -540,7 +542,7 @@ class Wget(FetchMethod):
bb.debug(3, "latest_versionstring, regex: %s" % (package_regex.pattern))
uri = ""
regex_uri = d.getVar("UPSTREAM_CHECK_URI", True)
regex_uri = d.getVar("REGEX_URI", True)
if not regex_uri:
path = ud.path.split(package)[0]


@@ -100,12 +100,11 @@ def import_extension_module(pkg, modulename, checkattr):
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__(pkg.__name__, fromlist=[modulename])
module = __import__(pkg.__name__, fromlist = [modulename])
return getattr(module, modulename)
except AttributeError:
modules = present_options(list_extension_modules(pkg, checkattr))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. '
'Valid extension modules: %s' % (modulename, pkg.__name__, modules))
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. Valid extension modules: %s' % (modulename, pkg.__name__, present_options(list_extension_modules(pkg, checkattr))))
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
warnlog = logging.getLogger("BitBake.Warnings")
@@ -116,7 +115,7 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warning(s)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
@@ -130,166 +129,128 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter=BitbakeHelpFormatter(),
version="BitBake Build Tool Core version %s" % bb.__version__,
usage="""%prog [options] [recipename/target recipe:do_task ...]
formatter = BitbakeHelpFormatter(),
version = "BitBake Build Tool Core version %s" % bb.__version__,
usage = """%prog [options] [recipename/target recipe:do_task ...]
Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
will provide the layer, BBFILES and other configuration information.""")
parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
parser.add_option("-b", "--buildfile", help = "Execute tasks from a specific .bb recipe directly. WARNING: Does not handle any dependencies from other recipes.",
action = "store", dest = "buildfile", default = None)
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
parser.add_option("-k", "--continue", help = "Continue as much as possible after an error. While the target that failed and anything depending on it cannot be built, as much as possible will be built before stopping.",
action = "store_false", dest = "abort", default = True)
parser.add_option("-a", "--tryaltconfigs", action="store_true",
dest="tryaltconfigs", default=False,
help="Continue with builds by trying to use alternative providers "
"where possible.")
parser.add_option("-a", "--tryaltconfigs", help = "Continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
parser.add_option("-f", "--force", help = "Force the specified targets/task to run (invalidating any existing stamp file).",
action = "store_true", dest = "force", default = False)
parser.add_option("-c", "--cmd", action="store", dest="cmd",
help="Specify the task to execute. The exact options available "
"depend on the metadata. Some examples might be 'compile'"
" or 'populate_sysroot' or 'listtasks' may give a list of "
"the tasks available.")
parser.add_option("-c", "--cmd", help = "Specify the task to execute. The exact options available depend on the metadata. Some examples might be 'compile' or 'populate_sysroot' or 'listtasks' may give a list of the tasks available.",
action = "store", dest = "cmd")
parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
help="Invalidate the stamp for the specified task such as 'compile' "
"and then run the default task for the specified target(s).")
parser.add_option("-C", "--clear-stamp", help = "Invalidate the stamp for the specified task such as 'compile' and then run the default task for the specified target(s).",
action = "store", dest = "invalidate_stamp")
parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
help="Read the specified file before bitbake.conf.")
parser.add_option("-r", "--read", help = "Read the specified file before bitbake.conf.",
action = "append", dest = "prefile", default = [])
parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
help="Read the specified file after bitbake.conf.")
parser.add_option("-R", "--postread", help = "Read the specified file after bitbake.conf.",
action = "append", dest = "postfile", default = [])
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="Output more log message data to the terminal.")
parser.add_option("-v", "--verbose", help = "Output more log message data to the terminal.",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
help="Increase the debug level. You can specify this more than once.")
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
help="Don't execute, just go through the motions.")
parser.add_option("-n", "--dry-run", help = "Don't execute, just go through the motions.",
action = "store_true", dest = "dry_run", default = False)
parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
default=[], metavar="SIGNATURE_HANDLER",
help="Dump out the signature construction information, with no task "
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means compare the dumped signature with the cached one.")
parser.add_option("-S", "--dump-signatures", help = "Dump out the signature construction information, with no task execution. The SIGNATURE_HANDLER parameter is passed to the handler. Two common values are none and printdiff but the handler may define more/less. none means only dump the signature, printdiff means compare the dumped signature with the cached one.",
action = "append", dest = "dump_signatures", default = [], metavar="SIGNATURE_HANDLER")
parser.add_option("-p", "--parse-only", action="store_true",
dest="parse_only", default=False,
help="Quit after parsing the BB recipes.")
parser.add_option("-p", "--parse-only", help = "Quit after parsing the BB recipes.",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-s", "--show-versions", action="store_true",
dest="show_versions", default=False,
help="Show current and preferred versions of all recipes.")
parser.add_option("-s", "--show-versions", help = "Show current and preferred versions of all recipes.",
action = "store_true", dest = "show_versions", default = False)
parser.add_option("-e", "--environment", action="store_true",
dest="show_environment", default=False,
help="Show the global or per-recipe environment complete with information"
" about where variables were set/changed.")
parser.add_option("-e", "--environment", help = "Show the global or per-recipe environment complete with information about where variables were set/changed.",
action = "store_true", dest = "show_environment", default = False)
parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
help="Save dependency tree information for the specified "
"targets in the dot syntax.")
parser.add_option("-g", "--graphviz", help = "Save dependency tree information for the specified targets in the dot syntax.",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option("-I", "--ignore-deps", action="append",
dest="extra_assume_provided", default=[],
help="Assume these dependencies don't exist and are already provided "
"(equivalent to ASSUME_PROVIDED). Useful to make dependency "
"graphs more appealing")
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
help="Show debug logging for the specified logging domains")
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
help="Profile the command and save reports.")
parser.add_option("-P", "--profile", help = "Profile the command and save reports.",
action = "store_true", dest = "profile", default = False)
env_ui = os.environ.get('BITBAKE_UI', None)
default_ui = env_ui or 'knotty'
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", help = "The user interface to use (@CHOICES@ - default %default).",
action="store", dest="ui", default=default_ui)
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", action="store", dest="ui",
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
parser.add_option("-t", "--servertype", help = "Choose which server type to use (@CHOICES@ - default %default).",
action = "store", dest = "servertype", default = "process")
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-t", "--servertype", action="store", dest="servertype",
default=["process", "xmlrpc"]["BBSERVER" in os.environ],
help="Choose which server type to use (@CHOICES@ - default %default).")
parser.add_option("", "--token", help = "Specify the connection token to be used when connecting to a remote server.",
action = "store", dest = "xmlrpctoken")
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
"to a remote server.")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not.",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--revisions-changed", action="store_true",
dest="revisions_changed", default=False,
help="Set the exit code depending on whether upstream floating "
"revisions have changed or not.")
parser.add_option("", "--server-only", help = "Run bitbake without a UI, only starting a server (cooker) process.",
action = "store_true", dest = "server_only", default = False)
parser.add_option("", "--server-only", action="store_true",
dest="server_only", default=False,
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to.",
action = "store", dest = "bind", default = False)
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
help="The name/address for the bitbake server to bind to.")
parser.add_option("", "--no-setscene", help = "Do not run any setscene tasks. sstate will be ignored and everything needed, built.",
action = "store_true", dest = "nosetscene", default = False)
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
help="Do not run any setscene tasks. sstate will be ignored and "
"everything needed, built.")
parser.add_option("", "--remote-server", help = "Connect to the specified server.",
action = "store", dest = "remote_server", default = False)
parser.add_option("", "--setscene-only", action="store_true",
dest="setsceneonly", default=False,
help="Only run setscene tasks, don't run any real tasks.")
parser.add_option("-m", "--kill-server", help = "Terminate the remote server.",
action = "store_true", dest = "kill_server", default = False)
parser.add_option("", "--remote-server", action="store", dest="remote_server",
default=os.environ.get("BBSERVER"),
help="Connect to the specified server.")
parser.add_option("", "--observe-only", help = "Connect to a server as an observing-only client.",
action = "store_true", dest = "observe_only", default = False)
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
help="Terminate the remote server.")
parser.add_option("", "--status-only", help = "Check the status of the remote bitbake server.",
action = "store_true", dest = "status_only", default = False)
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
help="Connect to a server as an observing-only client.")
parser.add_option("", "--status-only", action="store_true",
dest="status_only", default=False,
help="Check the status of the remote bitbake server.")
parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
default=os.environ.get("BBEVENTLOG"),
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
parser.add_option("-w", "--write-log", help = "Writes the event log of the build to a bitbake event json file. Use '' (empty string) to assign the name automatically.",
action = "store", dest = "writeeventlog")
options, targets = parser.parse_args(argv)
# use configuration files from environment variables
if "BBPRECONF" in os.environ:
option.prefile.append(os.environ["BBPRECONF"])
# some environmental variables set also configuration options
if "BBSERVER" in os.environ:
options.servertype = "xmlrpc"
options.remote_server = os.environ["BBSERVER"]
if "BBPOSTCONF" in os.environ:
option.postfile.append(os.environ["BBPOSTCONF"])
if "BBTOKEN" in os.environ:
options.xmlrpctoken = os.environ["BBTOKEN"]
if "BBEVENTLOG" is os.environ:
options.writeeventlog = os.environ["BBEVENTLOG"]
# fill in proper log name if not supplied
if options.writeeventlog is not None and len(options.writeeventlog) == 0:
from datetime import datetime
eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
options.writeeventlog = eventlog
import datetime
options.writeeventlog = "bitbake_eventlog_%s.json" % datetime.datetime.now().strftime("%Y%m%d%H%M%S")
# if BBSERVER says to autodetect, let's do that
if options.remote_server:
@@ -318,13 +279,12 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
def start_server(servermodule, configParams, configuration, features):
server = servermodule.BitBakeServer()
single_use = not configParams.server_only
if configParams.bind:
(host, port) = configParams.bind.split(':')
server.initServer((host, int(port)), single_use)
configuration.interface = [server.serverImpl.host, server.serverImpl.port]
server.initServer((host, int(port)))
configuration.interface = [ server.serverImpl.host, server.serverImpl.port ]
else:
server.initServer(single_use=single_use)
server.initServer()
configuration.interface = []
try:
@@ -335,6 +295,7 @@ def start_server(servermodule, configParams, configuration, features):
server.addcooker(cooker)
server.saveConnectionDetails()
except Exception as e:
exc_info = sys.exc_info()
while hasattr(server, "event_queue"):
try:
import queue
@@ -346,7 +307,7 @@ def start_server(servermodule, configParams, configuration, features):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
raise
raise exc_info[1], None, exc_info[2]
server.detach()
cooker.lock.close()
return server
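
The start_server() hunk above captures sys.exc_info() before draining the server's event queue and re-raises afterwards; "raise exc_info[1], None, exc_info[2]" is the Python 2 spelling of that re-raise. A small sketch of the same idea in Python 3 syntax, with a stand-in cleanup step rather than BitBake's real queue handling:

import sys

def run_with_cleanup(fn, cleanup):
    # Run fn(); if it raises, perform the cleanup first and then re-raise
    # with the original traceback (Python 2 wrote this as "raise value, None, tb").
    try:
        fn()
    except Exception:
        exc_type, exc_value, tb = sys.exc_info()
        cleanup()
        raise exc_value.with_traceback(tb)

# e.g. run_with_cleanup(lambda: 1 / 0, lambda: None) raises ZeroDivisionError after cleanup
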
@@ -383,7 +344,7 @@ def bitbake_main(configParams, configuration):
if configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
else "the '--remote-server' option"))
else "the '--remote-server' option" ))
if configParams.bind and configParams.servertype != "xmlrpc":
raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
@@ -398,8 +359,7 @@ def bitbake_main(configParams, configuration):
"connecting to a server.\n")
if configParams.kill_server and not configParams.remote_server:
raise BBMainException("FATAL: '--kill-server' can only be used to "
"terminate a remote server")
raise BBMainException("FATAL: '--kill-server' can only be used to terminate a remote server")
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
@@ -407,7 +367,7 @@ def bitbake_main(configParams, configuration):
configuration.debug = level
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
@@ -436,8 +396,7 @@ def bitbake_main(configParams, configuration):
bb.event.ui_queue = []
else:
# we start a stub server that is actually a XMLRPClient that connects to a real server
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only,
configParams.xmlrpctoken)
server = servermodule.BitBakeXMLRPCClient(configParams.observe_only, configParams.xmlrpctoken)
server.saveConnectionDetails(configParams.remote_server)
@@ -447,13 +406,6 @@ def bitbake_main(configParams, configuration):
except Exception as e:
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
return 0
server_connection.setupEventQueue()
# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]
@@ -465,15 +417,18 @@ def bitbake_main(configParams, configuration):
server_connection.terminate()
return 0
if configParams.kill_server:
server_connection.connection.terminateServer()
bb.event.ui_queue = []
return 0
try:
return ui_module.main(server_connection.connection, server_connection.events,
configParams)
return ui_module.main(server_connection.connection, server_connection.events, configParams)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host,
server.serverImpl.port))
print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host, server.serverImpl.port))
return 0
return 1


@@ -19,22 +19,11 @@
from bb.utils import better_compile, better_exec
def insert_method(modulename, code, fn, lineno):
def insert_method(modulename, code, fn):
"""
Add code of a module should be added. The methods
will be simply added, no checking will be done
"""
comp = better_compile(code, modulename, fn, lineno=lineno)
comp = better_compile(code, modulename, fn )
better_exec(comp, None, code, fn)
compilecache = {}
def compile_cache(code):
h = hash(code)
if h in compilecache:
return compilecache[h]
return None
def compile_cache_add(code, compileobj):
h = hash(code)
compilecache[h] = compileobj
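
The methodpool hunk above keeps a cache of compiled code objects keyed by hash(code) so that repeated snippets are only compiled once, while insert_method() also tracks the defining file (and, in the newer revision, the line number) for error reporting. A toy version of the compile-and-cache pattern using only builtins:

_compile_cache = {}

def run_snippet(code_text, name="<snippet>"):
    # Compile each distinct piece of text once, then exec the cached code object.
    key = hash(code_text)
    if key not in _compile_cache:
        _compile_cache[key] = compile(code_text, name, "exec")
    namespace = {}
    exec(_compile_cache[key], namespace)
    return namespace

ns = run_snippet("def greet():\n    return 'hello'\n")
print(ns["greet"]())   # prints hello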


@@ -220,7 +220,7 @@ class diskMonitor:
if minSpace and freeSpace < minSpace:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
logger.warning("The free space of %s (%s) is running low (%.3fGB left)" % \
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[k] = freeSpace
@@ -246,7 +246,7 @@ class diskMonitor:
continue
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
logger.warning("The free inode of %s (%s) is running low (%.3fK left)" % \
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
(path, dev, freeInode / 1024.0))
self.preFreeI[k] = freeInode
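
Several hunks in this comparison, including the diskMonitor one above, differ only in logger.warning() versus logger.warn(). In the standard logging module both emit a WARNING record; warn() is merely an older, deprecated alias for warning(). A minimal example:

import logging

logging.basicConfig(format="%(levelname)s: %(message)s")
log = logging.getLogger("BitBake.Example")

# warning() is the supported spelling; warn() was a deprecated alias for it,
# which is why these hunks change nothing but the method name.
log.warning("The free space of %s is running low (%.3fGB left)", "/tmp", 1.234)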


@@ -90,9 +90,8 @@ class BBLogFormatter(logging.Formatter):
if self.color_enabled:
record = self.colorize(record)
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)


@@ -161,8 +161,8 @@ def vars_from_file(mypkg, d):
def get_file_depends(d):
'''Return the dependent files'''
dep_files = []
depends = d.getVar('__base_depends', False) or []
depends = depends + (d.getVar('__depends', False) or [])
depends = d.getVar('__base_depends', True) or []
depends = depends + (d.getVar('__depends', True) or [])
for (fn, _) in depends:
dep_files.append(os.path.abspath(fn))
return " ".join(dep_files)


@@ -83,7 +83,7 @@ class DataNode(AstNode):
def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], expand=False, noweakdefault=True)
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
else:
return data.getVar(key, False, noweakdefault=True, parsing=True)
@@ -141,37 +141,24 @@ class DataNode(AstNode):
class MethodNode(AstNode):
tr_tbl = string.maketrans('/.+-@%&', '_______')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
def __init__(self, filename, lineno, func_name, body):
AstNode.__init__(self, filename, lineno)
self.func_name = func_name
self.body = body
self.python = python
self.fakeroot = fakeroot
def eval(self, data):
text = '\n'.join(self.body)
funcname = self.func_name
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(MethodNode.tr_tbl)))
self.python = True
text = "def %s(d):\n" % (funcname) + text
bb.methodpool.insert_method(funcname, text, self.filename, self.lineno - len(self.body))
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS', False) or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
if data.getVar(funcname, False):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.delVarFlag(funcname, 'python')
data.delVarFlag(funcname, 'fakeroot')
if self.python:
data.setVarFlag(funcname, "python", "1")
if self.fakeroot:
data.setVarFlag(funcname, "fakeroot", "1")
data.setVarFlag(funcname, "func", 1)
data.setVar(funcname, text, parsing=True)
data.setVarFlag(funcname, 'filename', self.filename)
data.setVarFlag(funcname, 'lineno', str(self.lineno - len(self.body)))
data.setVar(funcname, text, parsing=True)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text, parsing=True)
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
@@ -185,12 +172,31 @@ class PythonMethodNode(AstNode):
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
bb.methodpool.insert_method(self.modulename, text, self.filename, self.lineno - len(self.body) - 1)
bb.methodpool.insert_method(self.modulename, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text, parsing=True)
data.setVarFlag(self.function, 'filename', self.filename)
data.setVarFlag(self.function, 'lineno', str(self.lineno - len(self.body) - 1))
class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
AstNode.__init__(self, filename, lineno)
self.key = key
self.m = m
def eval(self, data):
if data.getVar(self.key, False):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(self.key, 'python', None)
data.setVarFlag(self.key, 'fakeroot', None)
if self.m.group("py") is not None:
data.setVarFlag(self.key, "python", "1")
else:
data.delVarFlag(self.key, "python")
if self.m.group("fr") is not None:
data.setVarFlag(self.key, "fakeroot", "1")
else:
data.delVarFlag(self.key, "fakeroot")
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classname):
@@ -203,7 +209,7 @@ class ExportFuncsNode(AstNode):
for func in self.n:
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func'):
continue
if data.getVar(func, False):
@@ -211,15 +217,13 @@ class ExportFuncsNode(AstNode):
data.setVarFlag(func, 'func', None)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
if data.getVarFlag(calledfunc, flag):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(func, flag):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag))
if data.getVarFlag(calledfunc, "python", False):
if data.getVarFlag(calledfunc, "python"):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
else:
if "-" in self.classname:
@@ -274,12 +278,15 @@ def handleExport(statements, filename, lineno, m):
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
def handleMethod(statements, filename, lineno, func_name, body, python, fakeroot):
statements.append(MethodNode(filename, lineno, func_name, body, python, fakeroot))
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, modulename, body):
statements.append(PythonMethodNode(filename, lineno, funcname, modulename, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, filename, lineno, m, classname):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classname))
@@ -310,9 +317,7 @@ def finalize(fn, d, variant = None):
all_handlers = {}
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask", True) or "").split())
bb.event.fire(bb.event.RecipePreFinalise(fn), d)


@@ -47,6 +47,7 @@ __addhandler_regexp__ = re.compile( r"addhandler\s+(.+)" )
__def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )
__infunc__ = []
__inpython__ = False
__body__ = []
@@ -54,6 +55,15 @@ __classname__ = ""
cached_statements = {}
# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of question.
# We will use the IN_PYTHON_EOF as an indicator to just close the method
#
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999
def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
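
The comment in the hunk above explains that end-of-input is signalled to feeder() by passing the IN_PYTHON_EOF sentinel instead of a real line number. A toy feeder showing the same sentinel technique, with the actual parsing trivialised:

IN_PYTHON_EOF = -9999999999999   # sentinel "line number" meaning the input is exhausted

def feeder(lineno, line, body):
    # Accumulate lines until the sentinel arrives, then hand the body back.
    if lineno == IN_PYTHON_EOF:
        return "\n".join(body)
    body.append(line)
    return None

body = []
feeder(1, "def f():", body)
feeder(2, "    return 1", body)
print(feeder(IN_PYTHON_EOF, "", body))
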
@@ -100,7 +110,7 @@ def get_statements(filename, absolute_filename, base_name):
file.close()
if __inpython__:
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
feeder(IN_PYTHON_EOF, "", filename, base_name, statements)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
@@ -161,12 +171,12 @@ def handle(fn, d, include):
return d
def feeder(lineno, s, fn, root, statements, eof=False):
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, bb, __residue__, __classname__
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__[0], __body__, __infunc__[3], __infunc__[4])
ast.handleMethod(statements, fn, lineno, __infunc__[0], __body__)
__infunc__ = []
__body__ = []
else:
@@ -175,7 +185,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if __inpython__:
m = __python_func_regexp__.match(s)
if m and not eof:
if m and lineno != IN_PYTHON_EOF:
__body__.append(s)
return
else:
@@ -184,7 +194,7 @@ def feeder(lineno, s, fn, root, statements, eof=False):
__body__ = []
__inpython__ = False
if eof:
if lineno == IN_PYTHON_EOF:
return
if s and s[0] == '#':
@@ -211,7 +221,8 @@ def feeder(lineno, s, fn, root, statements, eof=False):
m = __func_start_regexp__.match(s)
if m:
__infunc__ = [m.group("func") or "__anonymous", fn, lineno, m.group("py") is not None, m.group("fr") is not None]
__infunc__ = [m.group("func") or "__anonymous", fn, lineno]
ast.handleMethodFlags(statements, fn, lineno, __infunc__[0], m)
return
m = __def_regexp__.match(s)


@@ -84,13 +84,13 @@ def include(parentfn, fn, lineno, data, error_out):
bbpath = "%s:%s" % (dname, data.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, fn, history=True)
if abs_fn and bb.parse.check_dependency(data, abs_fn):
logger.warning("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE', True)))
logger.warn("Duplicate inclusion for %s in %s" % (abs_fn, data.getVar('FILE', True)))
for af in attempts:
bb.parse.mark_dependency(data, af)
if abs_fn:
fn = abs_fn
elif bb.parse.check_dependency(data, fn):
logger.warning("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))
logger.warn("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))
try:
bb.parse.handle(fn, data, True)


@@ -201,7 +201,6 @@ class PersistData(object):
def connect(database):
connection = sqlite3.connect(database, timeout=5, isolation_level=None)
connection.execute("pragma synchronous = off;")
connection.text_factory = str
return connection
def persist(domain, d):


@@ -121,14 +121,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_file = None
preferred_ver = None
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn, True)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn, True)
if not preferred_v:
preferred_v = cfgData.getVar("PREFERRED_VERSION", True)
localdata = data.createCopy(cfgData)
localdata.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn))
bb.data.update_data(localdata)
preferred_v = localdata.getVar('PREFERRED_VERSION', True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
@@ -226,7 +223,7 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
def _filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
@@ -283,7 +280,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "normal" target item
"""
@@ -311,56 +308,38 @@ def filterProviders(providers, item, cfgData, dataCache):
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables
environment variables and previous build results
Takes a "runtime" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
# First try and match any PREFERRED_RPROVIDER entry
prefervar = cfgData.getVar('PREFERRED_RPROVIDER_%s' % item, True)
foundUnique = False
if prefervar:
for p in eligible:
pn = dataCache.pkg_fn[p]
if prefervar == pn:
logger.verbose("selecting %s to satisfy %s due to PREFERRED_RPROVIDER", pn, item)
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
numberPreferred = 1
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
# If we didn't find an RPROVIDER entry, try and infer the provider from PREFERRED_PROVIDER entries
# by looking through the provides of each eligible recipe and seeing if a PREFERRED_PROVIDER was set.
# This is most useful for virtual/ entries rather than having a RPROVIDER per entry.
if not foundUnique:
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
#logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
break
numberPreferred = len(preferred)
numberPreferred = len(preferred)
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)
@@ -400,29 +379,3 @@ def getRuntimeProviders(dataCache, rdepend):
logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
def buildWorldTargetList(dataCache):
"""
Build package list for "bitbake world"
"""
if dataCache.world_target:
return
logger.debug(1, "collating packages for \"world\"")
for f in dataCache.possible_world:
terminal = True
pn = dataCache.pkg_fn[f]
for p in dataCache.pn_provides[pn]:
if p.startswith('virtual/'):
logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
terminal = False
break
for pf in dataCache.providers[p]:
if dataCache.pkg_fn[pf] != pn:
logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
terminal = False
break
if terminal:
dataCache.world_target.add(pn)


@@ -261,13 +261,6 @@ class RunQueueData:
taskname = self.runq_task[task] + task_name_suffix
return "%s, %s" % (fn, taskname)
def get_short_user_idstring(self, task, task_name_suffix = ""):
fn = self.taskData.fn_index[self.runq_fnid[task]]
pn = self.dataCache.pkg_fn[fn]
taskname = self.runq_task[task] + task_name_suffix
return "%s:%s" % (pn, taskname)
def get_task_id(self, fnid, taskname):
for listid in xrange(len(self.runq_fnid)):
if self.runq_fnid[listid] == fnid and self.runq_task[listid] == taskname:
@@ -641,33 +634,23 @@ class RunQueueData:
fnid = taskData.build_targets[targetid][0]
fn = taskData.fn_index[fnid]
task = target[1]
parents = False
if task.endswith('-'):
parents = True
task = task[:-1]
self.target_pairs.append((fn, task))
self.target_pairs.append((fn, target[1]))
if fnid in taskData.failed_fnids:
continue
if task not in taskData.tasks_lookup[fnid]:
if target[1] not in taskData.tasks_lookup[fnid]:
import difflib
close_matches = difflib.get_close_matches(task, taskData.tasks_lookup[fnid], cutoff=0.7)
close_matches = difflib.get_close_matches(target[1], taskData.tasks_lookup[fnid], cutoff=0.7)
if close_matches:
extra = ". Close matches:\n %s" % "\n ".join(close_matches)
else:
extra = ""
bb.msg.fatal("RunQueue", "Task %s does not exist for target %s%s" % (task, target[0], extra))
# For tasks called "XXXX-", ony run their dependencies
listid = taskData.tasks_lookup[fnid][task]
if parents:
for i in self.runq_depends[listid]:
mark_active(i, 1)
else:
mark_active(listid, 1)
bb.msg.fatal("RunQueue", "Task %s does not exist for target %s%s" % (target[1], target[0], extra))
listid = taskData.tasks_lookup[fnid][target[1]]
mark_active(listid, 1)
# Step C - Prune all inactive tasks
#
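
When a requested task does not exist, the hunk above suggests near-misses with difflib.get_close_matches() at a 0.7 cutoff. A self-contained example of that call, with an invented task list:

import difflib

tasks = ["do_fetch", "do_unpack", "do_patch", "do_configure", "do_compile"]
print(difflib.get_close_matches("do_complie", tasks, cutoff=0.7))   # e.g. ['do_compile']
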
@@ -760,72 +743,11 @@ class RunQueueData:
seen_pn.append(pn)
else:
bb.fatal("Multiple versions of %s are due to be built (%s). Only one version of a given PN should be built in any given build. You likely need to set PREFERRED_VERSION_%s to select the correct version or don't depend on multiple versions." % (pn, " ".join(prov_list[prov]), pn))
msg = "Multiple .bb files are due to be built which each provide %s:\n %s" % (prov, "\n ".join(prov_list[prov]))
#
# Construct a list of things which uniquely depend on each provider
# since this may help the user figure out which dependency is triggering this warning
#
msg += "\nA list of tasks depending on these providers is shown and may help explain where the dependency comes from."
deplist = {}
commondeps = None
for provfn in prov_list[prov]:
deps = set()
for task, fnid in enumerate(self.runq_fnid):
fn = taskData.fn_index[fnid]
if fn != provfn:
continue
for dep in self.runq_revdeps[task]:
fn = taskData.fn_index[self.runq_fnid[dep]]
if fn == provfn:
continue
deps.add(self.get_short_user_idstring(dep))
if not commondeps:
commondeps = set(deps)
else:
commondeps &= deps
deplist[provfn] = deps
for provfn in deplist:
msg += "\n%s has unique dependees:\n %s" % (provfn, "\n ".join(deplist[provfn] - commondeps))
#
# Construct a list of provides and runtime providers for each recipe
# (rprovides has to cover RPROVIDES, PACKAGES, PACKAGES_DYNAMIC)
#
msg += "\nIt could be that one recipe provides something the other doesn't and should. The following provider and runtime provider differences may be helpful."
provide_results = {}
rprovide_results = {}
commonprovs = None
commonrprovs = None
for provfn in prov_list[prov]:
provides = set(self.dataCache.fn_provides[provfn])
rprovides = set()
for rprovide in self.dataCache.rproviders:
if provfn in self.dataCache.rproviders[rprovide]:
rprovides.add(rprovide)
for package in self.dataCache.packages:
if provfn in self.dataCache.packages[package]:
rprovides.add(package)
for package in self.dataCache.packages_dynamic:
if provfn in self.dataCache.packages_dynamic[package]:
rprovides.add(package)
if not commonprovs:
commonprovs = set(provides)
else:
commonprovs &= provides
provide_results[provfn] = provides
if not commonrprovs:
commonrprovs = set(rprovides)
else:
commonrprovs &= rprovides
rprovide_results[provfn] = rprovides
#msg += "\nCommon provides:\n %s" % ("\n ".join(commonprovs))
#msg += "\nCommon rprovides:\n %s" % ("\n ".join(commonrprovs))
for provfn in prov_list[prov]:
msg += "\n%s has unique provides:\n %s" % (provfn, "\n ".join(provide_results[provfn] - commonprovs))
msg += "\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs))
msg = "Multiple .bb files are due to be built which each provide %s (%s)." % (prov, " ".join(prov_list[prov]))
if self.warn_multi_bb:
logger.warning(msg)
logger.warn(msg)
else:
msg += "\n This usually means one provides something the other doesn't and should."
logger.error(msg)
# Create a whitelist usable by the stamp checks
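
The longer side of the multiple-provider hunk above builds its diagnostic by intersecting each provider's set of dependees to find what they share, then reporting what is unique to each provider. The underlying set arithmetic, with made-up task names purely for illustration:

deps_a = {"image-one:do_rootfs", "busybox:do_build"}
deps_b = {"image-two:do_rootfs", "busybox:do_build"}

common = deps_a & deps_b        # dependees both providers have
unique_a = deps_a - common      # dependees that only pull in the first provider
unique_b = deps_b - common
print(common, unique_a, unique_b)
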
@@ -852,7 +774,7 @@ class RunQueueData:
taskdep = self.dataCache.task_deps[fn]
fnid = self.taskData.getfn_id(fn)
if taskname not in taskData.tasks_lookup[fnid]:
logger.warning("Task %s does not exist, invalidating this task will have no effect" % taskname)
logger.warn("Task %s does not exist, invalidating this task will have no effect" % taskname)
if 'nostamp' in taskdep and taskname in taskdep['nostamp']:
if error_nostamp:
bb.fatal("Task %s is marked nostamp, cannot invalidate this task" % taskname)
@@ -876,7 +798,7 @@ class RunQueueData:
invalidate_task(fn, st, True)
# Create and print to the logs a virtual/xxxx -> PN (fn) table
virtmap = taskData.get_providermap(prefix="virtual/")
virtmap = taskData.get_providermap()
virtpnmap = {}
for v in virtmap:
virtpnmap[v] = self.dataCache.pkg_fn[virtmap[v]]
@@ -897,7 +819,6 @@ class RunQueueData:
procdep.append(self.taskData.fn_index[self.runq_fnid[dep]] + "." + self.runq_task[dep])
self.runq_hash[task] = bb.parse.siggen.get_taskhash(self.taskData.fn_index[self.runq_fnid[task]], self.runq_task[task], procdep, self.dataCache)
bb.parse.siggen.writeout_file_checksum_cache()
return len(self.runq_fnid)
def dump_data(self, taskQueue):
@@ -953,7 +874,6 @@ class RunQueue:
if self.cooker.configuration.profile:
magic = "decafbadbad"
if fakeroot:
magic = magic + "beef"
fakerootcmd = self.cfgData.getVar("FAKEROOTCMD", True)
fakerootenv = (self.cfgData.getVar("FAKEROOTBASEENV", True) or "").split()
env = os.environ.copy()
@@ -1085,19 +1005,15 @@ class RunQueue:
stampfile3 = bb.build.stampfile(taskname2 + "_setscene", self.rqdata.dataCache, fn2)
t2 = get_timestamp(stampfile2)
t3 = get_timestamp(stampfile3)
if t3 and not t2:
continue
if t3 and t3 > t2:
continue
continue
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
if not t2:
logger.debug(2, 'Stampfile %s does not exist', stampfile2)
iscurrent = False
break
if t1 < t2:
logger.debug(2, 'Stampfile %s < %s', stampfile, stampfile2)
iscurrent = False
break
if recurse and iscurrent:
if dep in cache:
iscurrent = cache[dep]
@@ -1126,10 +1042,10 @@ class RunQueue:
else:
self.state = runQueueSceneInit
# we are ready to run, emit dependency info to any UI or class which
# needs it
depgraph = self.cooker.buildDependTree(self, self.rqdata.taskData)
bb.event.fire(bb.event.DepTreeGenerated(depgraph), self.cooker.data)
# we are ready to run, see if any UI client needs the dependency info
if bb.cooker.CookerFeatures.SEND_DEPENDS_TREE in self.cooker.featureset:
depgraph = self.cooker.buildDependTree(self, self.rqdata.taskData)
bb.event.fire(bb.event.DepTreeGenerated(depgraph), self.cooker.data)
if self.state is runQueueSceneInit:
dump = self.cooker.configuration.dump_signatures
@@ -1151,12 +1067,9 @@ class RunQueue:
retval = self.rqexe.execute()
if self.state is runQueueRunInit:
if self.cooker.configuration.setsceneonly:
self.state = runQueueComplete
else:
logger.info("Executing RunQueue Tasks")
self.rqexe = RunQueueExecuteTasks(self)
self.state = runQueueRunning
logger.info("Executing RunQueue Tasks")
self.rqexe = RunQueueExecuteTasks(self)
self.state = runQueueRunning
if self.state is runQueueRunning:
retval = self.rqexe.execute()
@@ -1475,7 +1388,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.runq_buildable.append(1)
else:
self.runq_buildable.append(0)
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
self.rq.scenequeue_covered.add(task)
found = True
@@ -1486,7 +1399,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
continue
logger.debug(1, 'Considering %s (%s): %s' % (task, self.rqdata.get_user_idstring(task), str(self.rqdata.runq_revdeps[task])))
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered) and task not in self.rq.scenequeue_notcovered:
found = True
self.rq.scenequeue_covered.add(task)
@@ -1795,7 +1708,6 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
sq_revdeps_new[point] = set()
if point in self.rqdata.runq_setscene:
sq_revdeps_new[point] = tasks
tasks = set()
for dep in self.rqdata.runq_depends[point]:
if point in sq_revdeps[dep]:
sq_revdeps[dep].remove(point)
@@ -2073,7 +1985,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
bb.event.fire(startevent, self.cfgData)
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot']:
if not self.rq.fakeworker:
self.rq.start_fakeworker(self)
self.rq.fakeworker.stdin.write("<runtask>" + pickle.dumps((fn, realtask, taskname, True, self.cooker.collection.get_file_appends(fn), None)) + "</runtask>")
@@ -2102,6 +2014,9 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.rq.scenequeue_covered = set()
for task in oldcovered:
self.rq.scenequeue_covered.add(self.rqdata.runq_setscene[task])
self.rq.scenequeue_notcovered = set()
for task in self.scenequeue_notcovered:
self.rq.scenequeue_notcovered.add(self.rqdata.runq_setscene[task])
logger.debug(1, 'We can skip tasks %s', sorted(self.rq.scenequeue_covered))


@@ -63,9 +63,6 @@ class BitBakeBaseServerConnection():
def terminate(self):
pass
def setupEventQueue(self):
pass
""" BitBakeBaseServer class is the common ancestor to all Bitbake servers


@@ -53,12 +53,10 @@ class ServerCommunicator():
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
for idx in range(0,4): # 0, 1, 2, 3
if self.connection.poll(5):
return self.connection.recv()
else:
bb.warn("Timeout while attempting to communicate with bitbake server")
bb.fatal("Gave up; Too many tries: timeout while attempting to communicate with bitbake server")
if self.connection.poll(20):
return self.connection.recv()
else:
bb.fatal("Timeout while attempting to communicate with bitbake server")
except KeyboardInterrupt:
pass
@@ -108,7 +106,6 @@ class ProcessServer(Process, BaseImplServer):
# the UI and communicated to us
self.quitin.close()
signal.signal(signal.SIGINT, signal.SIG_IGN)
bb.utils.set_process_name("Cooker")
while not self.quit:
try:
if self.command_channel.poll():
@@ -215,17 +212,17 @@ class ProcessEventQueue(multiprocessing.queues.Queue):
def __init__(self, maxsize):
multiprocessing.queues.Queue.__init__(self, maxsize)
self.exit = False
bb.utils.set_process_name("ProcessEQueue")
def setexit(self):
self.exit = True
def waitEvent(self, timeout):
if self.exit:
return self.getEvent()
sys.exit(1)
try:
if not self.server.is_alive():
return self.getEvent()
self.setexit()
return None
return self.get(True, timeout)
except Empty:
return None
@@ -234,15 +231,14 @@ class ProcessEventQueue(multiprocessing.queues.Queue):
try:
if not self.server.is_alive():
self.setexit()
return None
return self.get(False)
except Empty:
if self.exit:
sys.exit(1)
return None
class BitBakeServer(BitBakeBaseServer):
def initServer(self, single_use=True):
def initServer(self):
# establish communication channels. We use bidirectional pipes for
# ui <--> server command/response pairs
# and a queue for server -> ui event notifications
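
The comment above sums up the process server's IPC layout: a bidirectional pipe for ui <-> server command/response pairs plus a one-way queue for server -> ui event notifications. A minimal standalone sketch of that layout with the standard multiprocessing primitives (the command and event strings are invented placeholders):

import multiprocessing

def server(cmd_conn, events):
    # Answer one command over the duplex pipe, then push an event notification.
    command = cmd_conn.recv()
    cmd_conn.send("handled: %s" % command)
    events.put("server is shutting down")

if __name__ == "__main__":
    ui_end, server_end = multiprocessing.Pipe(duplex=True)   # command/response pairs
    events = multiprocessing.Queue()                         # server -> ui notifications
    proc = multiprocessing.Process(target=server, args=(server_end, events))
    proc.start()
    ui_end.send("getVariable MACHINE")
    print(ui_end.recv())
    print(events.get())
    proc.join()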


@@ -97,10 +97,10 @@ class BitBakeServerCommands():
# we don't allow connections if the cooker is running
if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.cooker.state)
return None
self.event_handle = bb.event.register_UIHhandler(s, True)
return self.event_handle, 'OK'
return self.event_handle
def unregisterEventHandler(self, handlerNum):
"""
@@ -186,12 +186,13 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface, single_use=False):
def __init__(self, interface):
"""
Constructor
"""
BaseImplServer.__init__(self)
self.single_use = single_use
if (interface[1] == 0): # anonymous port, not getting reused
self.single_use = True
# Use auto port configuration
if (interface[1] == -1):
interface = (interface[0], 0)
@@ -204,6 +205,7 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
self.commands = BitBakeServerCommands(self)
self.autoregister_all_functions(self.commands, "")
self.interface = interface
self.single_use = False
def addcooker(self, cooker):
BaseImplServer.addcooker(self, cooker)
@@ -300,9 +302,7 @@ class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
return None
self.transport.set_connection_token(token)
return self
def setupEventQueue(self):
self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
for event in bb.event.ui_queue:
self.events.queue_event(event)
@@ -314,6 +314,8 @@ class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
# no need to log it here, the error shall be sent to the client
raise BaseException(error)
return self
def removeClient(self):
if not self.observer_only:
self.connection.removeClient()
@@ -332,9 +334,9 @@ class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
pass
class BitBakeServer(BitBakeBaseServer):
def initServer(self, interface = ("localhost", 0), single_use = False):
def initServer(self, interface = ("localhost", 0)):
self.interface = interface
self.serverImpl = XMLRPCServer(interface, single_use)
self.serverImpl = XMLRPCServer(interface)
def detach(self):
daemonize.createDaemon(self.serverImpl.serve_forever, "bitbake-cookerdaemon.log")


@@ -4,7 +4,6 @@ import os
import re
import tempfile
import bb.data
from bb.checksum import FileChecksumCache
logger = logging.getLogger('BitBake.SigGen')
@@ -38,7 +37,6 @@ class SignatureGenerator(object):
self.taskhash = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
def finalise(self, fn, d, varient):
return
@@ -46,8 +44,7 @@ class SignatureGenerator(object):
def get_taskhash(self, fn, task, deps, dataCache):
return "0"
def writeout_file_checksum_cache(self):
"""Write/update the file checksum cache onto disk"""
def set_taskdata(self, hashes, deps, checksum):
return
def stampfile(self, stampbase, file_name, taskname, extrainfo):
@@ -66,10 +63,10 @@ class SignatureGenerator(object):
return
def get_taskdata(self):
return (self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints)
return (self.runtaskdeps, self.taskhash, self.file_checksum_values)
def set_taskdata(self, data):
self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints = data
self.runtaskdeps, self.taskhash, self.file_checksum_values = data
class SignatureGeneratorBasic(SignatureGenerator):
@@ -90,12 +87,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE", True)
if checksum_cache_file:
self.checksum_cache = FileChecksumCache()
self.checksum_cache.init_cache(data, checksum_cache_file)
else:
self.checksum_cache = None
def init_rundepcheck(self, data):
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
@@ -155,7 +146,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
try:
taskdeps = self._build_data(fn, d)
except:
bb.warn("Error during finalise of %s" % fn)
bb.note("Error during finalise of %s" % fn)
raise
#Slow but can be useful for debugging mismatched basehashes
@@ -187,9 +178,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = []
self.file_checksum_values[k] = []
self.file_checksum_values[k] = {}
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
@@ -200,12 +190,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.runtaskdeps[k].append(dep)
if task in dataCache.file_checksums[fn]:
if self.checksum_cache:
checksums = self.checksum_cache.get_checksums(dataCache.file_checksums[fn][task], recipename)
else:
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
for (f,cs) in checksums:
self.file_checksum_values[k].append((f,cs))
self.file_checksum_values[k][f] = cs
if cs:
data = data + cs
@@ -221,29 +208,17 @@ class SignatureGeneratorBasic(SignatureGenerator):
if taint:
data = data + taint
self.taints[k] = taint
logger.warning("%s is tainted from a forced run" % k)
logger.warn("%s is tainted from a forced run" % k)
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
def writeout_file_checksum_cache(self):
"""Write/update the file checksum cache onto disk"""
if self.checksum_cache:
self.checksum_cache.save_extras()
self.checksum_cache.save_merge()
else:
bb.fetch2.fetcher_parse_save()
bb.fetch2.fetcher_parse_done()
def dump_sigtask(self, fn, task, stampbase, runtime):
k = fn + "." + task
referencestamp = stampbase
if isinstance(runtime, str) and runtime.startswith("customfile"):
if runtime == "customfile":
sigfile = stampbase
referencestamp = runtime[11:]
elif runtime and k in self.taskhash:
sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
else:
@@ -252,7 +227,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.utils.mkdirhier(os.path.dirname(sigfile))
data = {}
data['task'] = task
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
@@ -268,13 +242,12 @@ class SignatureGeneratorBasic(SignatureGenerator):
if runtime and k in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[k]
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[k]]
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[k].items()]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
data['taskhash'] = self.taskhash[k]
taint = self.read_taint(fn, task, referencestamp)
taint = self.read_taint(fn, task, stampbase)
if taint:
data['taint'] = taint
@@ -296,15 +269,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
pass
raise err
computed_basehash = calc_basehash(data)
if computed_basehash != self.basehash[k]:
bb.error("Basehash mismatch %s verses %s for %s" % (computed_basehash, self.basehash[k], k))
if k in self.taskhash:
computed_taskhash = calc_taskhash(data)
if computed_taskhash != self.taskhash[k]:
bb.error("Taskhash mismatch %s verses %s for %s" % (computed_taskhash, self.taskhash[k], k))
def dump_sigs(self, dataCache, options):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
@@ -344,8 +308,7 @@ def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
referencestamp = bb.build.stamp_internal(task, d, None, True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile:" + referencestamp)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")
def clean_basepath(a):
b = a.rsplit("/", 2)[1] + a.rsplit("/", 2)[2]
@@ -522,40 +485,6 @@ def compare_sigfiles(a, b, recursecb = None):
return output
def calc_basehash(sigdata):
task = sigdata['task']
basedata = sigdata['varvals'][task]
if basedata is None:
basedata = ''
alldeps = sigdata['taskdeps']
for dep in alldeps:
basedata = basedata + dep
val = sigdata['varvals'][dep]
if val is not None:
basedata = basedata + str(val)
return hashlib.md5(basedata).hexdigest()
def calc_taskhash(sigdata):
data = sigdata['basehash']
for dep in sigdata['runtaskdeps']:
data = data + sigdata['runtaskhashes'][dep]
for c in sigdata['file_checksum_values']:
data = data + c[1]
if 'taint' in sigdata:
if 'nostamp:' in sigdata['taint']:
data = data + sigdata['taint'][8:]
else:
data = data + sigdata['taint']
return hashlib.md5(data).hexdigest()
def dump_sigfile(a):
output = []
@@ -589,13 +518,17 @@ def dump_sigfile(a):
if 'taint' in a_data:
output.append("Tainted (by forced/invalidated task): %s" % a_data['taint'])
if 'task' in a_data:
computed_basehash = calc_basehash(a_data)
output.append("Computed base hash is %s and from file %s" % (computed_basehash, a_data['basehash']))
else:
output.append("Unable to compute base hash")
data = a_data['basehash']
for dep in a_data['runtaskdeps']:
data = data + a_data['runtaskhashes'][dep]
computed_taskhash = calc_taskhash(a_data)
output.append("Computed task hash is %s" % computed_taskhash)
for c in a_data['file_checksum_values']:
data = data + c[1]
if 'taint' in a_data:
data = data + a_data['taint']
h = hashlib.md5(data).hexdigest()
output.append("Computed Hash is %s" % h)
return output

View File

@@ -172,8 +172,6 @@ class TaskData:
if fnid in self.tasks_fnid:
return
self.add_extra_deps(fn, dataCache)
for task in task_deps['tasks']:
# Work out task dependencies
@@ -244,21 +242,6 @@ class TaskData:
self.fail_fnid(fnid)
return
def add_extra_deps(self, fn, dataCache):
func = dataCache.extradepsfunc.get(fn, None)
if func:
bb.providers.buildWorldTargetList(dataCache)
pn = dataCache.pkg_fn[fn]
params = {'deps': dataCache.deps[fn],
'world_target': dataCache.world_target,
'pkg_pn': dataCache.pkg_pn,
'self_pn': pn}
funcname = '_%s_calculate_extra_depends' % pn.replace('-', '_')
paramlist = ','.join(params.keys())
func = 'def %s(%s):\n%s\n\n%s(%s)' % (funcname, paramlist, func, funcname, paramlist)
bb.utils.better_exec(func, params)
def have_build_target(self, target):
"""
Have we a build target matching this name?
@@ -446,14 +429,7 @@ class TaskData:
return
if not item in dataCache.providers:
close_matches = self.get_close_matches(item, dataCache.providers.keys())
# Is it in RuntimeProviders ?
all_p = bb.providers.getRuntimeProviders(dataCache, item)
for fn in all_p:
new = dataCache.pkg_fn[fn] + " RPROVIDES " + item
if new not in close_matches:
close_matches.append(new)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item), close_matches=close_matches), cfgData)
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item), close_matches=self.get_close_matches(item, dataCache.providers.keys())), cfgData)
raise bb.providers.NoProvider(item)
if self.have_build_target(item):
@@ -636,16 +612,17 @@ class TaskData:
break
# self.dump_data()
def get_providermap(self, prefix=None):
provmap = {}
def get_providermap(self):
virts = []
virtmap = {}
for name in self.build_names_index:
if prefix and not name.startswith(prefix):
continue
if self.have_build_target(name):
provider = self.get_provider(name)
if provider:
provmap[name] = self.fn_index[provider[0]]
return provmap
if name.startswith("virtual/"):
virts.append(name)
for v in virts:
if self.have_build_target(v):
virtmap[v] = self.fn_index[self.get_provider(v)[0]]
return virtmap
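One side of the hunk above filters build names by an optional prefix while the other only maps virtual/* targets. A rough standalone sketch of the prefix-filtering variant, decoupled from TaskData (the callables below are stand-ins for illustration, not BitBake APIs):

def build_providermap(build_names, have_build_target, get_provider, prefix=None):
    # Keep only names matching the optional prefix and map each buildable
    # name to the filename of its first provider.
    provmap = {}
    for name in build_names:
        if prefix and not name.startswith(prefix):
            continue
        if have_build_target(name):
            provider = get_provider(name)
            if provider:
                provmap[name] = provider[0]
    return provmap

names = ["virtual/kernel", "busybox", "virtual/libc"]
fn_for = lambda n: ["/layers/meta/recipes/%s.bb" % n.replace("/", "-")]
print(build_providermap(names, lambda n: True, fn_for))                     # all three names
print(build_providermap(names, lambda n: True, fn_for, prefix="virtual/"))  # virtual/* only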
def dump_data(self):
"""

View File

@@ -293,16 +293,11 @@ bb.data.getVar(a(), d, False)
def test_python(self):
self.d.setVar("FOO", self.pydata)
self.setEmptyVars(["inexpand", "a", "test2", "test"])
self.d.setVarFlags("FOO", {
"func": True,
"python": True,
"lineno": 1,
"filename": "example.bb",
})
self.d.setVarFlags("FOO", {"func": True, "python": True})
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
self.assertEquals(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
shelldata = """
@@ -349,7 +344,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
self.assertEquals(deps, set(["somevar", "inverted"] + execs))
def test_vardeps(self):
@@ -359,7 +354,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
self.assertEquals(deps, set(["oe_libinstall"]))
def test_vardeps_expand(self):
self.d.setVar("oe_libinstall", "echo test")
@@ -368,7 +363,7 @@ esac
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
self.assertEquals(deps, set(["oe_libinstall"]))
#Currently no wildcard support
#def test_vardeps_wildcards(self):

View File

@@ -34,14 +34,14 @@ class COWTestCase(unittest.TestCase):
from bb.COW import COWDictBase
a = COWDictBase.copy()
self.assertEqual(False, 'a' in a)
self.assertEquals(False, a.has_key('a'))
a['a'] = 'a'
a['b'] = 'b'
self.assertEqual(True, 'a' in a)
self.assertEqual(True, 'b' in a)
self.assertEqual('a', a['a'] )
self.assertEqual('b', a['b'] )
self.assertEquals(True, a.has_key('a'))
self.assertEquals(True, a.has_key('b'))
self.assertEquals('a', a['a'] )
self.assertEquals('b', a['b'] )
def testCopyCopy(self):
"""
@@ -60,31 +60,31 @@ class COWTestCase(unittest.TestCase):
c['a'] = 30
# test separation of the two instances
self.assertEqual(False, 'c' in c)
self.assertEqual(30, c['a'])
self.assertEqual(10, b['a'])
self.assertEquals(False, c.has_key('c'))
self.assertEquals(30, c['a'])
self.assertEquals(10, b['a'])
# test copy
b_2 = b.copy()
c_2 = c.copy()
self.assertEqual(False, 'c' in c_2)
self.assertEqual(10, b_2['a'])
self.assertEquals(False, c_2.has_key('c'))
self.assertEquals(10, b_2['a'])
b_2['d'] = 40
self.assertEqual(False, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(False, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
c_2['d'] = 30
self.assertEqual(True, 'd' in c_2)
self.assertEqual(True, 'd' in b_2)
self.assertEqual(30, c_2['d'])
self.assertEqual(40, b_2['d'])
self.assertEqual(False, 'd' in b)
self.assertEqual(False, 'd' in c)
self.assertEquals(True, c_2.has_key('d'))
self.assertEquals(True, b_2.has_key('d'))
self.assertEquals(30, c_2['d'])
self.assertEquals(40, b_2['d'])
self.assertEquals(False, b.has_key('d'))
self.assertEquals(False, c.has_key('d'))
# test copy of the copy
c_3 = c_2.copy()
@@ -92,19 +92,19 @@ class COWTestCase(unittest.TestCase):
b_3_2 = b_2.copy()
c_3['e'] = 4711
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(False, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(False, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
b_3['e'] = 'viel'
self.assertEqual('viel', b_3['e'])
self.assertEqual(4711, c_3['e'])
self.assertEqual(False, 'e' in c_2)
self.assertEqual(True, 'e' in b_3)
self.assertEqual(False, 'e' in b_3_2)
self.assertEqual(False, 'e' in b_2)
self.assertEquals('viel', b_3['e'])
self.assertEquals(4711, c_3['e'])
self.assertEquals(False, c_2.has_key('e'))
self.assertEquals(True, b_3.has_key('e'))
self.assertEquals(False, b_3_2.has_key('e'))
self.assertEquals(False, b_2.has_key('e'))
def testCow(self):
from bb.COW import COWDictBase
@@ -115,12 +115,12 @@ class COWTestCase(unittest.TestCase):
copy = c.copy()
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1027, copy['123'])
self.assertEqual(4711, copy['other'])
self.assertEqual({'abc':10, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1027, copy['123'])
self.assertEquals(4711, copy['other'])
self.assertEquals({'abc':10, 'bcd':20}, copy['d'])
# cow it now
copy['123'] = 1028
@@ -128,9 +128,9 @@ class COWTestCase(unittest.TestCase):
copy['d']['abc'] = 20
self.assertEqual(1027, c['123'])
self.assertEqual(4711, c['other'])
self.assertEqual({'abc':10, 'bcd':20}, c['d'])
self.assertEqual(1028, copy['123'])
self.assertEqual(4712, copy['other'])
self.assertEqual({'abc':20, 'bcd':20}, copy['d'])
self.assertEquals(1027, c['123'])
self.assertEquals(4711, c['other'])
self.assertEquals({'abc':10, 'bcd':20}, c['d'])
self.assertEquals(1028, copy['123'])
self.assertEquals(4712, copy['other'])
self.assertEquals({'abc':20, 'bcd':20}, copy['d'])
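For orientation, the copy-on-write behaviour these tests assert can be reproduced in a few lines against bb.COW directly; a minimal sketch, assuming a BitBake checkout on the Python path:

from bb.COW import COWDictBase

base = COWDictBase.copy()       # create a fresh copy-on-write dictionary
base['answer'] = 42
child = base.copy()             # cheap copy; values are shared until written
child['answer'] = 43            # writing to the copy leaves the original untouched
print(base['answer'], child['answer'])    # 42 43
print('answer' in base, 'other' in base)  # True False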

View File

@@ -80,11 +80,6 @@ class DataExpansions(unittest.TestCase):
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")
def test_python_unexpanded(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@d.getVar('foo', True) + ' ${bar}'}")
self.assertEqual(str(val), "${@d.getVar('foo', True) + ' ${unsetvar}'}")
def test_python_snippet_syntax_error(self):
self.d.setVar("FOO", "${@foo = 5}")
self.assertRaises(bb.data_smart.ExpansionError, self.d.getVar, "FOO", True)
@@ -399,13 +394,13 @@ class TestFlags(unittest.TestCase):
self.d.setVarFlag("foo", "flag2", "value of flag2")
def test_setflag(self):
self.assertEqual(self.d.getVarFlag("foo", "flag1", False), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2", False), "value of flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), "value of flag2")
def test_delflag(self):
self.d.delVarFlag("foo", "flag2")
self.assertEqual(self.d.getVarFlag("foo", "flag1", False), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2", False), None)
self.assertEqual(self.d.getVarFlag("foo", "flag1"), "value of flag1")
self.assertEqual(self.d.getVarFlag("foo", "flag2"), None)
class Contains(unittest.TestCase):

View File

@@ -22,7 +22,6 @@
import unittest
import tempfile
import subprocess
import collections
import os
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
@@ -134,10 +133,10 @@ class URITest(unittest.TestCase):
'userinfo': 'anoncvs:anonymous',
'username': 'anoncvs',
'password': 'anonymous',
'params': collections.OrderedDict([
('tag', 'V0-99-81'),
('module', 'familiar/dist/ipkg')
]),
'params': {
'tag': 'V0-99-81',
'module': 'familiar/dist/ipkg'
},
'query': {},
'relative': False
},
@@ -229,38 +228,7 @@ class URITest(unittest.TestCase):
'params': {},
'query': {},
'relative': False
},
"http://somesite.net;someparam=1": {
'uri': 'http://somesite.net;someparam=1',
'scheme': 'http',
'hostname': 'somesite.net',
'port': None,
'hostport': 'somesite.net',
'path': '',
'userinfo': '',
'username': '',
'password': '',
'params': {"someparam" : "1"},
'query': {},
'relative': False
},
"file://somelocation;someparam=1": {
'uri': 'file:somelocation;someparam=1',
'scheme': 'file',
'hostname': '',
'port': None,
'hostport': '',
'path': 'somelocation',
'userinfo': '',
'username': '',
'password': '',
'params': {"someparam" : "1"},
'query': {},
'relative': True
}
}
def test_uri(self):
@@ -451,7 +419,7 @@ class MirrorUriTest(FetcherTest):
class FetcherLocalTest(FetcherTest):
def setUp(self):
def touch(fn):
with open(fn, 'a'):
with file(fn, 'a'):
os.utime(fn, None)
super(FetcherLocalTest, self).setUp()
@@ -483,7 +451,9 @@ class FetcherLocalTest(FetcherTest):
def test_local_wildcard(self):
tree = self.fetchUnpack(['file://a', 'file://dir/*'])
self.assertEqual(tree, ['a', 'dir/c', 'dir/d', 'dir/subdir/e'])
# FIXME: this is broken - it should return ['a', 'dir/c', 'dir/d', 'dir/subdir/e']
# see https://bugzilla.yoctoproject.org/show_bug.cgi?id=6128
self.assertEqual(tree, ['a', 'b', 'dir/c', 'dir/d', 'dir/subdir/e'])
def test_local_dir(self):
tree = self.fetchUnpack(['file://a', 'file://dir'])
@@ -491,15 +461,17 @@ class FetcherLocalTest(FetcherTest):
def test_local_subdir(self):
tree = self.fetchUnpack(['file://dir/subdir'])
self.assertEqual(tree, ['dir/subdir/e'])
# FIXME: this is broken - it should return ['dir/subdir/e']
# see https://bugzilla.yoctoproject.org/show_bug.cgi?id=6129
self.assertEqual(tree, ['subdir/e'])
def test_local_subdir_file(self):
tree = self.fetchUnpack(['file://dir/subdir/e'])
self.assertEqual(tree, ['dir/subdir/e'])
def test_local_subdirparam(self):
tree = self.fetchUnpack(['file://a;subdir=bar', 'file://dir;subdir=foo/moo'])
self.assertEqual(tree, ['bar/a', 'foo/moo/dir/c', 'foo/moo/dir/d', 'foo/moo/dir/subdir/e'])
tree = self.fetchUnpack(['file://a;subdir=bar'])
self.assertEqual(tree, ['bar/a'])
def test_local_deepsubdirparam(self):
tree = self.fetchUnpack(['file://dir/subdir/e;subdir=bar'])
@@ -612,68 +584,54 @@ class FetcherNetworkTest(FetcherTest):
os.chdir(os.path.dirname(self.unpackdir))
fetcher.unpack(self.unpackdir)
def test_trusted_network(self):
# Ensure trusted_network returns True when the host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
class TrustedNetworksTest(FetcherTest):
def test_trusted_network(self):
# Ensure trusted_network returns True when the host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_wild_trusted_network(self):
# Ensure trusted_network returns true when the *.host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_wild_trusted_network(self):
# Ensure trusted_network returns true when the *.host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_two_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://something.git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_two_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://something.git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
def test_port_trusted_network(self):
# Ensure trusted_network returns True, even if the url specifies a port.
url = "git://someserver.org:8080/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "someserver.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_wild_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://*.someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
def test_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
def test_wild_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://*.someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
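The assertions above pin down how BB_ALLOWED_NETWORKS is matched: host comparison is case-insensitive, a port in the URL is ignored, and a "*.host" entry matches any subdomain depth. A toy model of just those rules (not the real bb.fetch.trusted_network code):

def host_is_trusted(host, allowed_networks):
    # host as it appears in the URL; allowed_networks as the space-separated
    # value of BB_ALLOWED_NETWORKS.
    host = host.lower().split(":")[0]
    for entry in allowed_networks.lower().split():
        if entry == host:
            return True
        if entry.startswith("*.") and host.endswith(entry[1:]):
            return True
    return False

print(host_is_trusted("Someserver.org:8080", "server1.org someserver.org"))      # True
print(host_is_trusted("something.git.someserver.org", "*.someserver.org"))       # True
print(host_is_trusted("someserver.org", "server1.org server2.org server3.org"))  # False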
class URLHandle(unittest.TestCase):
datatable = {
"http://www.google.com/index.html" : ('http', 'www.google.com', '/index.html', '', '', {}),
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', collections.OrderedDict([('tag', 'V0-99-81'), ('module', 'familiar/dist/ipkg')])),
"git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'}),
"file://somelocation;someparam=1": ('file', '', 'somelocation', '', '', {'someparam': '1'}),
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}),
"git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'})
}
# we require a pathname to encodeurl but users can still pass such urls to
# decodeurl and we need to handle them
decodedata = datatable.copy()
decodedata.update({
"http://somesite.net;someparam=1": ('http', 'somesite.net', '', '', '', {'someparam': '1'}),
})
def test_decodeurl(self):
for k, v in self.decodedata.items():
for k, v in self.datatable.items():
result = bb.fetch.decodeurl(k)
self.assertEqual(result, v)
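As a concrete reading of the tuples in the datatable above, decodeurl splits a URL into (scheme, host, path, user, password, params). A minimal example using the decodeurl helper from bb.fetch2 (assuming BitBake's lib directory is importable):

from bb.fetch2 import decodeurl

url = "cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg"
scheme, host, path, user, password, params = decodeurl(url)
print(scheme, host, path)   # cvs cvs.handhelds.org /cvs
print(user, password)       # anoncvs anonymous
print(params)               # tag/module parameters as a mapping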
@@ -705,7 +663,7 @@ class FetchLatestVersionTest(FetcherTest):
# version pattern "yyyymmdd"
("mobile-broadband-provider-info", "git://git.gnome.org/mobile-broadband-provider-info", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
# packages with a valid GITTAGREGEX
("xf86-video-omap", "git://anongit.freedesktop.org/xorg/driver/xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
: "0.4.3",
("build-appliance-image", "git://git.yoctoproject.org/poky", "b37dd451a52622d5b570183a81583cc34c2ff555", "(?P<pver>(([0-9][\.|_]?)+[0-9]))")
@@ -747,7 +705,7 @@ class FetchLatestVersionTest(FetcherTest):
for k, v in self.test_git_uris.items():
self.d.setVar("PN", k[0])
self.d.setVar("SRCREV", k[2])
self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
self.d.setVar("GITTAGREGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver= ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
@@ -757,8 +715,8 @@ class FetchLatestVersionTest(FetcherTest):
def test_wget_latest_versionstring(self):
for k, v in self.test_wget_uris.items():
self.d.setVar("PN", k[0])
self.d.setVar("UPSTREAM_CHECK_URI", k[2])
self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
self.d.setVar("REGEX_URI", k[2])
self.d.setVar("REGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
@@ -768,7 +726,6 @@ class FetchLatestVersionTest(FetcherTest):
class FetchCheckStatusTest(FetcherTest):
test_wget_uris = ["http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2",
"http://www.cups.org/software/ipptool/ipptool-20130731-linux-ubuntu-i686.tar.gz",
"http://www.cups.org/",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",

View File

@@ -23,7 +23,6 @@ import unittest
import bb
import os
import tempfile
import re
class VerCmpString(unittest.TestCase):
@@ -177,7 +176,7 @@ do_functionname() {
# Test file doesn't get modified with some of the same values
self._testeditfile({'THIS': ('that', None, 0, True),
'OTHER': ('anothervalue', None, 0, True),
'MULTILINE3': (' c1 c2 c3 ', None, 4, False)}, self._origfile)
'MULTILINE3': (' c1 c2 c3', None, 4, False)}, self._origfile)
def test_edit_metadata_file_1(self):
@@ -378,27 +377,6 @@ do_functionname() {
self.assertTrue(updated, 'List should be updated but isn\'t')
self.assertEqual(newlines, newfile5.splitlines(True))
# Make sure the orig value matches what we expect it to be
def test_edit_metadata_origvalue(self):
origfile = """
MULTILINE = " stuff \\
morestuff"
"""
expected_value = "stuff morestuff"
global value_in_callback
value_in_callback = ""
def handle_var(varname, origvalue, op, newlines):
global value_in_callback
value_in_callback = origvalue
return (origvalue, op, -1, False)
bb.utils.edit_metadata(origfile.splitlines(True),
['MULTILINE'],
handle_var)
testvalue = re.sub('\s+', ' ', value_in_callback.strip())
self.assertEqual(expected_value, testvalue)
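The test above also shows the bb.utils.edit_metadata callback contract: the handler receives (varname, origvalue, op, newlines) and returns (newvalue, newop, indent, minbreak). A minimal sketch with made-up variable names, assuming BitBake's lib directory is importable:

import bb.utils

lines = ['GOODVAR = "old"\n', 'OTHERVAR = "x"\n']

def handle_var(varname, origvalue, op, newlines):
    if varname == 'GOODVAR':
        return ('new', op, -1, False)    # rewrite this value, keep the operator
    return (origvalue, op, -1, False)    # leave everything else untouched

updated, newlines = bb.utils.edit_metadata(lines, ['GOODVAR', 'OTHERVAR'], handle_var)
print(updated)               # True, because GOODVAR changed
print(''.join(newlines), end='')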
class EditBbLayersConf(unittest.TestCase):

View File

@@ -24,7 +24,6 @@ import os
os.environ["DJANGO_SETTINGS_MODULE"] = "toaster.toastermain.settings"
import django
from django.utils import timezone
@@ -34,23 +33,20 @@ def _configure_toaster():
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'toaster'))
_configure_toaster()
django.setup()
from orm.models import Build, Task, Recipe, Layer_Version, Layer, Target, LogMessage, HelpText
from orm.models import Target_Image_File, BuildArtifact
from orm.models import Variable, VariableHistory
from orm.models import Package, Package_File, Target_Installed_Package, Target_File
from orm.models import Task_Dependency, Package_Dependency
from orm.models import Recipe_Dependency, Provides
from orm.models import Project, CustomImagePackage, CustomImageRecipe
from toaster.orm.models import Build, Task, Recipe, Layer_Version, Layer, Target, LogMessage, HelpText
from toaster.orm.models import Target_Image_File, BuildArtifact
from toaster.orm.models import Variable, VariableHistory
from toaster.orm.models import Package, Package_File, Target_Installed_Package, Target_File
from toaster.orm.models import Task_Dependency, Package_Dependency
from toaster.orm.models import Recipe_Dependency
from toaster.orm.models import Project
from bldcontrol.models import BuildEnvironment, BuildRequest
from bb.msg import BBLogFormatter as formatter
from django.db import models
from pprint import pformat
import logging
from datetime import datetime, timedelta
from django.db import transaction, connection
@@ -121,12 +117,6 @@ class ORMWrapper(object):
return vars(self)[dictname][key]
def _timestamp_to_datetime(self, secs):
"""
Convert timestamp in seconds to Python datetime
"""
return datetime(1970, 1, 1) + timedelta(seconds=secs)
# pylint: disable=no-self-use
# we disable detection of no self use in functions because the methods actually work on the object
# even if they don't touch self anywhere
@@ -156,7 +146,7 @@ class ORMWrapper(object):
prj = Project.objects.get(pk = project_id)
else: # this build was triggered by a legacy system, or command line interactive mode
prj = Project.objects.get_or_create_default_project()
prj = Project.objects.get_default_project()
logger.debug(1, "buildinfohelper: project is not specified, defaulting to %s" % prj)
@@ -218,15 +208,6 @@ class ORMWrapper(object):
assert isinstance(errors, int)
assert isinstance(warnings, int)
if build.outcome == Build.CANCELLED:
return
try:
if build.buildrequest.state == BuildRequest.REQ_CANCELLING:
return
except AttributeError:
# We may not have a buildrequest if this is a command line build
pass
outcome = Build.SUCCEEDED
if errors or taskfailures:
outcome = Build.FAILED
@@ -239,30 +220,6 @@ class ORMWrapper(object):
target.license_manifest_path = license_manifest_path
target.save()
def update_task_object(self, build, task_name, recipe_name, task_stats):
"""
Find the task for build which matches the recipe and task name
to be stored
"""
task_to_update = Task.objects.get(
build = build,
task_name = task_name,
recipe__name = recipe_name
)
if 'started' in task_stats and 'ended' in task_stats:
task_to_update.started = self._timestamp_to_datetime(task_stats['started'])
task_to_update.ended = self._timestamp_to_datetime(task_stats['ended'])
task_to_update.elapsed_time = (task_stats['ended'] - task_stats['started'])
task_to_update.cpu_time_user = task_stats.get('cpu_time_user')
task_to_update.cpu_time_system = task_stats.get('cpu_time_system')
if 'disk_io_read' in task_stats and 'disk_io_write' in task_stats:
task_to_update.disk_io_read = task_stats['disk_io_read']
task_to_update.disk_io_write = task_stats['disk_io_write']
task_to_update.disk_io = task_stats['disk_io_read'] + task_stats['disk_io_write']
task_to_update.save()
def get_update_task_object(self, task_information, must_exist = False):
assert 'build' in task_information
assert 'recipe' in task_information
@@ -299,6 +256,14 @@ class ORMWrapper(object):
task_object.sstate_result = Task.SSTATE_FAILED
object_changed = True
# mark down duration if we have a start time and a current time
if 'start_time' in task_information.keys() and 'end_time' in task_information.keys():
duration = task_information['end_time'] - task_information['start_time']
task_object.elapsed_time = duration
object_changed = True
del task_information['start_time']
del task_information['end_time']
if object_changed:
task_object.save()
return task_object
@@ -339,27 +304,18 @@ class ORMWrapper(object):
break
# If we're in analysis mode or if this is a custom recipe
# then we are wholly responsible for the data
# If we're in analysis mode then we are wholly responsible for the data
# and therefore we return the 'real' recipe rather than the build
# history copy of the recipe.
if recipe_information['layer_version'].build is not None and \
recipe_information['layer_version'].build.project == \
Project.objects.get_or_create_default_project():
return recipe
if built_recipe is None:
Project.objects.get_default_project():
return recipe
return built_recipe
def get_update_layer_version_object(self, build_obj, layer_obj, layer_version_information):
if isinstance(layer_obj, Layer_Version):
# Special case the toaster-custom-images layer which is created
# on the fly so don't update the values which may cause the layer
# to be duplicated on a future get_or_create
if layer_obj.layer.name == CustomImageRecipe.LAYER_NAME:
return layer_obj
# We already found our layer version for this build so just
# update it with the new build information
logger.debug("We found our layer from toaster")
@@ -369,17 +325,12 @@ class ORMWrapper(object):
# create a new copy of this layer version as a snapshot for
# historical purposes
layer_copy, c = Layer_Version.objects.get_or_create(
build=build_obj,
layer=layer_obj.layer,
up_branch=layer_obj.up_branch,
branch=layer_version_information['branch'],
commit=layer_version_information['commit'],
local_path=layer_version_information['local_path'],
)
logger.info("created new historical layer version %d",
layer_copy.pk)
layer_copy, c = Layer_Version.objects.get_or_create(build=build_obj,
layer=layer_obj.layer,
commit=layer_version_information['commit'],
local_path = layer_version_information['local_path'],
)
logger.info("created new historical layer version %d", layer_copy.pk)
self.layer_version_built.append(layer_copy)
@@ -395,7 +346,7 @@ class ORMWrapper(object):
# If we're doing a command line build then associate this new layer with the
# project to avoid it 'contaminating' toaster data
project = None
if build_obj.project == Project.objects.get_or_create_default_project():
if build_obj.project == Project.objects.get_default_project():
project = build_obj.project
layer_version_object, _ = Layer_Version.objects.get_or_create(
@@ -494,7 +445,7 @@ class ORMWrapper(object):
parent_obj = self._cached_get(Target_File, target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
tf_obj = Target_File.objects.create(
target = target_obj,
path = unicode(path, 'utf-8'),
path = path,
size = size,
inodetype = Target_File.ITYPE_DIRECTORY,
permission = permission,
@@ -519,7 +470,7 @@ class ORMWrapper(object):
tf_obj = Target_File.objects.create(
target = target_obj,
path = unicode(path, 'utf-8'),
path = path,
size = size,
inodetype = inodetype,
permission = permission,
@@ -550,9 +501,7 @@ class ORMWrapper(object):
filetarget_path = "/".join(fcpl)
try:
filetarget_obj = Target_File.objects.get(
target = target_obj,
path = unicode(filetarget_path, 'utf-8'))
filetarget_obj = Target_File.objects.get(target = target_obj, path = filetarget_path)
except Target_File.DoesNotExist:
# we might have an invalid link; no way to detect this. just set it to None
filetarget_obj = None
@@ -561,7 +510,7 @@ class ORMWrapper(object):
tf_obj = Target_File.objects.create(
target = target_obj,
path = unicode(path, 'utf-8'),
path = path,
size = size,
inodetype = Target_File.ITYPE_SYMLINK,
permission = permission,
@@ -571,14 +520,12 @@ class ORMWrapper(object):
sym_target = filetarget_obj)
def save_target_package_information(self, build_obj, target_obj, packagedict, pkgpnmap, recipes, built_package=False):
def save_target_package_information(self, build_obj, target_obj, packagedict, pkgpnmap, recipes):
assert isinstance(build_obj, Build)
assert isinstance(target_obj, Target)
errormsg = ""
for p in packagedict:
# Search name switches round the installed name vs package name
# by default installed name == package name
searchname = p
if p not in pkgpnmap:
logger.warning("Image packages list contains %p, but is"
@@ -589,38 +536,11 @@ class ORMWrapper(object):
if 'OPKGN' in pkgpnmap[p].keys():
searchname = pkgpnmap[p]['OPKGN']
built_recipe = recipes[pkgpnmap[p]['PN']]
if built_package:
packagedict[p]['object'], created = Package.objects.get_or_create( build = build_obj, name = searchname )
recipe = built_recipe
else:
packagedict[p]['object'], created = \
CustomImagePackage.objects.get_or_create(name=searchname)
# Clear the Package_Dependency objects as we're going to update
# the CustomImagePackage with the latest dependency information
packagedict[p]['object'].package_dependencies_target.all().delete()
packagedict[p]['object'].package_dependencies_source.all().delete()
try:
recipe = self._cached_get(
Recipe,
name=built_recipe.name,
layer_version__build=None,
layer_version__up_branch=
built_recipe.layer_version.up_branch,
file_path=built_recipe.file_path,
version=built_recipe.version
)
except (Recipe.DoesNotExist,
Recipe.MultipleObjectsReturned) as e:
logger.info("We did not find one recipe for the"
"configuration data package %s %s" % (p, e))
continue
packagedict[p]['object'], created = Package.objects.get_or_create( build = build_obj, name = searchname )
if created or packagedict[p]['object'].size == -1: # save the data anyway we can, not just if it was not created here; bug [YOCTO #6887]
# fill in everything we can from the runtime-reverse package data
try:
packagedict[p]['object'].recipe = recipe
packagedict[p]['object'].recipe = recipes[pkgpnmap[p]['PN']]
packagedict[p]['object'].version = pkgpnmap[p]['PV']
packagedict[p]['object'].installed_name = p
packagedict[p]['object'].revision = pkgpnmap[p]['PR']
@@ -646,8 +566,7 @@ class ORMWrapper(object):
packagedict[p]['object'].installed_size = packagedict[p]['size']
packagedict[p]['object'].save()
if built_package:
Target_Installed_Package.objects.create(target = target_obj, package = packagedict[p]['object'])
Target_Installed_Package.objects.create(target = target_obj, package = packagedict[p]['object'])
packagedeps_objs = []
for p in packagedict:
@@ -664,8 +583,8 @@ class ORMWrapper(object):
dep_type = tdeptype,
target = target_obj))
except KeyError as e:
logger.warning("Could not add dependency to the package %s "
"because %s is an unknown package", p, px)
logger.warn("Could not add dependency to the package %s "
"because %s is an unknown package", p, px)
if len(packagedeps_objs) > 0:
Package_Dependency.objects.bulk_create(packagedeps_objs)
@@ -673,7 +592,7 @@ class ORMWrapper(object):
logger.info("No package dependencies created")
if len(errormsg) > 0:
logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
logger.warn("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
def save_target_image_file_information(self, target_obj, file_name, file_size):
Target_Image_File.objects.create( target = target_obj,
@@ -708,37 +627,19 @@ class ORMWrapper(object):
return log_object.save()
def save_build_package_information(self, build_obj, package_info, recipes,
built_package):
# assert isinstance(build_obj, Build)
def save_build_package_information(self, build_obj, package_info, recipes):
assert isinstance(build_obj, Build)
# create and save the object
pname = package_info['PKG']
built_recipe = recipes[package_info['PN']]
if 'OPKGN' in package_info.keys():
pname = package_info['OPKGN']
if built_package:
bp_object, _ = Package.objects.get_or_create( build = build_obj,
name = pname )
recipe = built_recipe
else:
bp_object, created = \
CustomImagePackage.objects.get_or_create(name=pname)
try:
recipe = self._cached_get(Recipe,
name=built_recipe.name,
layer_version__build=None,
file_path=built_recipe.file_path,
version=built_recipe.version)
except (Recipe.DoesNotExist, Recipe.MultipleObjectsReturned):
logger.debug("We did not find one recipe for the configuration"
"data package %s" % pname)
return
bp_object, _ = Package.objects.get_or_create( build = build_obj,
name = pname )
bp_object.installed_name = package_info['PKG']
bp_object.recipe = recipe
bp_object.recipe = recipes[package_info['PN']]
bp_object.version = package_info['PKGV']
bp_object.revision = package_info['PKGR']
bp_object.summary = package_info['SUMMARY']
@@ -758,12 +659,7 @@ class ORMWrapper(object):
Package_File.objects.bulk_create(packagefile_objects)
def _po_byname(p):
if built_package:
pkg, created = Package.objects.get_or_create(build=build_obj,
name=p)
else:
pkg, created = CustomImagePackage.objects.get_or_create(name=p)
pkg, created = Package.objects.get_or_create(build = build_obj, name = p)
if created:
pkg.size = -1
pkg.save()
@@ -804,6 +700,7 @@ class ORMWrapper(object):
def save_build_variables(self, build_obj, vardump):
assert isinstance(build_obj, Build)
helptext_objects = []
for k in vardump:
desc = vardump[k]['doc']
if desc is None:
@@ -814,9 +711,10 @@ class ORMWrapper(object):
if desc is None:
desc = ''
if len(desc):
HelpText.objects.get_or_create(build=build_obj,
area=HelpText.VARIABLE,
key=k, text=desc)
helptext_objects.append(HelpText(build=build_obj,
area=HelpText.VARIABLE,
key=k,
text=desc))
if not bool(vardump[k]['func']):
value = vardump[k]['v']
if value is None:
@@ -836,6 +734,8 @@ class ORMWrapper(object):
if len(varhist_objects):
VariableHistory.objects.bulk_create(varhist_objects)
HelpText.objects.bulk_create(helptext_objects)
class MockEvent(object):
""" This object is used to create event, for which normal event-processing methods can
@@ -862,7 +762,7 @@ class BuildInfoHelper(object):
# pylint: disable=bad-continuation
# we do not follow the python conventions for continuation indentation due to long lines here
def __init__(self, server, has_build_history = False, brbe = None):
def __init__(self, server, has_build_history = False):
self.internal_state = {}
self.internal_state['taskdata'] = {}
self.internal_state['targets'] = []
@@ -875,13 +775,8 @@ class BuildInfoHelper(object):
self.orm_wrapper = ORMWrapper()
self.has_build_history = has_build_history
self.tmp_dir = self.server.runCommand(["getVariable", "TMPDIR"])[0]
# this is set for Toaster-triggered builds by localhostbecontroller
# via toasterui
self.brbe = brbe
self.project = None
self.brbe = self.server.runCommand(["getVariable", "TOASTER_BRBE"])[0]
self.project = self.server.runCommand(["getVariable", "TOASTER_PROJECT"])[0]
logger.debug(1, "buildinfohelper: Build info helper inited %s" % vars(self))
@@ -891,6 +786,8 @@ class BuildInfoHelper(object):
def _get_build_information(self, build_log_path):
build_info = {}
# Generate an identifier for each new build
build_info['machine'] = self.server.runCommand(["getVariable", "MACHINE"])[0]
build_info['distro'] = self.server.runCommand(["getVariable", "DISTRO"])[0]
build_info['distro_version'] = self.server.runCommand(["getVariable", "DISTRO_VERSION"])[0]
@@ -899,7 +796,7 @@ class BuildInfoHelper(object):
build_info['cooker_log_path'] = build_log_path
build_info['build_name'] = self.server.runCommand(["getVariable", "BUILDNAME"])[0]
build_info['bitbake_version'] = self.server.runCommand(["getVariable", "BB_VERSION"])[0]
build_info['project'] = self.project = self.server.runCommand(["getVariable", "TOASTER_PROJECT"])[0]
return build_info
def _get_task_information(self, event, recipe):
@@ -921,18 +818,47 @@ class BuildInfoHelper(object):
assert path.startswith("/")
assert 'build' in self.internal_state
def _slkey_interactive(layer_version):
assert isinstance(layer_version, Layer_Version)
return len(layer_version.local_path)
if self.brbe is None:
def _slkey_interactive(layer_version):
assert isinstance(layer_version, Layer_Version)
return len(layer_version.local_path)
# Heuristics: we always match recipe to the deepest layer path in the discovered layers
for lvo in sorted(self.orm_wrapper.layer_version_objects, reverse=True, key=_slkey_interactive):
# we can match to the recipe file path
if path.startswith(lvo.local_path):
return lvo
# Heuristics: we always match recipe to the deepest layer path in the discovered layers
for lvo in sorted(self.orm_wrapper.layer_version_objects, reverse=True, key=_slkey_interactive):
# we can match to the recipe file path
if path.startswith(lvo.local_path):
return lvo
else:
br_id, be_id = self.brbe.split(":")
from bldcontrol.bbcontroller import getBuildEnvironmentController
bc = getBuildEnvironmentController(pk = be_id)
def _slkey_managed(layer_version):
return len(bc.getGitCloneDirectory(layer_version.giturl, layer_version.commit) + layer_version.dirpath)
# Heuristics: we match the path to where the layers have been checked out
for brl in sorted(BuildRequest.objects.get(pk = br_id).brlayer_set.all(), reverse = True, key = _slkey_managed):
localdirname = os.path.join(bc.getGitCloneDirectory(brl.giturl, brl.commit), brl.dirpath)
# we get a relative path, unless running in HEAD mode where the path is absolute
if not localdirname.startswith("/"):
localdirname = os.path.join(bc.be.sourcedir, localdirname)
if path.startswith(localdirname):
# If the build request came from toaster this field
# should contain the information from the layer_version
# That created this build request.
if brl.layer_version:
return brl.layer_version
#logger.warn("-- managed: matched path %s with layer %s " % (path, localdirname))
# we matched the BRLayer, but we need the layer_version that generated this br
for lvo in self.orm_wrapper.layer_version_objects:
if brl.name == lvo.layer.name:
return lvo
#if we get here, we didn't read layers correctly; dump whatever information we have on the error log
logger.warning("Could not match layer version for recipe path %s : %s", path, self.orm_wrapper.layer_version_objects)
logger.warn("Could not match layer version for recipe path %s : %s", path, self.orm_wrapper.layer_version_objects)
#mockup the new layer
unknown_layer, _ = Layer.objects.get_or_create(name="Unidentified layer", layer_index_url="")
@@ -964,10 +890,12 @@ class BuildInfoHelper(object):
def _get_path_information(self, task_object):
assert isinstance(task_object, Task)
build_stats_format = "{tmpdir}/buildstats/{buildname}/{package}/"
build_stats_format = "{tmpdir}/buildstats/{target}-{machine}/{buildname}/{package}/"
build_stats_path = []
for t in self.internal_state['targets']:
target = t.target
machine = self.internal_state['build'].machine
buildname = self.internal_state['build'].build_name
pe, pv = task_object.recipe.version.split(":",1)
if len(pe) > 0:
@@ -975,8 +903,8 @@ class BuildInfoHelper(object):
else:
package = task_object.recipe.name + "-" + pv
build_stats_path.append(build_stats_format.format(tmpdir=self.tmp_dir,
buildname=buildname,
build_stats_path.append(build_stats_format.format(tmpdir=self.tmp_dir, target=target,
machine=machine, buildname=buildname,
package=package))
return build_stats_path
@@ -1003,16 +931,13 @@ class BuildInfoHelper(object):
self.internal_state['lvs'][self.orm_wrapper.get_update_layer_object(layerinfos[layer], self.brbe)] = layerinfos[layer]['version']
self.internal_state['lvs'][self.orm_wrapper.get_update_layer_object(layerinfos[layer], self.brbe)]['local_path'] = layerinfos[layer]['local_path']
except NotExisting as nee:
logger.warning("buildinfohelper: cannot identify layer exception:%s ", nee)
logger.warn("buildinfohelper: cannot identify layer exception:%s ", nee)
def store_started_build(self, event, build_log_path):
assert '_pkgs' in vars(event)
build_information = self._get_build_information(build_log_path)
# Update brbe and project as they can be changed for every build
self.project = build_information['project']
build_obj = self.orm_wrapper.create_build_object(build_information, self.brbe, self.project)
self.internal_state['build'] = build_obj
@@ -1134,11 +1059,31 @@ class BuildInfoHelper(object):
def store_tasks_stats(self, event):
task_data = BuildInfoHelper._get_data_from_event(event)
for (taskfile, taskname, taskstats, recipename) in BuildInfoHelper._get_data_from_event(event):
localfilepath = taskfile.split(":")[-1]
assert localfilepath.startswith("/")
for (task_file, task_name, task_stats, recipe_name) in task_data:
build = self.internal_state['build']
self.orm_wrapper.update_task_object(build, task_name, recipe_name, task_stats)
recipe_information = self._get_recipe_information_from_taskfile(taskfile)
try:
if recipe_information['file_path'].startswith(recipe_information['layer_version'].local_path):
recipe_information['file_path'] = recipe_information['file_path'][len(recipe_information['layer_version'].local_path):].lstrip("/")
recipe_object = Recipe.objects.get(layer_version = recipe_information['layer_version'],
file_path__endswith = recipe_information['file_path'],
name = recipename)
except Recipe.DoesNotExist:
logger.error("Could not find recipe for recipe_information %s name %s" , pformat(recipe_information), recipename)
raise
task_information = {}
task_information['build'] = self.internal_state['build']
task_information['recipe'] = recipe_object
task_information['task_name'] = taskname
task_information['cpu_usage'] = taskstats['cpu_usage']
task_information['disk_io'] = taskstats['disk_io']
if 'elapsed_time' in taskstats:
task_information['elapsed_time'] = taskstats['elapsed_time']
self.orm_wrapper.get_update_task_object(task_information)
def update_and_store_task(self, event):
assert 'taskfile' in vars(event)
@@ -1160,6 +1105,13 @@ class BuildInfoHelper(object):
recipe = self.orm_wrapper.get_update_recipe_object(recipe_information, True)
task_information = self._get_task_information(event,recipe)
if 'time' in vars(event):
if not 'start_time' in self.internal_state['taskdata'][identifier]:
self.internal_state['taskdata'][identifier]['start_time'] = event.time
else:
task_information['end_time'] = event.time
task_information['start_time'] = self.internal_state['taskdata'][identifier]['start_time']
task_information['outcome'] = self.internal_state['taskdata'][identifier]['outcome']
if 'logfile' in vars(event):
@@ -1233,21 +1185,20 @@ class BuildInfoHelper(object):
for target in self.internal_state['targets']:
if target.is_image:
pkgdata = BuildInfoHelper._get_data_from_event(event)['pkgdata']
imgdata = BuildInfoHelper._get_data_from_event(event)['imgdata'].get(target.target, {})
filedata = BuildInfoHelper._get_data_from_event(event)['filedata'].get(target.target, {})
imgdata = BuildInfoHelper._get_data_from_event(event)['imgdata'][target.target]
filedata = BuildInfoHelper._get_data_from_event(event)['filedata'][target.target]
try:
self.orm_wrapper.save_target_package_information(self.internal_state['build'], target, imgdata, pkgdata, self.internal_state['recipes'], built_package=True)
self.orm_wrapper.save_target_package_information(self.internal_state['build'], target, imgdata.copy(), pkgdata, self.internal_state['recipes'], built_package=False)
self.orm_wrapper.save_target_package_information(self.internal_state['build'], target, imgdata, pkgdata, self.internal_state['recipes'])
except KeyError as e:
logger.warning("KeyError in save_target_package_information"
"%s ", e)
logger.warn("KeyError in save_target_package_information"
"%s ", e)
try:
self.orm_wrapper.save_target_file_information(self.internal_state['build'], target, filedata)
except KeyError as e:
logger.warning("KeyError in save_target_file_information"
"%s ", e)
logger.warn("KeyError in save_target_file_information"
"%s ", e)
@@ -1318,9 +1269,6 @@ class BuildInfoHelper(object):
for cls in event._depgraph['pn'][pn]['inherits']:
if cls.endswith('/image.bbclass'):
recipe.is_image = True
recipe_info['is_image'] = True
# Save the is_image state to the relevant recipe objects
self.orm_wrapper.get_update_recipe_object(recipe_info)
break
if recipe.is_image:
for t in self.internal_state['targets']:
@@ -1337,27 +1285,15 @@ class BuildInfoHelper(object):
# buildtime
recipedeps_objects = []
for recipe in event._depgraph['depends']:
target = self.internal_state['recipes'][recipe]
for dep in event._depgraph['depends'][recipe]:
if dep in assume_provided:
continue
via = None
if 'providermap' in event._depgraph and dep in event._depgraph['providermap']:
deprecipe = event._depgraph['providermap'][dep][0]
dependency = self.internal_state['recipes'][deprecipe]
via = Provides.objects.get_or_create(name=dep,
recipe=dependency)[0]
elif dep in self.internal_state['recipes']:
try:
target = self.internal_state['recipes'][recipe]
for dep in event._depgraph['depends'][recipe]:
dependency = self.internal_state['recipes'][dep]
else:
errormsg += " stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep)
continue
recipe_dep = Recipe_Dependency(recipe=target,
depends_on=dependency,
via=via,
dep_type=Recipe_Dependency.TYPE_DEPENDS)
recipedeps_objects.append(recipe_dep)
recipedeps_objects.append(Recipe_Dependency( recipe = target,
depends_on = dependency, dep_type = Recipe_Dependency.TYPE_DEPENDS))
except KeyError as e:
if e not in assume_provided and not str(e).startswith("virtual/"):
errormsg += " stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, e)
Recipe_Dependency.objects.bulk_create(recipedeps_objects)
# save all task information
@@ -1392,22 +1328,15 @@ class BuildInfoHelper(object):
Task_Dependency.objects.bulk_create(taskdeps_objects)
if len(errormsg) > 0:
logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", errormsg)
logger.warn("buildinfohelper: dependency info not identify recipes: \n%s", errormsg)
def store_build_package_information(self, event):
package_info = BuildInfoHelper._get_data_from_event(event)
self.orm_wrapper.save_build_package_information(
self.internal_state['build'],
package_info,
self.internal_state['recipes'],
built_package=True)
self.orm_wrapper.save_build_package_information(
self.internal_state['build'],
package_info,
self.internal_state['recipes'],
built_package=False)
self.orm_wrapper.save_build_package_information(self.internal_state['build'],
package_info,
self.internal_state['recipes'],
)
def _store_build_done(self, errorcode):
logger.info("Build exited with errorcode %d", errorcode)
@@ -1416,18 +1345,9 @@ class BuildInfoHelper(object):
be.lock = BuildEnvironment.LOCK_LOCK
be.save()
br = BuildRequest.objects.get(pk = br_id)
# if we're 'done' because we got cancelled update the build outcome
if br.state == BuildRequest.REQ_CANCELLING:
logger.info("Build cancelled")
br.build.outcome = Build.CANCELLED
br.build.save()
self.internal_state['build'] = br.build
errorcode = 0
if errorcode == 0:
# request archival of the project artifacts
br.state = BuildRequest.REQ_COMPLETED
br.state = BuildRequest.REQ_ARCHIVE
else:
br.state = BuildRequest.REQ_FAILED
br.save()
@@ -1514,8 +1434,3 @@ class BuildInfoHelper(object):
if not connection.features.autocommits_when_autocommit_is_off:
transaction.set_autocommit(True)
# unset the brbe; this is to prevent subsequent command-line builds
# being incorrectly attached to the previous Toaster-triggered build;
# see https://bugzilla.yoctoproject.org/show_bug.cgi?id=9021
self.brbe = None

View File

@@ -0,0 +1,437 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import pango
import gobject
import bb.process
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText, HobButton, HobInfoButton
from bb.ui.crumbs.runningbuild import RunningBuildTreeView
from bb.ui.crumbs.runningbuild import BuildFailureTreeView
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hobcolor import HobColors
class BuildConfigurationTreeView(gtk.TreeView):
def __init__ (self):
gtk.TreeView.__init__(self)
self.set_rules_hint(False)
self.set_headers_visible(False)
self.set_property("hover-expand", True)
self.get_selection().set_mode(gtk.SELECTION_SINGLE)
# The icon that indicates whether we're building or failed.
renderer0 = gtk.CellRendererText()
renderer0.set_property('font-desc', pango.FontDescription('courier bold 12'))
col0 = gtk.TreeViewColumn ("Name", renderer0, text=0)
self.append_column (col0)
# The message of configuration.
renderer1 = HobWarpCellRendererText(col_number=1)
col1 = gtk.TreeViewColumn ("Values", renderer1, text=1)
self.append_column (col1)
def set_vars(self, key="", var=[""]):
d = {}
if type(var) == str:
d = {key: [var]}
elif type(var) == list and len(var) > 1:
#create the sub item line
l = []
text = ""
for item in var:
text = " - " + item
l.append(text)
d = {key: var}
return d
def set_config_model(self, show_vars):
listmodel = gtk.TreeStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
parent = None
for var in show_vars:
for subitem in var.items():
name = subitem[0]
is_parent = True
for value in subitem[1]:
if is_parent:
parent = listmodel.append(parent, (name, value))
is_parent = False
else:
listmodel.append(parent, (None, value))
name = " - "
parent = None
# renew the tree model after getting the configuration messages
self.set_model(listmodel)
def show(self, src_config_info, src_params):
vars = []
vars.append(self.set_vars("BB version:", src_params.bb_version))
vars.append(self.set_vars("Target arch:", src_params.target_arch))
vars.append(self.set_vars("Target OS:", src_params.target_os))
vars.append(self.set_vars("Machine:", src_config_info.curr_mach))
vars.append(self.set_vars("Distro:", src_config_info.curr_distro))
vars.append(self.set_vars("Distro version:", src_params.distro_version))
vars.append(self.set_vars("SDK machine:", src_config_info.curr_sdk_machine))
vars.append(self.set_vars("Tune features:", src_params.tune_pkgarch))
vars.append(self.set_vars("Layers:", src_config_info.layers))
for path in src_config_info.layers:
import os, os.path
if os.path.exists(path):
branch = bb.process.run('cd %s; git branch | grep "^* " | tr -d "* "' % path)[0]
if branch.startswith("fatal:"):
branch = "(unknown)"
if branch:
branch = branch.strip('\n')
vars.append(self.set_vars("Branch:", branch))
break
self.set_config_model(vars)
def reset(self):
self.set_model(None)
#
# BuildDetailsPage
#
class BuildDetailsPage (HobPage):
def __init__(self, builder):
super(BuildDetailsPage, self).__init__(builder, "Building ...")
self.num_of_issues = 0
self.endpath = (0,)
# create visual elements
self.create_visual_elements()
def create_visual_elements(self):
# create visual elements
self.vbox = gtk.VBox(False, 12)
self.progress_box = gtk.VBox(False, 12)
self.task_status = gtk.Label("\n") # to ensure layout is correct
self.task_status.set_alignment(0.0, 0.5)
self.progress_box.pack_start(self.task_status, expand=False, fill=False)
self.progress_hbox = gtk.HBox(False, 6)
self.progress_box.pack_end(self.progress_hbox, expand=True, fill=True)
self.progress_bar = HobProgressBar()
self.progress_hbox.pack_start(self.progress_bar, expand=True, fill=True)
self.stop_button = HobAltButton("Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
self.stop_button.set_sensitive(False)
self.progress_hbox.pack_end(self.stop_button, expand=False, fill=False)
self.notebook = HobNotebook()
self.config_tv = BuildConfigurationTreeView()
self.scrolled_view_config = gtk.ScrolledWindow ()
self.scrolled_view_config.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_config.add(self.config_tv)
self.notebook.append_page(self.scrolled_view_config, "Build configuration")
self.failure_tv = BuildFailureTreeView()
self.failure_model = self.builder.handler.build.model.failure_model()
self.failure_tv.set_model(self.failure_model)
self.scrolled_view_failure = gtk.ScrolledWindow ()
self.scrolled_view_failure.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_failure.add(self.failure_tv)
self.notebook.append_page(self.scrolled_view_failure, "Issues")
self.build_tv = RunningBuildTreeView(readonly=True, hob=True)
self.build_tv.set_model(self.builder.handler.build.model)
self.scrolled_view_build = gtk.ScrolledWindow ()
self.scrolled_view_build.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
self.scrolled_view_build.add(self.build_tv)
self.notebook.append_page(self.scrolled_view_build, "Log")
self.builder.handler.build.model.connect_after("row-changed", self.scroll_to_present_row, self.scrolled_view_build.get_vadjustment(), self.build_tv)
self.button_box = gtk.HBox(False, 6)
self.back_button = HobAltButton('&lt;&lt; Back')
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_start(self.back_button, expand=False, fill=False)
def update_build_status(self, current, total, task):
recipe_path, recipe_task = task.split(", ")
recipe = os.path.basename(recipe_path).rstrip(".bb")
tsk_msg = "<b>Running task %s of %s:</b> %s\n<b>Recipe:</b> %s" % (current, total, recipe_task, recipe)
self.task_status.set_markup(tsk_msg)
self.stop_button.set_sensitive(True)
def reset_build_status(self):
self.task_status.set_markup("\n") # to ensure layout is correct
self.endpath = (0,)
def show_issues(self):
self.num_of_issues += 1
self.notebook.show_indicator_icon("Issues", self.num_of_issues)
self.notebook.queue_draw()
def reset_issues(self):
self.num_of_issues = 0
self.notebook.hide_indicator_icon("Issues")
def _remove_all_widget(self):
children = self.vbox.get_children() or []
for child in children:
self.vbox.remove(child)
children = self.box_group_area.get_children() or []
for child in children:
self.box_group_area.remove(child)
children = self.get_children() or []
for child in children:
self.remove(child)
def add_build_fail_top_bar(self, actions, log_file=None):
primary_action = "Edit %s" % actions
color = HobColors.ERROR
build_fail_top = gtk.EventBox()
#build_fail_top.set_size_request(-1, 200)
build_fail_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_fail_tab = gtk.Table(14, 46, True)
build_fail_top.add(build_fail_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ERROR_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_fail_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_fail_tab.attach(label, 4, 26, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
# Ensure variable disk_full is defined
if not hasattr(self.builder, 'disk_full'):
self.builder.disk_full = False
if self.builder.disk_full:
markup = "<span size='medium'>There is no disk space left, so Hob cannot finish building your image. Free up some disk space\n"
markup += "and restart the build. Check the \"Issues\" tab for more details</span>"
label.set_markup(markup)
else:
label.set_markup("<span size='medium'>Check the \"Issues\" information for more details</span>")
build_fail_tab.attach(label, 4, 40, 4, 9)
# create button 'Edit packages'
action_button = HobButton(primary_action)
#action_button.set_size_request(-1, 40)
action_button.set_tooltip_text("Edit the %s parameters" % actions)
action_button.connect('clicked', self.failure_primary_action_button_clicked_cb, primary_action)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
attach_pos = (24 if log_file else 14)
file_bug_button = HobAltButton('File a bug')
file_bug_button.set_relief(gtk.RELIEF_HALF)
file_bug_button.set_tooltip_text("Open the Yocto Project bug tracking website")
file_bug_button.connect('clicked', self.failure_activate_file_bug_link_cb)
if not self.builder.disk_full:
build_fail_tab.attach(action_button, 4, 13, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, 14, 23, 9, 12)
build_fail_tab.attach(file_bug_button, attach_pos, attach_pos + 9, 9, 12)
else:
restart_build = HobButton("Restart the build")
restart_build.set_tooltip_text("Restart the build")
restart_build.connect('clicked', self.restart_build_button_clicked_cb)
build_fail_tab.attach(restart_build, 4, 13, 9, 12)
build_fail_tab.attach(action_button, 14, 23, 9, 12)
if log_file:
build_fail_tab.attach(open_log_button, attach_pos, attach_pos + 9, 9, 12)
self.builder.disk_full = False
return build_fail_top
def show_fail_page(self, title):
self._remove_all_widget()
self.title = "Hob cannot build your %s" % title
self.build_fail_bar = self.add_build_fail_top_bar(title, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_fail_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.notebook.set_page("Issues")
self.back_button.hide()
def add_build_stop_top_bar(self, action, log_file=None):
color = HobColors.LIGHT_GRAY
build_stop_top = gtk.EventBox()
#build_stop_top.set_size_request(-1, 200)
build_stop_top.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
build_stop_top.set_flags(gtk.CAN_DEFAULT)
build_stop_top.grab_default()
build_stop_tab = gtk.Table(11, 46, True)
build_stop_top.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INFO_HOVER_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 1, 4, 0, 6)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup("<span size='x-large'><b>%s</b></span>" % self.title)
build_stop_tab.attach(label, 4, 26, 0, 6)
action_button = HobButton("Edit %s" % action)
action_button.set_size_request(-1, 40)
if action == "image":
action_button.set_tooltip_text("Edit the image parameters")
elif action == "recipes":
action_button.set_tooltip_text("Edit the included recipes")
elif action == "packages":
action_button.set_tooltip_text("Edit the included packages")
action_button.connect('clicked', self.stop_primary_action_button_clicked_cb, action)
build_stop_tab.attach(action_button, 4, 13, 6, 9)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.set_relief(gtk.RELIEF_HALF)
open_log_button.set_tooltip_text("Open the build's log file")
open_log_button.connect('clicked', self.open_log_button_clicked_cb, log_file)
build_stop_tab.attach(open_log_button, 14, 23, 6, 9)
attach_pos = (24 if log_file else 14)
build_button = HobAltButton("Build new image")
#build_button.set_size_request(-1, 40)
build_button.set_tooltip_text("Create a new image from scratch")
build_button.connect('clicked', self.new_image_button_clicked_cb)
build_stop_tab.attach(build_button, attach_pos, attach_pos + 9, 6, 9)
return build_stop_top, action_button
def show_stop_page(self, action):
self._remove_all_widget()
self.title = "Build stopped"
self.build_stop_bar, action_button = self.add_build_stop_top_bar(action, self.builder.current_logfile)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.build_stop_bar, expand=False, fill=False)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.show_all()
self.back_button.hide()
return action_button
def show_page(self, step):
self._remove_all_widget()
if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
self.title = "Building packages ..."
else:
self.title = "Building image ..."
self.build_details_top = self.add_onto_top_bar(None)
self.pack_start(self.build_details_top, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.progress_bar.reset()
self.config_tv.reset()
self.vbox.pack_start(self.progress_box, expand=False, fill=False)
self.vbox.pack_start(self.notebook, expand=True, fill=True)
self.box_group_area.pack_end(self.button_box, expand=False, fill=False)
self.show_all()
self.notebook.set_page("Log")
self.back_button.hide()
self.reset_build_status()
self.reset_issues()
def update_progress_bar(self, title, fraction, status=None):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def back_button_clicked_cb(self, button):
self.builder.show_configuration()
def new_image_button_clicked_cb(self, button):
self.builder.reset()
def show_back_button(self):
self.back_button.show()
def stop_button_clicked_cb(self, button):
self.builder.stop_build()
def hide_stop_button(self):
self.stop_button.set_sensitive(False)
self.stop_button.hide()
def scroll_to_present_row(self, model, path, iter, v_adj, treeview):
if treeview and v_adj:
if path[0] > self.endpath[0]: # check the event is a new row append or not
self.endpath = path
# check the gtk.adjustment position is at end boundary or not
if (v_adj.upper <= v_adj.page_size) or (v_adj.value == v_adj.upper - v_adj.page_size):
treeview.scroll_to_cell(path)
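# Illustrative sketch (not part of the original class): the "end boundary" test above is
# plain arithmetic on the gtk.Adjustment values. With hypothetical numbers upper=1000
# (total content height), page_size=200 (visible height) and value=800 (scroll offset),
# the view is showing the last page, so auto-scrolling to the newly appended row is allowed.
def _example_at_end_boundary(value, upper, page_size):
    # True when everything already fits on screen, or the view is scrolled to the bottom
    return (upper <= page_size) or (value == upper - page_size)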
def show_configurations(self, configurations, params):
self.config_tv.show(configurations, params)
def failure_primary_action_button_clicked_cb(self, button, action):
if "Edit recipes" in action:
self.builder.show_recipes()
elif "Edit packages" in action:
self.builder.show_packages()
elif "Edit image" in action:
self.builder.show_configuration()
def restart_build_button_clicked_cb(self, button):
self.builder.just_bake()
def stop_primary_action_button_clicked_cb(self, button, action):
if "recipes" in action:
self.builder.show_recipes()
elif "packages" in action:
self.builder.show_packages()
elif "image" in action:
self.builder.show_configuration()
def open_log_button_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def failure_activate_file_bug_link_cb(self, button):
button.child.emit('activate-link', "http://bugzilla.yoctoproject.org")

File diff suppressed because it is too large


@@ -0,0 +1,455 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import threading
import os
import datetime
import time
class BuildConfiguration:
""" Represents a potential *or* historic *or* concrete build. It
encompasses all the things that we need to tell bitbake to do to make it
build what we want it to build.
It also stores the metadata URL and the set of possible machines (and the
distros / images / uris for these). Apart from the metadata URL these are
not serialised to file (since they may be transient). In some ways this
functionality might be shifted to the loader class."""
def __init__ (self):
self.metadata_url = None
# Tuple of (distros, image, urls)
self.machine_options = {}
self.machine = None
self.distro = None
self.image = None
self.urls = []
self.extra_urls = []
self.extra_pkgs = []
def get_machines_model (self):
model = gtk.ListStore (gobject.TYPE_STRING)
for machine in self.machine_options.keys():
model.append ([machine])
return model
def get_distro_and_images_models (self, machine):
distro_model = gtk.ListStore (gobject.TYPE_STRING)
for distro in self.machine_options[machine][0]:
distro_model.append ([distro])
image_model = gtk.ListStore (gobject.TYPE_STRING)
for image in self.machine_options[machine][1]:
image_model.append ([image])
return (distro_model, image_model)
def get_repos (self):
self.urls = self.machine_options[self.machine][2]
return self.urls
# It might be a lot better if we stored these in something like the bitbake
# conf file format.
@staticmethod
def load_from_file (filename):
conf = BuildConfiguration()
with open(filename, "r") as f:
for line in f:
data = line.split (";")[1]
if (line.startswith ("metadata-url;")):
conf.metadata_url = data.strip()
continue
if (line.startswith ("url;")):
conf.urls += [data.strip()]
continue
if (line.startswith ("extra-url;")):
conf.extra_urls += [data.strip()]
continue
if (line.startswith ("machine;")):
conf.machine = data.strip()
continue
if (line.startswith ("distribution;")):
conf.distro = data.strip()
continue
if (line.startswith ("image;")):
conf.image = data.strip()
continue
return conf
# Serialise to a file. This is part of the build process and we use this
# to be able to repeat a given build (using the same set of parameters)
# but also so that we can include the details of the image / machine /
# distro in the build manager tree view.
def write_to_file (self, filename):
f = open (filename, "w")
lines = []
if (self.metadata_url):
lines += ["metadata-url;%s\n" % (self.metadata_url)]
for url in self.urls:
lines += ["url;%s\n" % (url)]
for url in self.extra_urls:
lines += ["extra-url;%s\n" % (url)]
if (self.machine):
lines += ["machine;%s\n" % (self.machine)]
if (self.distro):
lines += ["distribution;%s\n" % (self.distro)]
if (self.image):
lines += ["image;%s\n" % (self.image)]
f.writelines (lines)
f.close ()
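# Illustrative sketch (not part of the original file): write_to_file()/load_from_file()
# above use a simple one-record-per-line "key;value" text format, for example (values
# hypothetical, keys taken from the parser above):
#
#   metadata-url;http://example.com/metadata
#   url;http://example.com/feed
#   machine;qemux86
#   distribution;poky
#   image;core-image-minimal
def _example_configuration_roundtrip(path="/tmp/example-build.conf"):
    conf = BuildConfiguration()
    conf.metadata_url = "http://example.com/metadata"   # hypothetical value
    conf.urls = ["http://example.com/feed"]             # hypothetical value
    conf.machine = "qemux86"                            # hypothetical value
    conf.distro = "poky"                                # hypothetical value
    conf.image = "core-image-minimal"                   # hypothetical value
    conf.write_to_file(path)
    # Reading the file back should reproduce the same settings
    return BuildConfiguration.load_from_file(path)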
class BuildResult(gobject.GObject):
""" Represents an historic build. Perhaps not successful. But it includes
things such as the files that are in the directory (the output from the
build) as well as a deserialised BuildConfiguration file that is stored in
".conf" in the directory for the build.
This is GObject so that it can be included in the TreeStore."""
(STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
(0, 1, 2)
def __init__ (self, parent, identifier):
gobject.GObject.__init__ (self)
self.date = None
self.files = []
self.status = None
self.identifier = identifier
self.path = os.path.join (parent, identifier)
# Extract the date: since the directory name is of the
# format build-<year><month><day>-<ordinal> we can easily
# pull it out.
# TODO: Better to stat a file?
(_, date, revision) = identifier.split ("-")
print(date)
year = int (date[0:4])
month = int (date[4:6])
day = int (date[6:8])
self.date = datetime.date (year, month, day)
self.conf = None
# By default builds are STATE_FAILED unless we find a "complete" file
# in which case they are STATE_COMPLETE
self.state = BuildResult.STATE_FAILED
for file in os.listdir (self.path):
if (file.startswith (".conf")):
conffile = os.path.join (self.path, file)
self.conf = BuildConfiguration.load_from_file (conffile)
elif (file.startswith ("complete")):
self.state = BuildResult.STATE_COMPLETE
else:
self.add_file (file)
def add_file (self, file):
# Just add the file for now. Don't care about the type.
self.files += [(file, None)]
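# Illustrative sketch (not part of the original class): a BuildResult directory is named
# build-<yyyymmdd>-<ordinal>, its state becomes STATE_COMPLETE only if a file named
# "complete" exists inside it, and any ".conf" file is deserialised back into a
# BuildConfiguration. Splitting a hypothetical identifier by hand:
def _example_parse_identifier(identifier="build-20121130-3"):
    prefix, date, ordinal = identifier.split("-")
    year, month, day = int(date[0:4]), int(date[4:6]), int(date[6:8])
    return datetime.date(year, month, day), int(ordinal)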
class BuildManagerModel (gtk.TreeStore):
""" Model for the BuildManagerTreeView. This derives from gtk.TreeStore
but it abstracts nicely what the columns mean and the setup of the columns
in the model. """
(COL_IDENT, COL_DESC, COL_MACHINE, COL_DISTRO, COL_BUILD_RESULT, COL_DATE, COL_STATE) = \
(0, 1, 2, 3, 4, 5, 6)
def __init__ (self):
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_OBJECT,
gobject.TYPE_INT64,
gobject.TYPE_INT)
class BuildManager (gobject.GObject):
""" This class manages the historic builds that have been found in the
"results" directory but is also used for starting a new build."""
__gsignals__ = {
'population-finished' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'populate-error' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def update_build_result (self, result, iter):
# Convert the date into something we can sort by.
date = long (time.mktime (result.date.timetuple()))
# Add a top level entry for the build
self.model.set (iter,
BuildManagerModel.COL_IDENT, result.identifier,
BuildManagerModel.COL_DESC, result.conf.image,
BuildManagerModel.COL_MACHINE, result.conf.machine,
BuildManagerModel.COL_DISTRO, result.conf.distro,
BuildManagerModel.COL_BUILD_RESULT, result,
BuildManagerModel.COL_DATE, date,
BuildManagerModel.COL_STATE, result.state)
# And then we use the files in the directory as the children for the
# top level iter.
for file in result.files:
self.model.append (iter, (None, file[0], None, None, None, date, -1))
# This function is called as an idle by the BuildManagerPopulaterThread
def add_build_result (self, result):
gtk.gdk.threads_enter()
self.known_builds += [result]
self.update_build_result (result, self.model.append (None))
gtk.gdk.threads_leave()
def notify_build_finished (self):
# This is a bit of a hack. If we have a build running then we will have a
# row in the model in STATE_ONGOING. Find it and update it as if it were a
# proper historic build (well, it is completed now...)
# We need to use the iters here rather than the Python iterator
# interface to the model since we need to pass it into
# update_build_result
iter = self.model.get_iter_first()
while (iter):
(ident, state) = self.model.get(iter,
BuildManagerModel.COL_IDENT,
BuildManagerModel.COL_STATE)
if state == BuildResult.STATE_ONGOING:
result = BuildResult (self.results_directory, ident)
self.update_build_result (result, iter)
iter = self.model.iter_next(iter)
def notify_build_succeeded (self):
# Write the "complete" file so that when we create the BuildResult
# object we put into the model
complete_file_path = os.path.join (self.cur_build_directory, "complete")
f = file (complete_file_path, "w")
f.close()
self.notify_build_finished()
def notify_build_failed (self):
# Without a "complete" file then this will mark the build as failed:
self.notify_build_finished()
# This function is called as an idle
def emit_population_finished_signal (self):
gtk.gdk.threads_enter()
self.emit ("population-finished")
gtk.gdk.threads_leave()
class BuildManagerPopulaterThread (threading.Thread):
def __init__ (self, manager, directory):
threading.Thread.__init__ (self)
self.manager = manager
self.directory = directory
def run (self):
# For each of the "build-<...>" directories ..
if os.path.exists (self.directory):
for directory in os.listdir (self.directory):
if not directory.startswith ("build-"):
continue
build_result = BuildResult (self.directory, directory)
self.manager.add_build_result (build_result)
gobject.idle_add (BuildManager.emit_population_finished_signal,
self.manager)
def __init__ (self, server, results_directory):
gobject.GObject.__init__ (self)
# The builds that we've found from walking the result directory
self.known_builds = []
# Save out the bitbake server, we need this for issuing commands to
# the cooker:
self.server = server
# The TreeStore that we use
self.model = BuildManagerModel ()
# The results directory is where we create (and look for) the
# build-<xyz>-<n> directories. We need to populate ourselves from that
# directory
self.results_directory = results_directory
self.populate_from_directory (self.results_directory)
def populate_from_directory (self, directory):
thread = BuildManager.BuildManagerPopulaterThread (self, directory)
thread.start()
# Come up with the name for the next build ident by combining "build-"
# with the date formatted as yyyymmdd and then an ordinal. We do this by
# an optimistic algorithm, incrementing the ordinal until we find a name
# that does not already exist.
def get_next_build_ident (self):
today = datetime.date.today ()
datestr = today.strftime ("%Y%m%d") # zero-padded to match the yyyymmdd parsing in BuildResult
revision = 0
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
while (os.path.exists (test_path)):
revision += 1
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
return test_name
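# Illustrative sketch (not part of the original class): the naming scheme above is
# optimistic - start at ordinal 0 for today's zero-padded date and keep incrementing
# until an unused directory name is found. With a hypothetical results directory that
# already contains build-20121130-0 and build-20121130-1, the next identifier returned
# on 2012-11-30 would be "build-20121130-2".
def _example_next_ident(existing, datestr="20121130"):
    # existing: set of directory names already present (hypothetical stand-in for os.listdir)
    revision = 0
    while ("build-%s-%d" % (datestr, revision)) in existing:
        revision += 1
    return "build-%s-%d" % (datestr, revision)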
# Take a BuildConfiguration and then try and build it based on the
# parameters of that configuration.
def do_build (self, conf):
server = self.server
# Work out the build directory. Note we actually create the
# directories here since we need to write the ".conf" file. Otherwise
# we could have relied on bitbake's builder thread to actually make
# the directories as it proceeds with the build.
ident = self.get_next_build_ident ()
build_directory = os.path.join (self.results_directory,
ident)
self.cur_build_directory = build_directory
os.makedirs (build_directory)
conffile = os.path.join (build_directory, ".conf")
conf.write_to_file (conffile)
# Add a row to the model representing this ongoing build. It's a
# placeholder entry. If this build completes or fails then it gets updated
# with the real data, like the historic builds
date = long (time.time())
self.model.append (None, (ident, conf.image, conf.machine, conf.distro,
None, date, BuildResult.STATE_ONGOING))
try:
server.runCommand(["setVariable", "BUILD_IMAGES_FROM_FEEDS", 1])
server.runCommand(["setVariable", "MACHINE", conf.machine])
server.runCommand(["setVariable", "DISTRO", conf.distro])
server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_ipk"])
server.runCommand(["setVariable", "BBFILES", \
"""${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-moblin/packages/*/*.bb"""])
server.runCommand(["setVariable", "TMPDIR", "${OEROOT}/build/tmp"])
server.runCommand(["setVariable", "IPK_FEED_URIS", \
" ".join(conf.get_repos())])
server.runCommand(["setVariable", "DEPLOY_DIR_IMAGE",
build_directory])
server.runCommand(["buildTargets", [conf.image], "rootfs"])
except Exception as e:
print(e)
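# Illustrative sketch (not part of the original class): do_build() above drives the build
# purely by setting bitbake variables over server.runCommand() and then issuing
# "buildTargets". The values below are hypothetical; the server handle is whatever
# bitbake server object BuildManager is normally constructed with.
def _example_start_build(server, results_directory="/tmp/hob-results"):
    manager = BuildManager(server, results_directory)
    conf = BuildConfiguration()
    conf.machine = "qemux86"                    # hypothetical machine
    conf.distro = "poky"                        # hypothetical distro
    conf.image = "core-image-minimal"           # hypothetical image
    # machine_options holds (distros, images, urls); no package feeds in this sketch
    conf.machine_options[conf.machine] = ([conf.distro], [conf.image], [])
    manager.do_build(conf)
    return manager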
class BuildManagerTreeView (gtk.TreeView):
""" The tree view for the build manager. This shows the historic builds
and so forth. """
# We use this function to control what goes in the cell since we store
# the date in the model as seconds since the epoch (for sorting) and so we
# need to make it human readable.
def date_format_custom_cell_data_func (self, col, cell, model, iter):
date = model.get (iter, BuildManagerModel.COL_DATE)[0]
datestr = time.strftime("%A %d %B %Y", time.localtime(date))
cell.set_property ("text", datestr)
# This format function controls what goes in the cell. We use this to map
# the integer state to a string and also to colourise the text
def state_format_custom_cell_data_fun (self, col, cell, model, iter):
state = model.get (iter, BuildManagerModel.COL_STATE)[0]
if (state == BuildResult.STATE_ONGOING):
cell.set_property ("text", "Active")
cell.set_property ("foreground", "#000000")
elif (state == BuildResult.STATE_FAILED):
cell.set_property ("text", "Failed")
cell.set_property ("foreground", "#ff0000")
elif (state == BuildResult.STATE_COMPLETE):
cell.set_property ("text", "Complete")
cell.set_property ("foreground", "#00ff00")
else:
cell.set_property ("text", "")
def __init__ (self):
gtk.TreeView.__init__(self)
# Miscellaneous description column
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn (None, renderer,
text=BuildManagerModel.COL_DESC)
self.append_column (col)
# Machine
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Machine", renderer,
text=BuildManagerModel.COL_MACHINE)
self.append_column (col)
# distro
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Distribution", renderer,
text=BuildManagerModel.COL_DISTRO)
self.append_column (col)
# date (using a custom function for formatting the cell contents it
# takes epoch -> human readable string)
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Date", renderer,
text=BuildManagerModel.COL_DATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.date_format_custom_cell_data_func)
# For status.
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Status", renderer,
text = BuildManagerModel.COL_STATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.state_format_custom_cell_data_fun)


@@ -0,0 +1,341 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import hashlib
from bb.ui.crumbs.hobwidget import HobInfoButton, HobButton
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.proxydetailsdialog import ProxyDetailsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
def details_cb(self, button, parent, protocol):
dialog = ProxyDetailsDialog(title = protocol.upper() + " Proxy Details",
user = self.configuration.proxies[protocol][1],
passwd = self.configuration.proxies[protocol][2],
parent = parent,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
if response == gtk.RESPONSE_OK:
self.configuration.proxies[protocol][1] = dialog.user
self.configuration.proxies[protocol][2] = dialog.passwd
self.refresh_proxy_components()
dialog.destroy()
def set_save_button(self, button):
self.save_button = button
def rootfs_combo_changed_cb(self, rootfs_combo, all_package_format, check_hbox):
combo_item = self.rootfs_combo.get_active_text()
modified = False
for child in check_hbox.get_children():
if isinstance(child, gtk.CheckButton):
check_hbox.remove(child)
modified = True
for format in all_package_format:
if format != combo_item:
check_button = gtk.CheckButton(format)
check_hbox.pack_start(check_button, expand=False, fill=False)
modified = True
if modified:
check_hbox.remove(self.pkgfmt_info)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
check_hbox.show_all()
def gen_pkgfmt_widget(self, curr_package_format, all_package_format, tooltip_combo="", tooltip_extra=""):
pkgfmt_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Root file system package format")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
rootfs_format = ""
if curr_package_format:
rootfs_format = curr_package_format.split()[0]
rootfs_format_widget, rootfs_combo = self.gen_combo_widget(rootfs_format, all_package_format, tooltip_combo)
pkgfmt_vbox.pack_start(rootfs_format_widget, expand=False, fill=False)
label = self.gen_label_widget("Additional package formats")
pkgfmt_vbox.pack_start(label, expand=False, fill=False)
check_hbox = gtk.HBox(False, 12)
pkgfmt_vbox.pack_start(check_hbox, expand=False, fill=False)
for format in all_package_format:
if format != rootfs_format:
check_button = gtk.CheckButton(format)
is_active = (format in curr_package_format.split())
check_button.set_active(is_active)
check_hbox.pack_start(check_button, expand=False, fill=False)
self.pkgfmt_info = HobInfoButton(tooltip_extra, self)
check_hbox.pack_start(self.pkgfmt_info, expand=False, fill=False)
rootfs_combo.connect("changed", self.rootfs_combo_changed_cb, all_package_format, check_hbox)
pkgfmt_vbox.show_all()
return pkgfmt_vbox, rootfs_combo, check_hbox
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, parent, flags, buttons=None):
super(AdvancedSettingsDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros[:]
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
# class members for internal use
self.distro_combo = None
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirror_text = None
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.sdk_checkbox = None
self.image_types_checkbuttons = {}
self.md5 = self.config_md5()
self.settings_changed = False
# create visual elements on the dialog
self.save_button = None
self.create_visual_elements()
self.connect("response", self.response_cb)
def _get_sorted_value(self, var):
return " ".join(sorted(str(var).split())) + "\n"
def config_md5(self):
data = ""
data += ("PACKAGE_CLASSES: " + self.configuration.curr_package_format + '\n')
data += ("DISTRO: " + self._get_sorted_value(self.configuration.curr_distro))
data += ("IMAGE_ROOTFS_SIZE: " + self._get_sorted_value(self.configuration.image_rootfs_size))
data += ("IMAGE_EXTRA_SIZE: " + self._get_sorted_value(self.configuration.image_extra_size))
data += ("INCOMPATIBLE_LICENSE: " + self._get_sorted_value(self.configuration.incompat_license))
data += ("SDK_MACHINE: " + self._get_sorted_value(self.configuration.curr_sdk_machine))
data += ("TOOLCHAIN_BUILD: " + self._get_sorted_value(self.configuration.toolchain_build))
data += ("IMAGE_FSTYPES: " + self._get_sorted_value(self.configuration.image_fstypes))
return hashlib.md5(data).hexdigest()
def create_visual_elements(self):
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_image_types_page(), gtk.Label("Image types"))
self.nb.append_page(self.create_output_page(), gtk.Label("Output"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def get_num_checked_image_types(self):
total = 0
for b in self.image_types_checkbuttons.values():
if b.get_active():
total = total + 1
return total
def set_save_button_state(self):
if self.save_button:
self.save_button.set_sensitive(self.get_num_checked_image_types() > 0)
def image_type_checkbutton_clicked_cb(self, button):
self.set_save_button_state()
if self.get_num_checked_image_types() == 0:
# Show an error dialog
lbl = "<b>Select an image type</b>"
msg = "You need to select at least one image type."
dialog = CrumbsMessageDialog(self, lbl, gtk.MESSAGE_WARNING, msg)
button = dialog.add_button("OK", gtk.RESPONSE_OK)
HobButton.style_button(button)
response = dialog.run()
dialog.destroy()
def create_image_types_page(self):
main_vbox = gtk.VBox(False, 16)
main_vbox.set_border_width(6)
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
distro_vbox = gtk.VBox(False, 6)
label = self.gen_label_widget("Distro:")
tooltip = "Selects the Yocto Project distribution you want"
try:
i = self.all_distros.index( "defaultsetup" )
except ValueError:
i = -1
if i != -1:
self.all_distros[ i ] = "Default"
if self.configuration.curr_distro == "defaultsetup":
self.configuration.curr_distro = "Default"
distro_widget, self.distro_combo = self.gen_combo_widget(self.configuration.curr_distro, self.all_distros,"<b>Distro</b>" + "*" + tooltip)
distro_vbox.pack_start(label, expand=False, fill=False)
distro_vbox.pack_start(distro_widget, expand=False, fill=False)
main_vbox.pack_start(distro_vbox, expand=False, fill=False)
rows = (len(self.image_types)+1)/3
table = gtk.Table(rows + 1, 10, True)
advanced_vbox.pack_start(table, expand=False, fill=False)
tooltip = "Image file system types you want."
info = HobInfoButton("<b>Image types</b>" + "*" + tooltip, self)
label = self.gen_label_widget("Image types:")
align = gtk.Alignment(0, 0.5, 0, 0)
table.attach(align, 0, 4, 0, 1)
align.add(label)
table.attach(info, 4, 5, 0, 1)
i = 1
j = 1
for image_type in sorted(self.image_types):
self.image_types_checkbuttons[image_type] = gtk.CheckButton(image_type)
self.image_types_checkbuttons[image_type].connect("toggled", self.image_type_checkbutton_clicked_cb)
article = ""
if image_type.startswith(("a", "e", "i", "o", "u")):
article = "n"
if image_type == "live":
self.image_types_checkbuttons[image_type].set_tooltip_text("Build iso and hddimg images")
else:
self.image_types_checkbuttons[image_type].set_tooltip_text("Build a%s %s image" % (article, image_type))
table.attach(self.image_types_checkbuttons[image_type], j - 1, j + 3, i, i + 1)
if image_type in self.configuration.image_fstypes.split():
self.image_types_checkbuttons[image_type].set_active(True)
i += 1
if i > rows:
i = 1
j = j + 4
main_vbox.pack_start(advanced_vbox, expand=False, fill=False)
self.set_save_button_state()
return main_vbox
def create_output_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Package format</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
tooltip_combo = "Selects the package format used to generate rootfs."
tooltip_extra = "Selects extra package formats to build"
pkgfmt_widget, self.rootfs_combo, self.check_hbox = self.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats,"<b>Root file system package format</b>" + "*" + tooltip_combo,"<b>Additional package formats</b>" + "*" + tooltip_extra)
sub_vbox.pack_start(pkgfmt_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Image size</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Image basic size (in MB)")
tooltip = "Defines the size for the generated image. The OpenEmbedded build system determines the final size for the generated image using an algorithm that takes into account the initial disk space used for the generated image, the Image basic size value, and the Additional free space value.\n\nFor more information, check the <a href=\"http://www.yoctoproject.org/docs/current/poky-ref-manual/poky-ref-manual.html#var-IMAGE_ROOTFS_SIZE\">Yocto Project Reference Manual</a>."
rootfs_size_widget, self.rootfs_size_spinner = self.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 65536,"<b>Image basic size</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(rootfs_size_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Additional free space (in MB)")
tooltip = "Sets extra free disk space to be added to the generated image. Use this variable when you want to ensure that a specific amount of free disk space is available on a device after an image is installed and running."
extra_size_widget, self.extra_size_spinner = self.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 65536,"<b>Additional free space</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(extra_size_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Licensing</span>'), expand=False, fill=False)
self.gplv3_checkbox = gtk.CheckButton("Exclude GPLv3 packages")
self.gplv3_checkbox.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
if "GPLv3" in self.configuration.incompat_license.split():
self.gplv3_checkbox.set_active(True)
else:
self.gplv3_checkbox.set_active(False)
advanced_vbox.pack_start(self.gplv3_checkbox, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">SDK</span>'), expand=False, fill=False)
sub_hbox = gtk.HBox(False, 6)
advanced_vbox.pack_start(sub_hbox, expand=False, fill=False)
self.sdk_checkbox = gtk.CheckButton("Populate SDK")
tooltip = "Check this box to generate an SDK tarball that consists of the cross-toolchain and a sysroot that contains development packages for your image."
self.sdk_checkbox.set_tooltip_text(tooltip)
self.sdk_checkbox.set_active(self.configuration.toolchain_build)
sub_hbox.pack_start(self.sdk_checkbox, expand=False, fill=False)
tooltip = "Select the host platform for which you want to run the toolchain contained in the SDK tarball."
sdk_machine_widget, self.sdk_machine_combo = self.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines,"<b>Populate SDK</b>" + "*" + tooltip)
sub_hbox.pack_start(sdk_machine_widget, expand=False, fill=False)
return advanced_vbox
def response_cb(self, dialog, response_id):
package_format = []
package_format.append(self.rootfs_combo.get_active_text())
for child in self.check_hbox:
if isinstance(child, gtk.CheckButton) and child.get_active():
package_format.append(child.get_label())
self.configuration.curr_package_format = " ".join(package_format)
distro = self.distro_combo.get_active_text()
if distro == "Default":
distro = "defaultsetup"
self.configuration.curr_distro = distro
self.configuration.image_rootfs_size = self.rootfs_size_spinner.get_value_as_int() * 1024
self.configuration.image_extra_size = self.extra_size_spinner.get_value_as_int() * 1024
self.configuration.image_fstypes = ""
for image_type in self.image_types:
if self.image_types_checkbuttons[image_type].get_active():
self.configuration.image_fstypes += (" " + image_type)
self.configuration.image_fstypes = self.configuration.image_fstypes.strip()
if self.gplv3_checkbox.get_active():
if "GPLv3" not in self.configuration.incompat_license.split():
self.configuration.incompat_license += " GPLv3"
else:
if "GPLv3" in self.configuration.incompat_license.split():
licenses = self.configuration.incompat_license.split()
licenses.remove("GPLv3")   # list.remove() returns None, so do not assign its result
self.configuration.incompat_license = " ".join(licenses)
self.configuration.incompat_license = self.configuration.incompat_license.strip()
self.configuration.toolchain_build = self.sdk_checkbox.get_active()
self.configuration.curr_sdk_machine = self.sdk_machine_combo.get_active_text()
md5 = self.config_md5()
self.settings_changed = (self.md5 != md5)
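# Illustrative sketch (not part of the original dialog): settings_changed above is
# computed by hashing a canonical text dump of the settings when the dialog is created
# (self.md5) and comparing it with a fresh hash when the dialog responds. Because each
# value is split and sorted first, reordering e.g. IMAGE_FSTYPES does not count as a
# change. The dictionaries below are hypothetical stand-ins for the real configuration;
# hashlib is already imported at the top of this file.
def _example_settings_changed(before, after):
    def digest(settings):
        data = ""
        for key in sorted(settings):
            data += key + ": " + " ".join(sorted(str(settings[key]).split())) + "\n"
        return hashlib.md5(data).hexdigest()
    return digest(before) != digest(after)
# _example_settings_changed({"IMAGE_FSTYPES": "ext3 tar.bz2"},
#                           {"IMAGE_FSTYPES": "tar.bz2 ext3"}) -> False (no real change)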


@@ -0,0 +1,163 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Cristiana Voicu <cristiana.voicu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
from bb.ui.crumbs.hobwidget import HobAltButton
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
#
# ParsingWarningsDialog
#
class ParsingWarningsDialog (CrumbsDialog):
def __init__(self, title, warnings, parent, flags, buttons=None):
super(ParsingWarningsDialog, self).__init__(title, parent, flags, buttons)
self.warnings = warnings
self.warning_on = 0
self.warn_nb = len(warnings)
# create visual elements on the dialog
self.create_visual_elements()
def cancel_button_cb(self, button):
self.destroy()
def previous_button_cb(self, button):
self.warning_on = self.warning_on - 1
self.refresh_components()
def next_button_cb(self, button):
self.warning_on = self.warning_on + 1
self.refresh_components()
def refresh_components(self):
lbl = self.warnings[self.warning_on]
# when the warning text has more than 400 chars, it uses a scroll bar
if 0<= len(lbl) < 400:
self.warning_label.set_size_request(320, 230)
self.warning_label.set_use_markup(True)
self.warning_label.set_line_wrap(True)
self.warning_label.set_markup(lbl)
self.warning_label.set_property("yalign", 0.00)
else:
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.msgView = gtk.TextView()
self.msgView.set_editable(False)
self.msgView.set_wrap_mode(gtk.WRAP_WORD)
self.msgView.set_cursor_visible(False)
self.msgView.set_size_request(320, 230)
self.buf = gtk.TextBuffer()
self.buf.set_text(lbl)
self.msgView.set_buffer(self.buf)
self.textWindow.add(self.msgView)
self.msgView.show()
if self.warning_on==0:
self.previous_button.set_sensitive(False)
else:
self.previous_button.set_sensitive(True)
if self.warning_on==self.warn_nb-1:
self.next_button.set_sensitive(False)
else:
self.next_button.set_sensitive(True)
if self.warn_nb>1:
self.heading = "Warning " + str(self.warning_on + 1) + " of " + str(self.warn_nb)
self.heading_label.set_markup('<span weight="bold">%s</span>' % self.heading)
else:
self.heading = "Warning"
self.heading_label.set_markup('<span weight="bold">%s</span>' % self.heading)
self.show_all()
if 0<= len(lbl) < 400:
self.textWindow.hide()
else:
self.warning_label.hide()
def create_visual_elements(self):
self.set_size_request(350, 350)
self.heading_label = gtk.Label()
self.heading_label.set_alignment(0, 0)
self.warning_label = gtk.Label()
self.warning_label.set_selectable(True)
self.warning_label.set_alignment(0, 0)
self.textWindow = gtk.ScrolledWindow()
table = gtk.Table(1, 10, False)
cancel_button = gtk.Button()
cancel_button.set_label("Close")
cancel_button.connect("clicked", self.cancel_button_cb)
cancel_button.set_size_request(110, 30)
self.previous_button = gtk.Button()
image1 = gtk.image_new_from_stock(gtk.STOCK_GO_BACK, gtk.ICON_SIZE_BUTTON)
image1.show()
box = gtk.HBox(False, 6)
box.show()
self.previous_button.add(box)
lbl = gtk.Label("Previous")
lbl.show()
box.pack_start(image1, expand=False, fill=False, padding=3)
box.pack_start(lbl, expand=True, fill=True, padding=3)
self.previous_button.connect("clicked", self.previous_button_cb)
self.previous_button.set_size_request(110, 30)
self.next_button = gtk.Button()
image2 = gtk.image_new_from_stock(gtk.STOCK_GO_FORWARD, gtk.ICON_SIZE_BUTTON)
image2.show()
box = gtk.HBox(False, 6)
box.show()
self.next_button.add(box)
lbl = gtk.Label("Next")
lbl.show()
box.pack_start(lbl, expand=True, fill=True, padding=3)
box.pack_start(image2, expand=False, fill=False, padding=3)
self.next_button.connect("clicked", self.next_button_cb)
self.next_button.set_size_request(110, 30)
# when there is more than one warning, we need "previous" and "next" buttons
if self.warn_nb>1:
self.vbox.pack_start(self.heading_label, expand=False, fill=False)
self.vbox.pack_start(self.warning_label, expand=False, fill=False)
self.vbox.pack_start(self.textWindow, expand=False, fill=False)
table.attach(cancel_button, 6, 7, 0, 1, xoptions=gtk.SHRINK)
table.attach(self.previous_button, 7, 8, 0, 1, xoptions=gtk.SHRINK)
table.attach(self.next_button, 8, 9, 0, 1, xoptions=gtk.SHRINK)
self.vbox.pack_end(table, expand=False, fill=False)
else:
self.vbox.pack_start(self.heading_label, expand=False, fill=False)
self.vbox.pack_start(self.warning_label, expand=False, fill=False)
self.vbox.pack_start(self.textWindow, expand=False, fill=False)
cancel_button = self.add_button("Close", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(cancel_button)
self.refresh_components()
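# Illustrative sketch (not part of the original dialog): refresh_components() above keeps
# the pager state in two numbers, warning_on (0-based index) and warn_nb (count); the
# heading text and the Previous/Next sensitivity derive directly from them.
def _example_pager_state(warning_on, warn_nb):
    heading = "Warning %d of %d" % (warning_on + 1, warn_nb) if warn_nb > 1 else "Warning"
    return heading, warning_on > 0, warning_on < warn_nb - 1   # (heading, prev_ok, next_ok)
# _example_pager_state(0, 3) -> ("Warning 1 of 3", False, True)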


@@ -0,0 +1,90 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class ProxyDetailsDialog (CrumbsDialog):
def __init__(self, title, user, passwd, parent, flags, buttons=None):
super(ProxyDetailsDialog, self).__init__(title, parent, flags, buttons)
self.connect("response", self.response_cb)
self.auth = not (user == None or passwd == None or user == "")
self.user = user or ""
self.passwd = passwd or ""
# create visual elements on the dialog
self.create_visual_elements()
def create_visual_elements(self):
self.auth_checkbox = gtk.CheckButton("Use authentication")
self.auth_checkbox.set_tooltip_text("Check this box to set the username and the password")
self.auth_checkbox.set_active(self.auth)
self.auth_checkbox.connect("toggled", self.auth_checkbox_toggled_cb)
self.vbox.pack_start(self.auth_checkbox, expand=False, fill=False)
hbox = gtk.HBox(False, 6)
self.user_label = gtk.Label("Username:")
self.user_text = gtk.Entry()
self.user_text.set_text(self.user)
hbox.pack_start(self.user_label, expand=False, fill=False)
hbox.pack_end(self.user_text, expand=False, fill=False)
self.vbox.pack_start(hbox, expand=False, fill=False)
hbox = gtk.HBox(False, 6)
self.passwd_label = gtk.Label("Password:")
self.passwd_text = gtk.Entry()
self.passwd_text.set_text(self.passwd)
hbox.pack_start(self.passwd_label, expand=False, fill=False)
hbox.pack_end(self.passwd_text, expand=False, fill=False)
self.vbox.pack_start(hbox, expand=False, fill=False)
self.refresh_auth_components()
self.show_all()
def refresh_auth_components(self):
self.user_label.set_sensitive(self.auth)
self.user_text.set_editable(self.auth)
self.user_text.set_sensitive(self.auth)
self.passwd_label.set_sensitive(self.auth)
self.passwd_text.set_editable(self.auth)
self.passwd_text.set_sensitive(self.auth)
def auth_checkbox_toggled_cb(self, button):
self.auth = self.auth_checkbox.get_active()
self.refresh_auth_components()
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_OK:
if self.auth:
self.user = self.user_text.get_text()
self.passwd = self.passwd_text.get_text()
else:
self.user = None
self.passwd = None


@@ -0,0 +1,51 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2013 Intel Corporation
#
# Authored by Cristiana Voicu <cristiana.voicu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
class RetrieveImageDialog (gtk.FileChooserDialog):
"""
This class is used to create a dialog that allows retrieving
a custom image previously saved from Hob.
"""
def __init__(self, directory,title, parent, flags, buttons=None):
super(RetrieveImageDialog, self).__init__(title, None, gtk.FILE_CHOOSER_ACTION_OPEN,
(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,gtk.STOCK_OPEN, gtk.RESPONSE_OK))
self.directory = directory
# create visual elements on the dialog
self.create_visual_elements()
def create_visual_elements(self):
self.set_show_hidden(True)
self.set_default_response(gtk.RESPONSE_OK)
self.set_current_folder(self.directory)
vbox = self.get_children()[0].get_children()[0].get_children()[0]
for child in vbox.get_children()[0].get_children()[0].get_children()[0].get_children():
vbox.get_children()[0].get_children()[0].get_children()[0].remove(child)
label1 = gtk.Label()
label1.set_text("File system" + self.directory)
label1.show()
vbox.get_children()[0].get_children()[0].get_children()[0].pack_start(label1, expand=False, fill=False, padding=0)
vbox.get_children()[0].get_children()[1].get_children()[0].hide()
self.get_children()[0].get_children()[1].get_children()[0].set_label("Select")


@@ -0,0 +1,159 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2013 Intel Corporation
#
# Authored by Cristiana Voicu <cristiana.voicu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import glib
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hobwidget import HobButton
class SaveImageDialog (CrumbsDialog):
"""
This class is used to create a dialog that allows saving
a custom image to a predefined directory.
"""
def __init__(self, directory, name, description, title, parent, flags, buttons=None):
super(SaveImageDialog, self).__init__(title, parent, flags, buttons)
self.directory = directory
self.builder = parent
self.name_field = name
self.description_field = description
# create visual elements on the dialog
self.create_visual_elements()
def create_visual_elements(self):
self.set_default_response(gtk.RESPONSE_OK)
self.vbox.set_border_width(6)
sub_vbox = gtk.VBox(False, 12)
self.vbox.pack_start(sub_vbox, expand=False, fill=False)
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup("<b>Name</b>")
sub_label = gtk.Label()
sub_label.set_alignment(0, 0)
content = "Image recipe names should be all lowercase and include only alphanumeric\n"
content += "characters. The only special character you can use is the ASCII hyphen (-)."
sub_label.set_markup(content)
self.name_entry = gtk.Entry()
self.name_entry.set_text(self.name_field)
self.name_entry.set_size_request(350,30)
self.name_entry.connect("changed", self.name_entry_changed)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sub_label, expand=False, fill=False)
sub_vbox.pack_start(self.name_entry, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 12)
self.vbox.pack_start(sub_vbox, expand=False, fill=False)
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup("<b>Description</b> (optional)")
sub_label = gtk.Label()
sub_label.set_alignment(0, 0)
sub_label.set_markup("The description should be less than 150 characters long.")
self.description_entry = gtk.TextView()
description_buffer = self.description_entry.get_buffer()
description_buffer.set_text(self.description_field)
description_buffer.connect("insert-text", self.limit_description_length)
self.description_entry.set_wrap_mode(gtk.WRAP_WORD)
self.description_entry.set_size_request(350,50)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sub_label, expand=False, fill=False)
sub_vbox.pack_start(self.description_entry, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 12)
self.vbox.pack_start(sub_vbox, expand=False, fill=False)
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup("Your image recipe will be saved to:")
sub_label = gtk.Label()
sub_label.set_alignment(0, 0)
sub_label.set_markup(self.directory)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sub_label, expand=False, fill=False)
table = gtk.Table(1, 4, True)
cancel_button = gtk.Button()
cancel_button.set_label("Cancel")
cancel_button.connect("clicked", self.cancel_button_cb)
cancel_button.set_size_request(110, 30)
self.save_button = gtk.Button()
self.save_button.set_label("Save")
self.save_button.connect("clicked", self.save_button_cb)
self.save_button.set_size_request(110, 30)
if self.name_entry.get_text() == '':
self.save_button.set_sensitive(False)
table.attach(cancel_button, 2, 3, 0, 1)
table.attach(self.save_button, 3, 4, 0, 1)
self.vbox.pack_end(table, expand=False, fill=False)
self.show_all()
def limit_description_length(self, textbuffer, iter, text, length):
buffer_bounds = textbuffer.get_bounds()
entire_text = textbuffer.get_text(*buffer_bounds)
entire_text += text
if len(entire_text)>150 or text=="\n":
textbuffer.emit_stop_by_name("insert-text")
def name_entry_changed(self, entry):
text = entry.get_text()
if text == '':
self.save_button.set_sensitive(False)
else:
self.save_button.set_sensitive(True)
def cancel_button_cb(self, button):
self.destroy()
def save_button_cb(self, button):
text = self.name_entry.get_text()
new_text = text.replace("-","")
description_buffer = self.description_entry.get_buffer()
description = description_buffer.get_text(description_buffer.get_start_iter(),description_buffer.get_end_iter())
if new_text.islower() and new_text.isalnum():
self.builder.image_details_page.image_saved = True
self.builder.customized = False
self.builder.generate_new_image(self.directory+text, description)
self.builder.recipe_model.set_in_list(text, description)
self.builder.recipe_model.set_selected_image(text)
self.builder.image_details_page.show_page(self.builder.IMAGE_GENERATED)
self.builder.image_details_page.name_field_template = text
self.builder.image_details_page.description_field_template = description
self.destroy()
else:
self.show_invalid_input_error_dialog()
def show_invalid_input_error_dialog(self):
lbl = "<b>Invalid characters in image recipe name</b>"
msg = "Image recipe names should be all lowercase and\n"
msg += "include only alphanumeric characters. The only\n"
msg += "special character you can use is the ASCII hyphen (-)."
dialog = CrumbsMessageDialog(self, lbl, gtk.MESSAGE_ERROR, msg)
button = dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
res = dialog.run()
self.name_entry.grab_focus()
dialog.destroy()
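# Illustrative sketch (not part of the original dialog): save_button_cb() above accepts a
# recipe name only if, after removing ASCII hyphens, it is all lowercase and purely
# alphanumeric; empty names are already prevented separately by the Save button
# sensitivity. The helper below restates that check for a few hypothetical names.
def _example_valid_recipe_name(name):
    stripped = name.replace("-", "")
    return bool(stripped) and stripped.islower() and stripped.isalnum()
# _example_valid_recipe_name("my-image-2")  -> True
# _example_valid_recipe_name("My_Image")    -> False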


@@ -0,0 +1,891 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import hashlib
from bb.ui.crumbs.hobwidget import hic, HobInfoButton, HobButton, HobAltButton
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
from bb.ui.crumbs.hig.proxydetailsdialog import ProxyDetailsDialog
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs.
In summary: spacing = 12px, border-width = 6px
"""
class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
(BUILD_ENV_PAGE_ID,
SHARED_STATE_PAGE_ID,
PROXIES_PAGE_ID,
OTHERS_PAGE_ID) = range(4)
(TEST_NETWORK_NONE,
TEST_NETWORK_INITIAL,
TEST_NETWORK_RUNNING,
TEST_NETWORK_PASSED,
TEST_NETWORK_FAILED,
TEST_NETWORK_CANCELED) = range(6)
TARGETS = [
("MY_TREE_MODEL_ROW", gtk.TARGET_SAME_WIDGET, 0),
("text/plain", 0, 1),
("TEXT", 0, 2),
("STRING", 0, 3),
]
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, parent, flags, handler, buttons=None):
super(SimpleSettingsDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
# class members for internal use
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirrors_list = []
self.sstatemirrors_changed = 0
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.toolchain_checkbox = None
self.setting_store = None
self.image_types_checkbuttons = {}
self.md5 = self.config_md5()
self.proxy_md5 = self.config_proxy_md5()
self.settings_changed = False
self.proxy_settings_changed = False
self.handler = handler
self.proxy_test_ran = False
self.selected_mirror_row = 0
self.new_mirror = False
# create visual elements on the dialog
self.create_visual_elements()
self.connect("response", self.response_cb)
def _get_sorted_value(self, var):
return " ".join(sorted(str(var).split())) + "\n"
def config_proxy_md5(self):
data = ("ENABLE_PROXY: " + self._get_sorted_value(self.configuration.enable_proxy))
if self.configuration.enable_proxy:
for protocol in self.configuration.proxies.keys():
data += (protocol + ": " + self._get_sorted_value(self.configuration.combine_proxy(protocol)))
return hashlib.md5(data).hexdigest()
def config_md5(self):
data = ""
for key in self.configuration.extra_setting.keys():
data += (key + ": " + self._get_sorted_value(self.configuration.extra_setting[key]))
return hashlib.md5(data).hexdigest()
def gen_proxy_entry_widget(self, protocol, parent, need_button=True, line=0):
label = gtk.Label(protocol.upper() + " proxy")
self.proxy_table.attach(label, 0, 1, line, line+1, xpadding=24)
proxy_entry = gtk.Entry()
proxy_entry.set_size_request(300, -1)
self.proxy_table.attach(proxy_entry, 1, 2, line, line+1, ypadding=4)
self.proxy_table.attach(gtk.Label(":"), 2, 3, line, line+1, xpadding=12, ypadding=4)
port_entry = gtk.Entry()
port_entry.set_size_request(60, -1)
self.proxy_table.attach(port_entry, 3, 4, line, line+1, ypadding=4)
details_button = HobAltButton("Details")
details_button.connect("clicked", self.details_cb, parent, protocol)
self.proxy_table.attach(details_button, 4, 5, line, line+1, xpadding=4, yoptions=gtk.EXPAND)
return proxy_entry, port_entry, details_button
def refresh_proxy_components(self):
self.same_checkbox.set_sensitive(self.configuration.enable_proxy)
self.http_proxy.set_text(self.configuration.combine_host_only("http"))
self.http_proxy.set_editable(self.configuration.enable_proxy)
self.http_proxy.set_sensitive(self.configuration.enable_proxy)
self.http_proxy_port.set_text(self.configuration.combine_port_only("http"))
self.http_proxy_port.set_editable(self.configuration.enable_proxy)
self.http_proxy_port.set_sensitive(self.configuration.enable_proxy)
self.http_proxy_details.set_sensitive(self.configuration.enable_proxy)
self.https_proxy.set_text(self.configuration.combine_host_only("https"))
self.https_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_port.set_text(self.configuration.combine_port_only("https"))
self.https_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.https_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy.set_text(self.configuration.combine_host_only("ftp"))
self.ftp_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_port.set_text(self.configuration.combine_port_only("ftp"))
self.ftp_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.ftp_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy.set_text(self.configuration.combine_host_only("socks"))
self.socks_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_port.set_text(self.configuration.combine_port_only("socks"))
self.socks_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.socks_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy.set_text(self.configuration.combine_host_only("cvs"))
self.cvs_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_port.set_text(self.configuration.combine_port_only("cvs"))
self.cvs_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
self.cvs_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
if self.configuration.same_proxy:
if self.http_proxy.get_text():
[w.set_text(self.http_proxy.get_text()) for w in self.same_proxy_addresses]
if self.http_proxy_port.get_text():
[w.set_text(self.http_proxy_port.get_text()) for w in self.same_proxy_ports]
def proxy_checkbox_toggled_cb(self, button):
self.configuration.enable_proxy = self.proxy_checkbox.get_active()
if not self.configuration.enable_proxy:
self.configuration.same_proxy = False
self.same_checkbox.set_active(self.configuration.same_proxy)
self.save_proxy_data()
self.refresh_proxy_components()
def same_checkbox_toggled_cb(self, button):
self.configuration.same_proxy = self.same_checkbox.get_active()
self.save_proxy_data()
self.refresh_proxy_components()
def save_proxy_data(self):
self.configuration.split_proxy("http", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
if self.configuration.same_proxy:
self.configuration.split_proxy("https", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("ftp", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("socks", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
self.configuration.split_proxy("cvs", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
else:
self.configuration.split_proxy("https", self.https_proxy.get_text() + ":" + self.https_proxy_port.get_text())
self.configuration.split_proxy("ftp", self.ftp_proxy.get_text() + ":" + self.ftp_proxy_port.get_text())
self.configuration.split_proxy("socks", self.socks_proxy.get_text() + ":" + self.socks_proxy_port.get_text())
self.configuration.split_proxy("cvs", self.cvs_proxy.get_text() + ":" + self.cvs_proxy_port.get_text())
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_YES:
if self.proxy_checkbox.get_active():
# Check that all proxy entries have a corresponding port
for proxy, port in zip(self.all_proxy_addresses, self.all_proxy_ports):
if proxy.get_text() and not port.get_text():
lbl = "<b>Enter all port numbers</b>"
msg = "Proxy servers require a port number. Please make sure you have entered a port number for each proxy server."
dialog = CrumbsMessageDialog(self, lbl, gtk.MESSAGE_WARNING, msg)
button = dialog.add_button("Close", gtk.RESPONSE_OK)
HobButton.style_button(button)
response = dialog.run()
dialog.destroy()
self.emit_stop_by_name("response")
return
self.configuration.dldir = self.dldir_text.get_text()
self.configuration.sstatedir = self.sstatedir_text.get_text()
self.configuration.sstatemirror = ""
for mirror in self.sstatemirrors_list:
if mirror[1] != "" and mirror[2].startswith("file://"):
smirror = mirror[2] + " " + mirror[1] + " \\n "
self.configuration.sstatemirror += smirror
self.configuration.bbthread = self.bb_spinner.get_value_as_int()
self.configuration.pmake = self.pmake_spinner.get_value_as_int()
self.save_proxy_data()
self.configuration.extra_setting = {}
it = self.setting_store.get_iter_first()
while it:
key = self.setting_store.get_value(it, 0)
value = self.setting_store.get_value(it, 1)
self.configuration.extra_setting[key] = value
it = self.setting_store.iter_next(it)
md5 = self.config_md5()
self.settings_changed = (self.md5 != md5)
self.proxy_settings_changed = (self.proxy_md5 != self.config_proxy_md5())
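# Note (added for clarity, not in the original file): on RESPONSE_YES the dialog
# first checks that every proxy host has a matching port, then copies the widget
# values back into self.configuration (download dir, sstate dir and mirrors,
# thread counts, proxies, extra settings) and finally recomputes the md5 digests
# to decide whether settings_changed / proxy_settings_changed should be set.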
def create_build_environment_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Parallel threads</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("BitBake parallel threads")
tooltip = "Sets the number of threads that BitBake tasks can simultaneously run. See the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#var-BB_NUMBER_THREADS\">Poky reference manual</a> for information"
bbthread_widget, self.bb_spinner = self.gen_spinner_widget(self.configuration.bbthread, 1, self.max_threads, "<b>BitBake parallel threads</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(bbthread_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Make parallel threads")
tooltip = "Sets the maximum number of threads the host can use during the build. See the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#var-PARALLEL_MAKE\">Poky reference manual</a> for information"
pmake_widget, self.pmake_spinner = self.gen_spinner_widget(self.configuration.pmake, 1, self.max_threads,"<b>Make parallel threads</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(pmake_widget, expand=False, fill=False)
advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Downloaded source code</span>'), expand=False, fill=False)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("Downloads directory")
tooltip = "Select a folder that caches the upstream project source code"
dldir_widget, self.dldir_text = self.gen_entry_widget(self.configuration.dldir, self,"<b>Downloaded source code</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(dldir_widget, expand=False, fill=False)
return advanced_vbox
def create_shared_state_page(self):
advanced_vbox = gtk.VBox(False)
advanced_vbox.set_border_width(12)
sub_vbox = gtk.VBox(False)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False, padding=24)
content = "<span>Shared state directory</span>"
tooltip = "Select a folder that caches your prebuilt results"
label = self.gen_label_info_widget(content,"<b>Shared state directory</b>" + "*" + tooltip)
sstatedir_widget, self.sstatedir_text = self.gen_entry_widget(self.configuration.sstatedir, self)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sstatedir_widget, expand=False, fill=False, padding=6)
content = "<span weight=\"bold\">Shared state mirrors</span>"
tooltip = "URLs pointing to pre-built mirrors that will speed your build. "
tooltip += "Select the \'Standard\' configuration if the structure of your "
tooltip += "mirror replicates the structure of your local shared state directory. "
tooltip += "For more information on shared state mirrors, check the <a href=\""
tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
tooltip += "poky-ref-manual.html#shared-state\">Yocto Project Reference Manual</a>."
table = self.gen_label_info_widget(content,"<b>Shared state mirrors</b>" + "*" + tooltip)
advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
sub_vbox = gtk.VBox(False)
advanced_vbox.pack_start(sub_vbox, gtk.TRUE, gtk.TRUE, 0)
if self.sstatemirrors_changed == 0:
self.sstatemirrors_changed = 1
sstatemirrors = self.configuration.sstatemirror
if sstatemirrors == "":
sm_list = ["Standard", "", "file://(.*)"]
self.sstatemirrors_list.append(sm_list)
else:
sstatemirrors = [x for x in sstatemirrors.split('\\n')]
for sstatemirror in sstatemirrors:
sstatemirror_fields = [x for x in sstatemirror.split(' ') if x.strip()]
if len(sstatemirror_fields) == 2:
if sstatemirror_fields[0] == "file://(.*)" or sstatemirror_fields[0] == "file://.*":
sm_list = ["Standard", sstatemirror_fields[1], sstatemirror_fields[0]]
else:
sm_list = ["Custom", sstatemirror_fields[1], sstatemirror_fields[0]]
self.sstatemirrors_list.append(sm_list)
sstatemirrors_widget, sstatemirrors_store = self.gen_shared_sstate_widget(self.sstatemirrors_list, self)
sub_vbox.pack_start(sstatemirrors_widget, expand=True, fill=True)
table = gtk.Table(1, 10, False)
table.set_col_spacings(6)
add_mirror_button = HobAltButton("Add mirror")
add_mirror_button.connect("clicked", self.add_mirror)
add_mirror_button.set_size_request(120,30)
table.attach(add_mirror_button, 1, 2, 0, 1, xoptions=gtk.SHRINK)
self.delete_button = HobAltButton("Delete mirror")
self.delete_button.connect("clicked", self.delete_cb)
self.delete_button.set_size_request(120, 30)
table.attach(self.delete_button, 3, 4, 0, 1, xoptions=gtk.SHRINK)
advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
return advanced_vbox
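# Note (added for clarity, not in the original file): each entry in
# sstatemirrors_list is [configuration, url, regex], where configuration is
# "Standard" for the default "file://(.*)" regex and "Custom" otherwise; on
# save, response_cb() joins the non-empty entries into SSTATE_MIRRORS as
# "<regex> <url> \n " pairs.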
def gen_shared_sstate_widget(self, sstatemirrors_list, window):
hbox = gtk.HBox(False)
sstatemirrors_store = gtk.ListStore(str, str, str)
for sstatemirror in sstatemirrors_list:
sstatemirrors_store.append(sstatemirror)
self.sstatemirrors_tv = gtk.TreeView()
self.sstatemirrors_tv.set_rules_hint(True)
self.sstatemirrors_tv.set_headers_visible(True)
tree_selection = self.sstatemirrors_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
# Enable drag and drop of rows, including row moves
self.sstatemirrors_tv.enable_model_drag_source( gtk.gdk.BUTTON1_MASK,
self.TARGETS,
gtk.gdk.ACTION_DEFAULT|
gtk.gdk.ACTION_MOVE)
self.sstatemirrors_tv.enable_model_drag_dest(self.TARGETS,
gtk.gdk.ACTION_DEFAULT)
self.sstatemirrors_tv.connect("drag_data_get", self.drag_data_get_cb)
self.sstatemirrors_tv.connect("drag_data_received", self.drag_data_received_cb)
self.scroll = gtk.ScrolledWindow()
self.scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
self.scroll.set_shadow_type(gtk.SHADOW_IN)
self.scroll.connect('size-allocate', self.scroll_changed)
self.scroll.add(self.sstatemirrors_tv)
#list store for cell renderer
m = gtk.ListStore(gobject.TYPE_STRING)
m.append(["Standard"])
m.append(["Custom"])
cell0 = gtk.CellRendererCombo()
cell0.set_property("model",m)
cell0.set_property("text-column", 0)
cell0.set_property("editable", True)
cell0.set_property("has-entry", False)
col0 = gtk.TreeViewColumn("Configuration")
col0.pack_start(cell0, False)
col0.add_attribute(cell0, "text", 0)
col0.set_cell_data_func(cell0, self.configuration_field)
self.sstatemirrors_tv.append_column(col0)
cell0.connect("edited", self.combo_changed, sstatemirrors_store)
self.cell1 = gtk.CellRendererText()
self.cell1.set_padding(5,2)
col1 = gtk.TreeViewColumn('Regex', self.cell1)
col1.set_cell_data_func(self.cell1, self.regex_field)
self.sstatemirrors_tv.append_column(col1)
self.cell1.connect("edited", self.regex_changed, sstatemirrors_store)
cell2 = gtk.CellRendererText()
cell2.set_padding(5,2)
cell2.set_property("editable", True)
col2 = gtk.TreeViewColumn('URL', cell2)
col2.set_cell_data_func(cell2, self.url_field)
self.sstatemirrors_tv.append_column(col2)
cell2.connect("edited", self.url_changed, sstatemirrors_store)
self.sstatemirrors_tv.set_model(sstatemirrors_store)
self.sstatemirrors_tv.set_cursor(self.selected_mirror_row)
hbox.pack_start(self.scroll, expand=True, fill=True)
hbox.show_all()
return hbox, sstatemirrors_store
def drag_data_get_cb(self, treeview, context, selection, target_id, etime):
treeselection = treeview.get_selection()
model, iter = treeselection.get_selected()
data = model.get_string_from_iter(iter)
selection.set(selection.target, 8, data)
def drag_data_received_cb(self, treeview, context, x, y, selection, info, etime):
model = treeview.get_model()
data = []
tree_iter = model.get_iter_from_string(selection.data)
data.append(model.get_value(tree_iter, 0))
data.append(model.get_value(tree_iter, 1))
data.append(model.get_value(tree_iter, 2))
drop_info = treeview.get_dest_row_at_pos(x, y)
if drop_info:
path, position = drop_info
iter = model.get_iter(path)
if (position == gtk.TREE_VIEW_DROP_BEFORE or position == gtk.TREE_VIEW_DROP_INTO_OR_BEFORE):
model.insert_before(iter, data)
else:
model.insert_after(iter, data)
else:
model.append(data)
if context.action == gtk.gdk.ACTION_MOVE:
context.finish(True, True, etime)
return
def delete_cb(self, button):
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
if index == 0:
self.selected_mirror_row = index
else:
self.selected_mirror_row = index - 1
self.sstatemirrors_list.pop(index)
self.refresh_shared_state_page()
if not self.sstatemirrors_list:
self.delete_button.set_sensitive(False)
def add_mirror(self, button):
self.new_mirror = True
tooltip = "Select the pre-built mirror that will speed your build"
index = len(self.sstatemirrors_list)
self.selected_mirror_row = index
sm_list = ["Standard", "", "file://(.*)"]
self.sstatemirrors_list.append(sm_list)
self.refresh_shared_state_page()
def scroll_changed(self, widget, event, data=None):
if self.new_mirror == True:
adj = widget.get_vadjustment()
adj.set_value(adj.upper - adj.page_size)
self.new_mirror = False
def combo_changed(self, widget, path, text, model):
model[path][0] = text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][0] = text
def regex_changed(self, cell, path, new_text, user_data):
user_data[path][2] = new_text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][2] = new_text
return
def url_changed(self, cell, path, new_text, user_data):
if new_text!="Enter the mirror URL" and new_text!="Match regex and replace it with this URL":
user_data[path][1] = new_text
selection = self.sstatemirrors_tv.get_selection()
tree_model, tree_iter = selection.get_selected()
index = int(tree_model.get_string_from_iter(tree_iter))
self.sstatemirrors_list[index][1] = new_text
return
def configuration_field(self, column, cell, model, iter):
cell.set_property('text', model.get_value(iter, 0))
if model.get_value(iter, 0) == "Standard":
self.cell1.set_property("sensitive", False)
self.cell1.set_property("editable", False)
else:
self.cell1.set_property("sensitive", True)
self.cell1.set_property("editable", True)
return
def regex_field(self, column, cell, model, iter):
cell.set_property('text', model.get_value(iter, 2))
return
def url_field(self, column, cell, model, iter):
text = model.get_value(iter, 1)
if text == "":
if model.get_value(iter, 0) == "Standard":
text = "Enter the mirror URL"
else:
text = "Match regex and replace it with this URL"
cell.set_property('text', text)
return
def refresh_shared_state_page(self):
page_num = self.nb.get_current_page()
self.nb.remove_page(page_num)
self.nb.insert_page(self.create_shared_state_page(), gtk.Label("Shared state"), page_num)
self.show_all()
self.nb.set_current_page(page_num)
def test_proxy_ended(self, passed):
self.proxy_test_running = False
self.set_test_proxy_state(self.TEST_NETWORK_PASSED if passed else self.TEST_NETWORK_FAILED)
self.set_sensitive(True)
self.refresh_proxy_components()
def timer_func(self):
self.test_proxy_progress.pulse()
return self.proxy_test_running
def test_network_button_cb(self, b):
self.set_test_proxy_state(self.TEST_NETWORK_RUNNING)
self.set_sensitive(False)
self.save_proxy_data()
if self.configuration.enable_proxy == True:
self.handler.set_http_proxy(self.configuration.combine_proxy("http"))
self.handler.set_https_proxy(self.configuration.combine_proxy("https"))
self.handler.set_ftp_proxy(self.configuration.combine_proxy("ftp"))
self.handler.set_socks_proxy(self.configuration.combine_proxy("socks"))
self.handler.set_cvs_proxy(self.configuration.combine_host_only("cvs"), self.configuration.combine_port_only("cvs"))
elif self.configuration.enable_proxy == False:
self.handler.set_http_proxy("")
self.handler.set_https_proxy("")
self.handler.set_ftp_proxy("")
self.handler.set_socks_proxy("")
self.handler.set_cvs_proxy("", "")
self.proxy_test_ran = True
self.proxy_test_running = True
gobject.timeout_add(100, self.timer_func)
self.handler.trigger_network_test()
def test_proxy_focus_event(self, w, direction):
if self.test_proxy_state in [self.TEST_NETWORK_PASSED, self.TEST_NETWORK_FAILED]:
self.set_test_proxy_state(self.TEST_NETWORK_INITIAL)
return False
def http_proxy_changed(self, e):
if not self.configuration.same_proxy:
return
if e == self.http_proxy:
[w.set_text(self.http_proxy.get_text()) for w in self.same_proxy_addresses]
else:
[w.set_text(self.http_proxy_port.get_text()) for w in self.same_proxy_ports]
def proxy_address_focus_out_event(self, w, direction):
text = w.get_text()
if not text:
return False
if text.find("//") == -1:
w.set_text("http://" + text)
return False
def set_test_proxy_state(self, state):
if self.test_proxy_state == state:
return
[self.proxy_table.remove(w) for w in self.test_gui_elements]
if state == self.TEST_NETWORK_INITIAL:
self.proxy_table.attach(self.test_network_button, 1, 2, 5, 6)
self.test_network_button.show()
elif state == self.TEST_NETWORK_RUNNING:
self.test_proxy_progress.set_rcstyle("running")
self.test_proxy_progress.set_text("Testing network configuration")
self.proxy_table.attach(self.test_proxy_progress, 0, 5, 5, 6, xpadding=4)
self.test_proxy_progress.show()
else: # passed or failed
self.dummy_progress.update(1.0)
if state == self.TEST_NETWORK_PASSED:
self.dummy_progress.set_text("Your network is properly configured")
self.dummy_progress.set_rcstyle("running")
else:
self.dummy_progress.set_text("Network test failed")
self.dummy_progress.set_rcstyle("fail")
self.proxy_table.attach(self.dummy_progress, 0, 4, 5, 6)
self.proxy_table.attach(self.retest_network_button, 4, 5, 5, 6, xpadding=4)
self.dummy_progress.show()
self.retest_network_button.show()
self.test_proxy_state = state
def create_network_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
self.same_proxy_addresses = []
self.same_proxy_ports = []
self.all_proxy_ports = []
self.all_proxy_addresses = []
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = self.gen_label_widget("<span weight=\"bold\">Set the proxies used when fetching source code</span>")
tooltip = "Set the proxies used when fetching source code. A blank field uses a direct internet connection."
info = HobInfoButton("<span weight=\"bold\">Set the proxies used when fetching source code</span>" + "*" + tooltip, self)
hbox = gtk.HBox(False, 12)
hbox.pack_start(label, expand=True, fill=True)
hbox.pack_start(info, expand=False, fill=False)
sub_vbox.pack_start(hbox, expand=False, fill=False)
proxy_test_focus = []
self.direct_checkbox = gtk.RadioButton(None, "Direct network connection")
proxy_test_focus.append(self.direct_checkbox)
self.direct_checkbox.set_tooltip_text("Check this box to use a direct internet connection with no proxy")
self.direct_checkbox.set_active(not self.configuration.enable_proxy)
sub_vbox.pack_start(self.direct_checkbox, expand=False, fill=False)
self.proxy_checkbox = gtk.RadioButton(self.direct_checkbox, "Manual proxy configuration")
proxy_test_focus.append(self.proxy_checkbox)
self.proxy_checkbox.set_tooltip_text("Check this box to manually set up a specific proxy")
self.proxy_checkbox.set_active(self.configuration.enable_proxy)
sub_vbox.pack_start(self.proxy_checkbox, expand=False, fill=False)
self.same_checkbox = gtk.CheckButton("Use the HTTP proxy for all protocols")
proxy_test_focus.append(self.same_checkbox)
self.same_checkbox.set_tooltip_text("Check this box to use the HTTP proxy for all five proxies")
self.same_checkbox.set_active(self.configuration.same_proxy)
hbox = gtk.HBox(False, 12)
hbox.pack_start(self.same_checkbox, expand=False, fill=False, padding=24)
sub_vbox.pack_start(hbox, expand=False, fill=False)
self.proxy_table = gtk.Table(6, 5, False)
self.http_proxy, self.http_proxy_port, self.http_proxy_details = self.gen_proxy_entry_widget(
"http", self, True, 0)
proxy_test_focus +=[self.http_proxy, self.http_proxy_port]
self.http_proxy.connect("changed", self.http_proxy_changed)
self.http_proxy_port.connect("changed", self.http_proxy_changed)
self.https_proxy, self.https_proxy_port, self.https_proxy_details = self.gen_proxy_entry_widget(
"https", self, True, 1)
proxy_test_focus += [self.https_proxy, self.https_proxy_port]
self.same_proxy_addresses.append(self.https_proxy)
self.same_proxy_ports.append(self.https_proxy_port)
self.ftp_proxy, self.ftp_proxy_port, self.ftp_proxy_details = self.gen_proxy_entry_widget(
"ftp", self, True, 2)
proxy_test_focus += [self.ftp_proxy, self.ftp_proxy_port]
self.same_proxy_addresses.append(self.ftp_proxy)
self.same_proxy_ports.append(self.ftp_proxy_port)
self.socks_proxy, self.socks_proxy_port, self.socks_proxy_details = self.gen_proxy_entry_widget(
"socks", self, True, 3)
proxy_test_focus += [self.socks_proxy, self.socks_proxy_port]
self.same_proxy_addresses.append(self.socks_proxy)
self.same_proxy_ports.append(self.socks_proxy_port)
self.cvs_proxy, self.cvs_proxy_port, self.cvs_proxy_details = self.gen_proxy_entry_widget(
"cvs", self, True, 4)
proxy_test_focus += [self.cvs_proxy, self.cvs_proxy_port]
self.same_proxy_addresses.append(self.cvs_proxy)
self.same_proxy_ports.append(self.cvs_proxy_port)
self.all_proxy_ports = self.same_proxy_ports + [self.http_proxy_port]
self.all_proxy_addresses = self.same_proxy_addresses + [self.http_proxy]
sub_vbox.pack_start(self.proxy_table, expand=False, fill=False)
self.proxy_table.show_all()
# Create the graphical elements for the network test feature, but don't display them yet
self.test_network_button = HobAltButton("Test network configuration")
self.test_network_button.connect("clicked", self.test_network_button_cb)
self.test_proxy_progress = HobProgressBar()
self.dummy_progress = HobProgressBar()
self.retest_network_button = HobAltButton("Retest")
self.retest_network_button.connect("clicked", self.test_network_button_cb)
self.test_gui_elements = [self.test_network_button, self.test_proxy_progress, self.dummy_progress, self.retest_network_button]
# Initialize the network tester
self.test_proxy_state = self.TEST_NETWORK_NONE
self.set_test_proxy_state(self.TEST_NETWORK_INITIAL)
self.proxy_test_passed_id = self.handler.connect("network-passed", lambda h:self.test_proxy_ended(True))
self.proxy_test_failed_id = self.handler.connect("network-failed", lambda h:self.test_proxy_ended(False))
[w.connect("focus-in-event", self.test_proxy_focus_event) for w in proxy_test_focus]
[w.connect("focus-out-event", self.proxy_address_focus_out_event) for w in self.all_proxy_addresses]
self.direct_checkbox.connect("toggled", self.proxy_checkbox_toggled_cb)
self.proxy_checkbox.connect("toggled", self.proxy_checkbox_toggled_cb)
self.same_checkbox.connect("toggled", self.same_checkbox_toggled_cb)
self.refresh_proxy_components()
return advanced_vbox
def switch_to_page(self, page_id):
self.nb.set_current_page(page_id)
def details_cb(self, button, parent, protocol):
self.save_proxy_data()
dialog = ProxyDetailsDialog(title = protocol.upper() + " Proxy Details",
user = self.configuration.proxies[protocol][1],
passwd = self.configuration.proxies[protocol][2],
parent = parent,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)
dialog.add_button(gtk.STOCK_CLOSE, gtk.RESPONSE_OK)
response = dialog.run()
if response == gtk.RESPONSE_OK:
self.configuration.proxies[protocol][1] = dialog.user
self.configuration.proxies[protocol][2] = dialog.passwd
self.refresh_proxy_components()
dialog.destroy()
def rootfs_combo_changed_cb(self, rootfs_combo, all_package_format, check_hbox):
combo_item = self.rootfs_combo.get_active_text()
for child in check_hbox.get_children():
if isinstance(child, gtk.CheckButton):
check_hbox.remove(child)
for format in all_package_format:
if format != combo_item:
check_button = gtk.CheckButton(format)
check_hbox.pack_start(check_button, expand=False, fill=False)
check_hbox.show_all()
def gen_pkgfmt_widget(self, curr_package_format, all_package_format, tooltip_combo="", tooltip_extra=""):
pkgfmt_hbox = gtk.HBox(False, 24)
rootfs_vbox = gtk.VBox(False, 6)
pkgfmt_hbox.pack_start(rootfs_vbox, expand=False, fill=False)
label = self.gen_label_widget("Root file system package format")
rootfs_vbox.pack_start(label, expand=False, fill=False)
rootfs_format = ""
if curr_package_format:
rootfs_format = curr_package_format.split()[0]
rootfs_format_widget, rootfs_combo = self.gen_combo_widget(rootfs_format, all_package_format, tooltip_combo)
rootfs_vbox.pack_start(rootfs_format_widget, expand=False, fill=False)
extra_vbox = gtk.VBox(False, 6)
pkgfmt_hbox.pack_start(extra_vbox, expand=False, fill=False)
label = self.gen_label_widget("Additional package formats")
extra_vbox.pack_start(label, expand=False, fill=False)
check_hbox = gtk.HBox(False, 12)
extra_vbox.pack_start(check_hbox, expand=False, fill=False)
for format in all_package_format:
if format != rootfs_format:
check_button = gtk.CheckButton(format)
is_active = (format in curr_package_format.split())
check_button.set_active(is_active)
check_hbox.pack_start(check_button, expand=False, fill=False)
info = HobInfoButton(tooltip_extra, self)
check_hbox.pack_end(info, expand=False, fill=False)
rootfs_combo.connect("changed", self.rootfs_combo_changed_cb, all_package_format, check_hbox)
pkgfmt_hbox.show_all()
return pkgfmt_hbox, rootfs_combo, check_hbox
def editable_settings_cell_edited(self, cell, path_string, new_text, model):
it = model.get_iter_from_string(path_string)
column = cell.get_data("column")
model.set(it, column, new_text)
def editable_settings_add_item_clicked(self, button, model):
new_item = ["##KEY##", "##VALUE##"]
iter = model.append()
model.set (iter,
0, new_item[0],
1, new_item[1],
)
def editable_settings_remove_item_clicked(self, button, treeview):
selection = treeview.get_selection()
model, iter = selection.get_selected()
if iter:
path = model.get_path(iter)[0]
model.remove(iter)
def gen_editable_settings(self, setting, tooltip=""):
setting_hbox = gtk.HBox(False, 12)
vbox = gtk.VBox(False, 12)
setting_hbox.pack_start(vbox, expand=True, fill=True)
setting_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
for key in setting.keys():
setting_store.set(setting_store.append(), 0, key, 1, setting[key])
setting_tree = gtk.TreeView(setting_store)
setting_tree.set_headers_visible(True)
setting_tree.set_size_request(300, 100)
col = gtk.TreeViewColumn('Key')
col.set_min_width(100)
col.set_max_width(150)
col.set_resizable(True)
col1 = gtk.TreeViewColumn('Value')
col1.set_min_width(100)
col1.set_max_width(150)
col1.set_resizable(True)
setting_tree.append_column(col)
setting_tree.append_column(col1)
cell = gtk.CellRendererText()
cell.set_property('width-chars', 10)
cell.set_property('editable', True)
cell.set_data("column", 0)
cell.connect("edited", self.editable_settings_cell_edited, setting_store)
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 10)
cell1.set_property('editable', True)
cell1.set_data("column", 1)
cell1.connect("edited", self.editable_settings_cell_edited, setting_store)
col.pack_start(cell, True)
col1.pack_end(cell1, True)
col.set_attributes(cell, text=0)
col1.set_attributes(cell1, text=1)
scroll = gtk.ScrolledWindow()
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
scroll.add(setting_tree)
vbox.pack_start(scroll, expand=True, fill=True)
# some buttons
hbox = gtk.HBox(True, 6)
vbox.pack_start(hbox, False, False)
button = gtk.Button(stock=gtk.STOCK_ADD)
button.connect("clicked", self.editable_settings_add_item_clicked, setting_store)
hbox.pack_start(button)
button = gtk.Button(stock=gtk.STOCK_REMOVE)
button.connect("clicked", self.editable_settings_remove_item_clicked, setting_tree)
hbox.pack_start(button)
info = HobInfoButton(tooltip, self)
setting_hbox.pack_start(info, expand=False, fill=False)
return setting_hbox, setting_store
def create_others_page(self):
advanced_vbox = gtk.VBox(False, 6)
advanced_vbox.set_border_width(6)
sub_vbox = gtk.VBox(False, 6)
advanced_vbox.pack_start(sub_vbox, expand=True, fill=True)
label = self.gen_label_widget("<span weight=\"bold\">Add your own variables:</span>")
tooltip = "These are key/value pairs for your extra settings. Click \'Add\' and then directly edit the key and the value"
setting_widget, self.setting_store = self.gen_editable_settings(self.configuration.extra_setting,"<b>Add your own variables</b>" + "*" + tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(setting_widget, expand=True, fill=True)
return advanced_vbox
def create_visual_elements(self):
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_build_environment_page(), gtk.Label("Build environment"))
self.nb.append_page(self.create_shared_state_page(), gtk.Label("Shared state"))
self.nb.append_page(self.create_network_page(), gtk.Label("Network"))
self.nb.append_page(self.create_others_page(), gtk.Label("Others"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def destroy(self):
self.handler.disconnect(self.proxy_test_passed_id)
self.handler.disconnect(self.proxy_test_failed_id)
super(SimpleSettingsDialog, self).destroy()


@@ -0,0 +1,645 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import logging
import ast
from bb.ui.crumbs.runningbuild import RunningBuild
class HobHandler(gobject.GObject):
"""
This object does BitBake event handling for the hob gui.
"""
__gsignals__ = {
"package-formats-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"config-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_PYOBJECT,)),
"command-succeeded" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,)),
"command-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"parsing-warning" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"sanity-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_INT)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"data-generated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"parsing-started" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"parsing" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"parsing-completed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"recipe-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"package-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"network-passed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"network-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
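# Note (added for clarity, not in the original file): each entry above registers
# a custom GObject signal on HobHandler. The GUI connects handlers with
# handler.connect("network-passed", callback), and HobHandler raises the signal
# with self.emit("network-passed") when the matching BitBake event arrives.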
(GENERATE_CONFIGURATION, GENERATE_RECIPES, GENERATE_PACKAGES, GENERATE_IMAGE, POPULATE_PACKAGEINFO, SANITY_CHECK, NETWORK_TEST) = range(7)
(SUB_PATH_LAYERS, SUB_FILES_DISTRO, SUB_FILES_MACH, SUB_FILES_SDKMACH, SUB_MATCH_CLASS, SUB_PARSE_CONFIG, SUB_SANITY_CHECK,
SUB_GNERATE_TGTS, SUB_GENERATE_PKGINFO, SUB_BUILD_RECIPES, SUB_BUILD_IMAGE, SUB_NETWORK_TEST) = range(12)
def __init__(self, server, recipe_model, package_model):
super(HobHandler, self).__init__()
self.build = RunningBuild(sequential=True)
self.recipe_model = recipe_model
self.package_model = package_model
self.commands_async = []
self.generating = False
self.current_phase = None
self.building = False
self.recipe_queue = []
self.package_queue = []
self.server = server
self.error_msg = ""
self.initcmd = None
self.parsing = False
def set_busy(self):
if not self.generating:
self.emit("generating-data")
self.generating = True
def clear_busy(self):
if self.generating:
self.emit("data-generated")
self.generating = False
def runCommand(self, commandline):
try:
result, error = self.server.runCommand(commandline)
if error:
raise Exception("Error running command '%s': %s" % (commandline, error))
return result
except Exception as e:
self.commands_async = []
self.clear_busy()
self.emit("command-failed", "Hob Exception - %s" % (str(e)))
return None
def run_next_command(self, initcmd=None):
if initcmd != None:
self.initcmd = initcmd
if self.commands_async:
self.set_busy()
next_command = self.commands_async.pop(0)
else:
self.clear_busy()
if self.initcmd != None:
self.emit("command-succeeded", self.initcmd)
return
if next_command == self.SUB_PATH_LAYERS:
self.runCommand(["findConfigFilePath", "bblayers.conf"])
elif next_command == self.SUB_FILES_DISTRO:
self.runCommand(["findConfigFiles", "DISTRO"])
elif next_command == self.SUB_FILES_MACH:
self.runCommand(["findConfigFiles", "MACHINE"])
elif next_command == self.SUB_FILES_SDKMACH:
self.runCommand(["findConfigFiles", "MACHINE-SDK"])
elif next_command == self.SUB_MATCH_CLASS:
self.runCommand(["findFilesMatchingInDir", "rootfs_", "classes"])
elif next_command == self.SUB_PARSE_CONFIG:
self.runCommand(["resetCooker"])
elif next_command == self.SUB_GNERATE_TGTS:
self.runCommand(["generateTargetsTree", "classes/image.bbclass", []])
elif next_command == self.SUB_GENERATE_PKGINFO:
self.runCommand(["triggerEvent", "bb.event.RequestPackageInfo()"])
elif next_command == self.SUB_SANITY_CHECK:
self.runCommand(["triggerEvent", "bb.event.SanityCheck()"])
elif next_command == self.SUB_NETWORK_TEST:
self.runCommand(["triggerEvent", "bb.event.NetworkTest()"])
elif next_command == self.SUB_BUILD_RECIPES:
self.clear_busy()
self.building = True
self.runCommand(["buildTargets", self.recipe_queue, self.default_task])
self.recipe_queue = []
elif next_command == self.SUB_BUILD_IMAGE:
self.clear_busy()
self.building = True
target = self.image
if self.base_image:
# Request the build of a custom image
self.generate_hob_base_image(target)
self.set_var_in_file("LINGUAS_INSTALL", "", "local.conf")
hobImage = self.runCommand(["matchFile", target + ".bb"])
if self.base_image != self.recipe_model.__custom_image__:
baseImage = self.runCommand(["matchFile", self.base_image + ".bb"])
version = self.runCommand(["generateNewImage", hobImage, baseImage, self.package_queue, True, ""])
target += version
self.recipe_model.set_custom_image_version(version)
targets = [target]
if self.toolchain_packages:
self.set_var_in_file("TOOLCHAIN_TARGET_TASK", " ".join(self.toolchain_packages), "local.conf")
targets.append(target + ":do_populate_sdk")
self.runCommand(["buildTargets", targets, self.default_task])
def display_error(self):
self.clear_busy()
self.emit("command-failed", self.error_msg)
self.error_msg = ""
if self.building:
self.building = False
def handle_event(self, event):
if not event:
return
if self.building:
self.current_phase = "building"
self.build.handle_event(event)
if isinstance(event, bb.event.PackageInfo):
self.package_model.populate(event._pkginfolist)
self.emit("package-populated")
self.run_next_command()
elif isinstance(event, bb.event.SanityCheckPassed):
reparse = self.runCommand(["getVariable", "BB_INVALIDCONF"]) or None
if reparse is True:
self.set_var_in_file("BB_INVALIDCONF", False, "local.conf")
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.insert(0, self.SUB_PARSE_CONFIG)
self.run_next_command()
elif isinstance(event, bb.event.SanityCheckFailed):
self.emit("sanity-failed", event._msg, event._network_error)
elif isinstance(event, logging.LogRecord):
if not self.building:
if event.levelno >= logging.ERROR:
formatter = bb.msg.BBLogFormatter()
msg = formatter.format(event)
self.error_msg += msg + '\n'
elif event.levelno >= logging.WARNING and self.parsing == True:
formatter = bb.msg.BBLogFormatter()
msg = formatter.format(event)
warn_msg = msg + '\n'
self.emit("parsing-warning", warn_msg)
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.current_phase = "data generation"
if event._model:
self.recipe_model.populate(event._model)
self.emit("recipe-populated")
elif isinstance(event, bb.event.ConfigFilesFound):
self.current_phase = "configuration lookup"
var = event._variable
values = event._values
values.sort()
self.emit("config-updated", var, values)
elif isinstance(event, bb.event.ConfigFilePathFound):
self.current_phase = "configuration lookup"
elif isinstance(event, bb.event.FilesMatchingFound):
self.current_phase = "configuration lookup"
# FIXME: hard coding, should at least be a variable shared between
# here and the caller
if event._pattern == "rootfs_":
formats = []
for match in event._matches:
classname, sep, cls = match.rpartition(".")
fs, sep, format = classname.rpartition("_")
formats.append(format)
formats.sort()
self.emit("package-formats-updated", formats)
elif isinstance(event, bb.command.CommandCompleted):
self.current_phase = None
self.run_next_command()
elif isinstance(event, bb.command.CommandFailed):
if event.error not in ("Forced shutdown", "Stopped build"):
self.error_msg += event.error
self.commands_async = []
self.display_error()
elif isinstance(event, (bb.event.ParseStarted,
bb.event.CacheLoadStarted,
bb.event.TreeDataPreparationStarted,
)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = 0
message["total"] = None
message["title"] = "Parsing recipes"
self.emit("parsing-started", message)
if isinstance(event, bb.event.ParseStarted):
self.parsing = True
elif isinstance(event, (bb.event.ParseProgress,
bb.event.CacheLoadProgress,
bb.event.TreeDataPreparationProgress)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = event.current
message["total"] = event.total
message["title"] = "Parsing recipes"
self.emit("parsing", message)
elif isinstance(event, (bb.event.ParseCompleted,
bb.event.CacheLoadCompleted,
bb.event.TreeDataPreparationCompleted)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = event.total
message["total"] = event.total
message["title"] = "Parsing recipes"
self.emit("parsing-completed", message)
if isinstance(event, bb.event.ParseCompleted):
self.parsing = False
elif isinstance(event, bb.event.NetworkTestFailed):
self.emit("network-failed")
self.run_next_command()
elif isinstance(event, bb.event.NetworkTestPassed):
self.emit("network-passed")
self.run_next_command()
if self.error_msg and not self.commands_async:
self.display_error()
return
def init_cooker(self):
self.runCommand(["createConfigFile", ".hob.conf"])
def set_extra_inherit(self, bbclass):
self.append_var_in_file("INHERIT", bbclass, ".hob.conf")
def set_bblayers(self, bblayers):
self.set_var_in_file("BBLAYERS", " ".join(bblayers), "bblayers.conf")
def set_machine(self, machine):
if machine:
self.early_assign_var_in_file("MACHINE", machine, "local.conf")
def set_sdk_machine(self, sdk_machine):
self.set_var_in_file("SDKMACHINE", sdk_machine, "local.conf")
def set_image_fstypes(self, image_fstypes):
self.set_var_in_file("IMAGE_FSTYPES", image_fstypes, "local.conf")
def set_distro(self, distro):
self.set_var_in_file("DISTRO", distro, "local.conf")
def set_package_format(self, format):
package_classes = ""
for pkgfmt in format.split():
package_classes += ("package_%s" % pkgfmt + " ")
self.set_var_in_file("PACKAGE_CLASSES", package_classes, "local.conf")
def set_bbthreads(self, threads):
self.set_var_in_file("BB_NUMBER_THREADS", threads, "local.conf")
def set_pmake(self, threads):
pmake = "-j %s" % threads
self.set_var_in_file("PARALLEL_MAKE", pmake, "local.conf")
def set_dl_dir(self, directory):
self.set_var_in_file("DL_DIR", directory, "local.conf")
def set_sstate_dir(self, directory):
self.set_var_in_file("SSTATE_DIR", directory, "local.conf")
def set_sstate_mirrors(self, url):
self.set_var_in_file("SSTATE_MIRRORS", url, "local.conf")
def set_extra_size(self, image_extra_size):
self.set_var_in_file("IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size), "local.conf")
def set_rootfs_size(self, image_rootfs_size):
self.set_var_in_file("IMAGE_ROOTFS_SIZE", str(image_rootfs_size), "local.conf")
def set_incompatible_license(self, incompat_license):
self.set_var_in_file("INCOMPATIBLE_LICENSE", incompat_license, "local.conf")
def set_extra_setting(self, extra_setting):
self.set_var_in_file("EXTRA_SETTING", extra_setting, "local.conf")
def set_extra_config(self, extra_setting):
old_extra_setting = self.runCommand(["getVariable", "EXTRA_SETTING"]) or {}
old_extra_setting = str(old_extra_setting)
old_extra_setting = ast.literal_eval(old_extra_setting)
if not isinstance(old_extra_setting, dict):
old_extra_setting = {}
# settings not changed
if old_extra_setting == extra_setting:
return
# remove the old EXTRA_SETTING variable
self.remove_var_from_file("EXTRA_SETTING")
# remove old settings from conf
for key in old_extra_setting.keys():
if key not in extra_setting:
self.remove_var_from_file(key)
# add new settings
for key, value in extra_setting.iteritems():
self.set_var_in_file(key, value, "local.conf")
if extra_setting:
self.set_var_in_file("EXTRA_SETTING", extra_setting, "local.conf")
def set_http_proxy(self, http_proxy):
self.set_var_in_file("http_proxy", http_proxy, "local.conf")
def set_https_proxy(self, https_proxy):
self.set_var_in_file("https_proxy", https_proxy, "local.conf")
def set_ftp_proxy(self, ftp_proxy):
self.set_var_in_file("ftp_proxy", ftp_proxy, "local.conf")
def set_socks_proxy(self, socks_proxy):
self.set_var_in_file("all_proxy", socks_proxy, "local.conf")
def set_cvs_proxy(self, host, port):
self.set_var_in_file("CVS_PROXY_HOST", host, "local.conf")
self.set_var_in_file("CVS_PROXY_PORT", port, "local.conf")
def request_package_info(self):
self.commands_async.append(self.SUB_GENERATE_PKGINFO)
self.run_next_command(self.POPULATE_PACKAGEINFO)
def trigger_sanity_check(self):
self.commands_async.append(self.SUB_SANITY_CHECK)
self.run_next_command(self.SANITY_CHECK)
def trigger_network_test(self):
self.commands_async.append(self.SUB_NETWORK_TEST)
self.run_next_command(self.NETWORK_TEST)
def generate_configuration(self):
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_PATH_LAYERS)
self.commands_async.append(self.SUB_FILES_DISTRO)
self.commands_async.append(self.SUB_FILES_MACH)
self.commands_async.append(self.SUB_FILES_SDKMACH)
self.commands_async.append(self.SUB_MATCH_CLASS)
self.run_next_command(self.GENERATE_CONFIGURATION)
def generate_recipes(self):
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_GNERATE_TGTS)
self.run_next_command(self.GENERATE_RECIPES)
def generate_packages(self, tgts, default_task="build"):
targets = []
targets.extend(tgts)
self.recipe_queue = targets
self.default_task = default_task
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_BUILD_RECIPES)
self.run_next_command(self.GENERATE_PACKAGES)
def generate_image(self, image, base_image, image_packages=None, toolchain_packages=None, default_task="build"):
self.image = image
self.base_image = base_image
if image_packages:
self.package_queue = image_packages
else:
self.package_queue = []
if toolchain_packages:
self.toolchain_packages = toolchain_packages
else:
self.toolchain_packages = []
self.default_task = default_task
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.append(self.SUB_PARSE_CONFIG)
self.commands_async.append(self.SUB_BUILD_IMAGE)
self.run_next_command(self.GENERATE_IMAGE)
def generate_new_image(self, image, base_image, package_queue, description):
if base_image:
base_image = self.runCommand(["matchFile", self.base_image + ".bb"])
self.runCommand(["generateNewImage", image, base_image, package_queue, False, description])
def generate_hob_base_image(self, hob_image):
image_dir = self.get_topdir() + "/recipes/images/"
recipe_name = hob_image + ".bb"
self.ensure_dir(image_dir)
self.generate_new_image(image_dir + recipe_name, None, [], "")
def ensure_dir(self, directory):
self.runCommand(["ensureDir", directory])
def build_succeeded_async(self):
self.building = False
def build_failed_async(self):
self.initcmd = None
self.commands_async = []
self.building = False
def cancel_parse(self):
self.runCommand(["stateForceShutdown"])
def cancel_build(self, force=False):
if force:
# Force the cooker to stop as quickly as possible
self.runCommand(["stateForceShutdown"])
else:
# Wait for tasks to complete before shutting down, this helps
# leave the workdir in a usable state
self.runCommand(["stateShutdown"])
def reset_build(self):
self.build.reset()
def get_logfile(self):
return self.server.runCommand(["getVariable", "BB_CONSOLELOG"])[0]
def get_topdir(self):
return self.runCommand(["getVariable", "TOPDIR"]) or ""
def _remove_redundant(self, string):
ret = []
for i in string.split():
if i not in ret:
ret.append(i)
return " ".join(ret)
def set_var_in_file(self, var, val, default_file=None):
self.runCommand(["enableDataTracking"])
self.server.runCommand(["setVarFile", var, val, default_file, "set"])
self.runCommand(["disableDataTracking"])
def early_assign_var_in_file(self, var, val, default_file=None):
self.runCommand(["enableDataTracking"])
self.server.runCommand(["setVarFile", var, val, default_file, "earlyAssign"])
self.runCommand(["disableDataTracking"])
def remove_var_from_file(self, var):
self.server.runCommand(["removeVarFile", var])
def append_var_in_file(self, var, val, default_file=None):
self.server.runCommand(["setVarFile", var, val, default_file, "append"])
def append_to_bbfiles(self, val):
bbfiles = self.runCommand(["getVariable", "BBFILES", "False"]) or ""
bbfiles = bbfiles.split()
if val not in bbfiles:
self.append_var_in_file("BBFILES", val, "bblayers.conf")
def get_parameters(self):
# retrieve the parameters from bitbake
params = {}
params["core_base"] = self.runCommand(["getVariable", "COREBASE"]) or ""
params["layer"] = self.runCommand(["getVariable", "BBLAYERS"]) or ""
params["layers_non_removable"] = self.runCommand(["getVariable", "BBLAYERS_NON_REMOVABLE"]) or ""
params["dldir"] = self.runCommand(["getVariable", "DL_DIR"]) or ""
params["machine"] = self.runCommand(["getVariable", "MACHINE"]) or ""
params["distro"] = self.runCommand(["getVariable", "DISTRO"]) or "defaultsetup"
params["pclass"] = self.runCommand(["getVariable", "PACKAGE_CLASSES"]) or ""
params["sstatedir"] = self.runCommand(["getVariable", "SSTATE_DIR"]) or ""
params["sstatemirror"] = self.runCommand(["getVariable", "SSTATE_MIRRORS"]) or ""
num_threads = self.runCommand(["getCpuCount"])
if not num_threads:
num_threads = 1
max_threads = 65536
else:
try:
num_threads = int(num_threads)
max_threads = 16 * num_threads
except:
num_threads = 1
max_threads = 65536
params["max_threads"] = max_threads
bbthread = self.runCommand(["getVariable", "BB_NUMBER_THREADS"])
if not bbthread:
bbthread = num_threads
else:
try:
bbthread = int(bbthread)
except:
bbthread = num_threads
params["bbthread"] = bbthread
pmake = self.runCommand(["getVariable", "PARALLEL_MAKE"])
if not pmake:
pmake = num_threads
elif isinstance(pmake, int):
pass
else:
try:
pmake = int(pmake.lstrip("-j "))
except:
pmake = num_threads
params["pmake"] = "-j %s" % pmake
params["image_addr"] = self.runCommand(["getVariable", "DEPLOY_DIR_IMAGE"]) or ""
image_extra_size = self.runCommand(["getVariable", "IMAGE_ROOTFS_EXTRA_SPACE"])
if not image_extra_size:
image_extra_size = 0
else:
try:
image_extra_size = int(image_extra_size)
except:
image_extra_size = 0
params["image_extra_size"] = image_extra_size
image_rootfs_size = self.runCommand(["getVariable", "IMAGE_ROOTFS_SIZE"])
if not image_rootfs_size:
image_rootfs_size = 0
else:
try:
image_rootfs_size = int(image_rootfs_size)
except:
image_rootfs_size = 0
params["image_rootfs_size"] = image_rootfs_size
image_overhead_factor = self.runCommand(["getVariable", "IMAGE_OVERHEAD_FACTOR"])
if not image_overhead_factor:
image_overhead_factor = 1
else:
try:
image_overhead_factor = float(image_overhead_factor)
except:
image_overhead_factor = 1
params['image_overhead_factor'] = image_overhead_factor
params["incompat_license"] = self._remove_redundant(self.runCommand(["getVariable", "INCOMPATIBLE_LICENSE"]) or "")
params["sdk_machine"] = self.runCommand(["getVariable", "SDKMACHINE"]) or self.runCommand(["getVariable", "SDK_ARCH"]) or ""
params["image_fstypes"] = self._remove_redundant(self.runCommand(["getVariable", "IMAGE_FSTYPES"]) or "")
params["image_types"] = self._remove_redundant(self.runCommand(["getVariable", "IMAGE_TYPES"]) or "")
params["conf_version"] = self.runCommand(["getVariable", "CONF_VERSION"]) or ""
params["lconf_version"] = self.runCommand(["getVariable", "LCONF_VERSION"]) or ""
params["runnable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_IMAGE_TYPES"]) or "")
params["runnable_machine_patterns"] = self._remove_redundant(self.runCommand(["getVariable", "RUNNABLE_MACHINE_PATTERNS"]) or "")
params["deployable_image_types"] = self._remove_redundant(self.runCommand(["getVariable", "DEPLOYABLE_IMAGE_TYPES"]) or "")
params["kernel_image_type"] = self.runCommand(["getVariable", "KERNEL_IMAGETYPE"]) or ""
params["tmpdir"] = self.runCommand(["getVariable", "TMPDIR"]) or ""
params["distro_version"] = self.runCommand(["getVariable", "DISTRO_VERSION"]) or ""
params["target_os"] = self.runCommand(["getVariable", "TARGET_OS"]) or ""
params["target_arch"] = self.runCommand(["getVariable", "TARGET_ARCH"]) or ""
params["tune_pkgarch"] = self.runCommand(["getVariable", "TUNE_PKGARCH"]) or ""
params["bb_version"] = self.runCommand(["getVariable", "BB_MIN_VERSION"]) or ""
params["default_task"] = self.runCommand(["getVariable", "BB_DEFAULT_TASK"]) or "build"
params["socks_proxy"] = self.runCommand(["getVariable", "all_proxy"]) or ""
params["http_proxy"] = self.runCommand(["getVariable", "http_proxy"]) or ""
params["ftp_proxy"] = self.runCommand(["getVariable", "ftp_proxy"]) or ""
params["https_proxy"] = self.runCommand(["getVariable", "https_proxy"]) or ""
params["cvs_proxy_host"] = self.runCommand(["getVariable", "CVS_PROXY_HOST"]) or ""
params["cvs_proxy_port"] = self.runCommand(["getVariable", "CVS_PROXY_PORT"]) or ""
params["image_white_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_WHITE_PATTERN"]) or ""
params["image_black_pattern"] = self.runCommand(["getVariable", "BBUI_IMAGE_BLACK_PATTERN"]) or ""
return params


@@ -0,0 +1,903 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
from bb.ui.crumbs.hobpages import HobPage
#
# PackageListModel
#
class PackageListModel(gtk.ListStore):
"""
This class defines a gtk.ListStore subclass which will convert the output
of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_RCP, COL_BINB, COL_INC, COL_FADE_INC, COL_FONT, COL_FLIST) = range(15)
__gsignals__ = {
"package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
__toolchain_required_packages__ = ["packagegroup-core-standalone-sdk-target", "packagegroup-core-standalone-sdk-target-dbg"]
def __init__(self):
self.rprov_pkg = {}
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING)
self.sort_column_id, self.sort_order = PackageListModel.COL_NAME, gtk.SORT_ASCENDING
"""
Find the model path for the item_name
Returns the path in the model or None
"""
def find_path_for_item(self, item_name):
pkg = item_name
if item_name not in self.pn_path.keys():
if item_name not in self.rprov_pkg.keys():
return None
pkg = self.rprov_pkg[item_name]
if pkg not in self.pn_path.keys():
return None
return self.pn_path[pkg]
def find_item_for_path(self, item_path):
return self[item_path][self.COL_NAME]
"""
Helper function to determine whether an item is an item specified by filter
"""
def tree_model_filter(self, model, it, filter):
name = model.get_value(it, self.COL_NAME)
for key in filter.keys():
if key == self.COL_NAME:
if filter[key] != 'Search packages by name':
if name and filter[key] not in name:
return False
else:
if model.get_value(it, key) not in filter[key]:
return False
self.filtered_nb += 1
return True
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
"""
def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
model = self.filter_new()
self.filtered_nb = 0
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
sort.connect ('sort-column-changed', self.sort_column_changed_cb)
if initial:
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
elif excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
elif included_items_ahead:
sort.set_default_sort_func(self.include_item_sort_func, search_data)
else:
if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
sort.set_default_sort_func(self.sort_func, search_data)
else:
sort.set_sort_column_id(self.sort_column_id, self.sort_order)
sort.set_default_sort_func(None)
sort.set_sort_func(PackageListModel.COL_INC, self.sort_column, PackageListModel.COL_INC)
sort.set_sort_func(PackageListModel.COL_SIZE, self.sort_column, PackageListModel.COL_SIZE)
sort.set_sort_func(PackageListModel.COL_BINB, self.sort_binb_column)
sort.set_sort_func(PackageListModel.COL_RCP, self.sort_column, PackageListModel.COL_RCP)
return sort
def sort_column_changed_cb (self, data):
self.sort_column_id, self.sort_order = data.get_sort_column_id ()
def sort_column(self, model, row1, row2, col):
value1 = model.get_value(row1, col)
value2 = model.get_value(row2, col)
if col==PackageListModel.COL_SIZE:
value1 = HobPage._string_to_size(value1)
value2 = HobPage._string_to_size(value2)
cmp_res = cmp(value1, value2)
if cmp_res!=0:
if col==PackageListModel.COL_INC:
return -cmp_res
else:
return cmp_res
else:
name1 = model.get_value(row1, PackageListModel.COL_NAME)
name2 = model.get_value(row2, PackageListModel.COL_NAME)
return cmp(name1,name2)
def sort_binb_column(self, model, row1, row2):
value1 = model.get_value(row1, PackageListModel.COL_BINB)
value2 = model.get_value(row2, PackageListModel.COL_BINB)
value1_list = value1.split(', ')
value2_list = value2.split(', ')
value1 = value1_list[0]
value2 = value2_list[0]
cmp_res = cmp(value1, value2)
if cmp_res==0:
cmp_size = cmp(len(value1_list), len(value2_list))
if cmp_size==0:
name1 = model.get_value(row1, PackageListModel.COL_NAME)
name2 = model.get_value(row2, PackageListModel.COL_NAME)
return cmp(name1,name2)
else:
return cmp_size
else:
return cmp_res
def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
else:
val1 = model.get_value(iter1, PackageListModel.COL_FADE_INC)
val2 = model.get_value(iter2, PackageListModel.COL_INC)
return ((val1 == True) and (val2 == False))
def include_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
else:
val1 = model.get_value(iter1, PackageListModel.COL_INC)
val2 = model.get_value(iter2, PackageListModel.COL_INC)
return ((val1 == False) and (val2 == True))
def sort_func(self, model, iter1, iter2, user_data):
val1 = model.get_value(iter1, PackageListModel.COL_NAME)
val2 = model.get_value(iter2, PackageListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
def cmp_vals(self, val1, val2, user_data):
if val1 is None or val2 is None:
return 0
elif val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return cmp(val1, val2)
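# Illustrative behaviour (hypothetical values): with user_data = "gtk",
#   cmp_vals("gtk+", "zlib", "gtk")  -> -1  (prefix match sorts first)
#   cmp_vals("zlib", "gtk+", "gtk")  ->  1
#   cmp_vals("acl", "attr", "gtk")   -> -1  (no prefix match, plain cmp() fallback)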
def convert_vpath_to_path(self, view_model, view_path):
# view_model is the model sorted
# get the path of the model filtered
filtered_model_path = view_model.convert_path_to_child_path(view_path)
# get the model filtered
filtered_model = view_model.get_model()
# get the path of the original model
path = filtered_model.convert_path_to_child_path(filtered_model_path)
return path
def convert_path_to_vpath(self, view_model, path):
it = view_model.get_iter_first()
while it:
name = self.find_item_for_path(path)
view_name = view_model.get_value(it, PackageListModel.COL_NAME)
if view_name == name:
view_path = view_model.get_path(it)
return view_path
it = view_model.iter_next(it)
return None
"""
The populate() function takes as input the data from a
bb.event.PackageInfo event and populates the package list.
"""
def populate(self, pkginfolist):
# First clear the model, in case repopulating
self.clear()
def getpkgvalue(pkgdict, key, pkgname, defaultval = None):
value = pkgdict.get('%s_%s' % (key, pkgname), None)
if not value:
value = pkgdict.get(key, defaultval)
return value
for pkginfo in pkginfolist:
pn = pkginfo['PN']
pv = pkginfo['PV']
pr = pkginfo['PR']
pkg = pkginfo['PKG']
pkgv = getpkgvalue(pkginfo, 'PKGV', pkg)
pkgr = getpkgvalue(pkginfo, 'PKGR', pkg)
# PKGSIZE is artificial; the package-specific PKGSIZE_<pkg> value is always used when present
pkgsize = int(pkginfo.get('PKGSIZE_%s' % pkg, "0"))
# PKG_%s is the renamed version
pkg_rename = pkginfo.get('PKG_%s' % pkg, "")
# The rest may be overridden or not
section = getpkgvalue(pkginfo, 'SECTION', pkg, "")
summary = getpkgvalue(pkginfo, 'SUMMARY', pkg, "")
rdep = getpkgvalue(pkginfo, 'RDEPENDS', pkg, "")
rrec = getpkgvalue(pkginfo, 'RRECOMMENDS', pkg, "")
rprov = getpkgvalue(pkginfo, 'RPROVIDES', pkg, "")
files_list = getpkgvalue(pkginfo, 'FILES_INFO', pkg, "")
for i in rprov.split():
self.rprov_pkg[i] = pkg
recipe = pn + '-' + pv + '-' + pr
allow_empty = getpkgvalue(pkginfo, 'ALLOW_EMPTY', pkg, "")
if pkgsize == 0 and not allow_empty:
continue
size = HobPage._size_to_string(pkgsize)
self.set(self.append(), self.COL_NAME, pkg, self.COL_VER, pkgv,
self.COL_REV, pkgr, self.COL_RNM, pkg_rename,
self.COL_SEC, section, self.COL_SUM, summary,
self.COL_RDEP, rdep + ' ' + rrec,
self.COL_RPROV, rprov, self.COL_SIZE, size,
self.COL_RCP, recipe, self.COL_BINB, "",
self.COL_INC, False, self.COL_FONT, '10', self.COL_FLIST, files_list)
self.pn_path = {}
it = self.get_iter_first()
while it:
pn = self.get_value(it, self.COL_NAME)
path = self.get_path(it)
self.pn_path[pn] = path
it = self.iter_next(it)
"""
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("package-selection-changed")
"""
Check whether the item at item_path is included or not
"""
def path_included(self, item_path):
return self[item_path][self.COL_INC]
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb=""):
if self.path_included(item_path):
return
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_RDEP]
self[item_path][self.COL_INC] = True
item_bin = self[item_path][self.COL_BINB].split(', ')
if binb and not binb in item_bin:
item_bin.append(binb)
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
if item_deps:
# Ensure all of the items deps are included and, where appropriate,
# add this item to their COL_BINB
for dep in item_deps.split(" "):
if dep.startswith('('):
continue
# If the contents model doesn't already contain dep, add it
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_included = self.path_included(dep_path)
if dep_included and not dep in item_bin:
# don't set the COL_BINB to this item if the target is an
# item in our own COL_BINB
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if not item_name in dep_bin:
dep_bin.append(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name)
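# Illustrative behaviour (hypothetical package names): including a package whose
# COL_RDEP is "libfoo libbar" also includes libfoo and libbar, when present in
# the model, and records the reverse edge in their COL_BINB ("brought in by"):
#   model.include_item(model.find_path_for_item("task-x"), binb="User Selected")
#   # libfoo's COL_BINB now contains "task-x"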
def exclude_item(self, item_path):
if not self.path_included(item_path):
return
self[item_path][self.COL_INC] = False
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_RDEP]
if item_deps:
for dep in item_deps.split(" "):
if dep.startswith('('):
continue
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if item_name in dep_bin:
dep_bin.remove(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
item_bin = self[item_path][self.COL_BINB].split(', ')
if item_bin:
for binb in item_bin:
binb_path = self.find_path_for_item(binb)
if not binb_path:
continue
self.exclude_item(binb_path)
"""
Empty self.contents by setting the include flag of each entry to False
"""
def reset(self):
it = self.get_iter_first()
while it:
self.set(it,
self.COL_INC, False,
self.COL_BINB, "")
it = self.iter_next(it)
self.selection_change_notification()
def get_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_NAME)
packagelist.append(name)
it = self.iter_next(it)
return packagelist
def get_user_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
binb = self.get_value(it, self.COL_BINB)
if binb == "User Selected":
name = self.get_value(it, self.COL_NAME)
packagelist.append(name)
it = self.iter_next(it)
return packagelist
def get_selected_packages_toolchain(self):
packagelist = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_NAME)
if name.endswith("-dev") or name.endswith("-dbg"):
packagelist.append(name)
it = self.iter_next(it)
return list(set(packagelist + self.__toolchain_required_packages__))
"""
The package model may be incomplete, so when set_selected_packages() is
called some packages may not be marked as included.
Return the list of packages that could not be set.
"""
def set_selected_packages(self, packagelist, user_selected=False):
left = []
binb = 'User Selected' if user_selected else ''
for pn in packagelist:
if pn in self.pn_path.keys():
path = self.pn_path[pn]
self.include_item(item_path=path, binb=binb)
else:
left.append(pn)
self.selection_change_notification()
return left
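# Illustrative usage (hypothetical package names):
#   left = model.set_selected_packages(["busybox", "no-such-pkg"], user_selected=True)
#   # busybox is marked included with COL_BINB "User Selected",
#   # left == ["no-such-pkg"] because it is not present in the model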
"""
Return the total size of the selected packages, in bytes.
"""
def get_packages_size(self):
packages_size = 0
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
str_size = self.get_value(it, self.COL_SIZE)
if not str_size:
continue
packages_size += HobPage._string_to_size(str_size)
it = self.iter_next(it)
return packages_size
"""
Resync the state of included items to a backup column before performing the fade-out visual effect
"""
def resync_fadeout_column(self, model_first_iter=None):
it = model_first_iter
while it:
active = self.get_value(it, self.COL_INC)
self.set(it, self.COL_FADE_INC, active)
it = self.iter_next(it)
#
# RecipeListModel
#
class RecipeListModel(gtk.ListStore):
"""
This class defines an gtk.ListStore subclass which will convert the output
of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC, COL_SUMMARY, COL_VERSION,
COL_REVISION, COL_HOMEPAGE, COL_BUGTRACKER, COL_FILE) = range(18)
__custom_image__ = "Start with an empty image recipe"
__gsignals__ = {
"recipe-selection-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
"""
"""
def __init__(self):
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING)
self.sort_column_id, self.sort_order = RecipeListModel.COL_NAME, gtk.SORT_ASCENDING
"""
Find the model path for the item_name
Returns the path in the model or None
"""
def find_path_for_item(self, item_name):
if self.non_target_name(item_name) or item_name not in self.pn_path.keys():
return None
else:
return self.pn_path[item_name]
def find_item_for_path(self, item_path):
return self[item_path][self.COL_NAME]
"""
Helper method to determine whether name is a non-target pn (e.g. a -native recipe)
"""
def non_target_name(self, name):
if name and ('-native' in name):
return True
return False
"""
Helper function to determine whether an item matches the given filter
"""
def tree_model_filter(self, model, it, filter):
name = model.get_value(it, self.COL_NAME)
if self.non_target_name(name):
return False
for key in filter.keys():
if key == self.COL_NAME:
if filter[key] != 'Search recipes by name' and filter[key] != 'Search package groups by name':
if filter[key] not in name:
return False
else:
if model.get_value(it, key) not in filter[key]:
return False
self.filtered_nb += 1
return True
def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
else:
val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == True) and (val2 == False))
def include_item_sort_func(self, model, iter1, iter2, user_data=None):
if user_data:
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
else:
val1 = model.get_value(iter1, RecipeListModel.COL_INC)
val2 = model.get_value(iter2, RecipeListModel.COL_INC)
return ((val1 == False) and (val2 == True))
def sort_func(self, model, iter1, iter2, user_data):
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
return self.cmp_vals(val1, val2, user_data)
def cmp_vals(self, val1, val2, user_data):
if val1 is None or val2 is None:
return 0
elif val1.startswith(user_data) and not val2.startswith(user_data):
return -1
elif not val1.startswith(user_data) and val2.startswith(user_data):
return 1
else:
return cmp(val1, val2)
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
"""
def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
model = self.filter_new()
self.filtered_nb = 0
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
sort.connect ('sort-column-changed', self.sort_column_changed_cb)
if initial:
sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
elif excluded_items_ahead:
sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
elif included_items_ahead:
sort.set_default_sort_func(self.include_item_sort_func, search_data)
else:
if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
sort.set_default_sort_func(self.sort_func, search_data)
else:
sort.set_sort_column_id(self.sort_column_id, self.sort_order)
sort.set_default_sort_func(None)
sort.set_sort_func(RecipeListModel.COL_INC, self.sort_column, RecipeListModel.COL_INC)
sort.set_sort_func(RecipeListModel.COL_GROUP, self.sort_column, RecipeListModel.COL_GROUP)
sort.set_sort_func(RecipeListModel.COL_BINB, self.sort_binb_column)
sort.set_sort_func(RecipeListModel.COL_LIC, self.sort_column, RecipeListModel.COL_LIC)
return sort
def sort_column_changed_cb (self, data):
self.sort_column_id, self.sort_order = data.get_sort_column_id ()
def sort_column(self, model, row1, row2, col):
value1 = model.get_value(row1, col)
value2 = model.get_value(row2, col)
cmp_res = cmp(value1, value2)
if cmp_res!=0:
if col==RecipeListModel.COL_INC:
return -cmp_res
else:
return cmp_res
else:
name1 = model.get_value(row1, RecipeListModel.COL_NAME)
name2 = model.get_value(row2, RecipeListModel.COL_NAME)
return cmp(name1,name2)
def sort_binb_column(self, model, row1, row2):
value1 = model.get_value(row1, RecipeListModel.COL_BINB)
value2 = model.get_value(row2, RecipeListModel.COL_BINB)
value1_list = value1.split(', ')
value2_list = value2.split(', ')
value1 = value1_list[0]
value2 = value2_list[0]
cmp_res = cmp(value1, value2)
if cmp_res==0:
cmp_size = cmp(len(value1_list), len(value2_list))
if cmp_size==0:
name1 = model.get_value(row1, RecipeListModel.COL_NAME)
name2 = model.get_value(row2, RecipeListModel.COL_NAME)
return cmp(name1,name2)
else:
return cmp_size
else:
return cmp_res
def convert_vpath_to_path(self, view_model, view_path):
filtered_model_path = view_model.convert_path_to_child_path(view_path)
filtered_model = view_model.get_model()
# get the path of the original model
path = filtered_model.convert_path_to_child_path(filtered_model_path)
return path
def convert_path_to_vpath(self, view_model, path):
it = view_model.get_iter_first()
while it:
name = self.find_item_for_path(path)
view_name = view_model.get_value(it, RecipeListModel.COL_NAME)
if view_name == name:
view_path = view_model.get_path(it)
return view_path
it = view_model.iter_next(it)
return None
"""
The populate() function takes as input the data from a
bb.event.TargetsTreeGenerated event and populates the RecipeList.
"""
def populate(self, event_model):
# First clear the model, in case repopulating
self.clear()
# dummy image for prompt
self.set_in_list(self.__custom_image__, "Use 'Edit image recipe' to customize recipes and packages " \
"to be included in your image ")
for item in event_model["pn"]:
name = item
desc = event_model["pn"][item]["description"]
lic = event_model["pn"][item]["license"]
group = event_model["pn"][item]["section"]
inherits = event_model["pn"][item]["inherits"]
summary = event_model["pn"][item]["summary"]
version = event_model["pn"][item]["version"]
revision = event_model["pn"][item]["prevision"]
homepage = event_model["pn"][item]["homepage"]
bugtracker = event_model["pn"][item]["bugtracker"]
filename = event_model["pn"][item]["filename"]
install = []
depends = event_model["depends"].get(item, []) + event_model["rdepends-pn"].get(item, [])
if ('packagegroup.bbclass' in " ".join(inherits)):
atype = 'packagegroup'
elif ('/image.bbclass' in " ".join(inherits)):
if "edited" not in name:
atype = 'image'
install = event_model["rdepends-pkg"].get(item, []) + event_model["rrecs-pkg"].get(item, [])
elif ('meta-' in name):
atype = 'toolchain'
elif (name == 'dummy-image' or name == 'dummy-toolchain'):
atype = 'dummy'
else:
atype = 'recipe'
self.set(self.append(), self.COL_NAME, item, self.COL_DESC, desc,
self.COL_LIC, lic, self.COL_GROUP, group,
self.COL_DEPS, " ".join(depends), self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item,
self.COL_SUMMARY, summary, self.COL_VERSION, version, self.COL_REVISION, revision,
self.COL_HOMEPAGE, homepage, self.COL_BUGTRACKER, bugtracker,
self.COL_FILE, filename)
self.pn_path = {}
it = self.get_iter_first()
while it:
pn = self.get_value(it, self.COL_NAME)
path = self.get_path(it)
self.pn_path[pn] = path
it = self.iter_next(it)
def set_in_list(self, item, desc):
self.set(self.append(), self.COL_NAME, item,
self.COL_DESC, desc,
self.COL_LIC, "", self.COL_GROUP, "",
self.COL_DEPS, "", self.COL_BINB, "",
self.COL_TYPE, "image", self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, item,
self.COL_SUMMARY, "", self.COL_VERSION, "", self.COL_REVISION, "",
self.COL_HOMEPAGE, "", self.COL_BUGTRACKER, "")
self.pn_path = {}
it = self.get_iter_first()
while it:
pn = self.get_value(it, self.COL_NAME)
path = self.get_path(it)
self.pn_path[pn] = path
it = self.iter_next(it)
"""
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("recipe-selection-changed")
def path_included(self, item_path):
return self[item_path][self.COL_INC]
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb="", image_contents=False):
if self.path_included(item_path):
return
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_DEPS]
self[item_path][self.COL_INC] = True
item_bin = self[item_path][self.COL_BINB].split(', ')
if binb and not binb in item_bin:
item_bin.append(binb)
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
# We want to do some magic with things which are brought in by the
# base image, so tag them as such
if image_contents:
self[item_path][self.COL_IMG] = True
if item_deps:
# Ensure all of the items deps are included and, where appropriate,
# add this item to their COL_BINB
for dep in item_deps.split(" "):
# If the contents model doesn't already contain dep, add it
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_included = self.path_included(dep_path)
if dep_included and not dep in item_bin:
# don't set the COL_BINB to this item if the target is an
# item in our own COL_BINB
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if not item_name in dep_bin:
dep_bin.append(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name, image_contents=image_contents)
dep_bin = self[item_path][self.COL_BINB].split(', ')
if self[item_path][self.COL_NAME] in dep_bin:
dep_bin.remove(self[item_path][self.COL_NAME])
self[item_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
def exclude_item(self, item_path):
if not self.path_included(item_path):
return
self[item_path][self.COL_INC] = False
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_DEPS]
if item_deps:
for dep in item_deps.split(" "):
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if item_name in dep_bin:
dep_bin.remove(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
item_bin = self[item_path][self.COL_BINB].split(', ')
if item_bin:
for binb in item_bin:
binb_path = self.find_path_for_item(binb)
if not binb_path:
continue
self.exclude_item(binb_path)
def reset(self):
it = self.get_iter_first()
while it:
self.set(it,
self.COL_INC, False,
self.COL_BINB, "",
self.COL_IMG, False)
it = self.iter_next(it)
self.selection_change_notification()
"""
Returns two lists: one of user-selected recipes and one containing
all selected recipes.
"""
def get_selected_recipes(self):
allrecipes = []
userrecipes = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_PN)
type = self.get_value(it, self.COL_TYPE)
if type != "image":
allrecipes.append(name)
sel = "User Selected" in self.get_value(it, self.COL_BINB)
if sel:
userrecipes.append(name)
it = self.iter_next(it)
return list(set(userrecipes)), list(set(allrecipes))
def set_selected_recipes(self, recipelist):
for pn in recipelist:
if pn in self.pn_path.keys():
path = self.pn_path[pn]
self.include_item(item_path=path,
binb="User Selected")
self.selection_change_notification()
def get_selected_image(self):
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_PN)
type = self.get_value(it, self.COL_TYPE)
if type == "image":
sel = "User Selected" in self.get_value(it, self.COL_BINB)
if sel:
return name
it = self.iter_next(it)
return None
def set_selected_image(self, img):
if not img:
return
self.reset()
path = self.find_path_for_item(img)
self.include_item(item_path=path,
binb="User Selected",
image_contents=True)
self.selection_change_notification()
def set_custom_image_version(self, version):
self.custom_image_version = version
def get_custom_image_version(self):
return self.custom_image_version
def is_custom_image(self):
return self.get_selected_image() == self.__custom_image__


@@ -0,0 +1,128 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hwc
#
# HobPage: the super class for all Hob-related pages
#
class HobPage (gtk.VBox):
def __init__(self, builder, title = None):
super(HobPage, self).__init__(False, 0)
self.builder = builder
self.builder_width, self.builder_height = self.builder.size_request()
if not title:
self.title = "Hob -- Image Creator"
else:
self.title = title
self.title_label = gtk.Label()
self.box_group_area = gtk.VBox(False, 12)
self.box_group_area.set_size_request(self.builder_width - 73 - 73, self.builder_height - 88 - 15 - 15)
self.group_align = gtk.Alignment(xalign = 0, yalign=0.5, xscale=1, yscale=1)
self.group_align.set_padding(15, 15, 73, 73)
self.group_align.add(self.box_group_area)
self.box_group_area.set_homogeneous(False)
def set_title(self, title):
self.title = title
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
def add_onto_top_bar(self, widget = None, padding = 0):
# the top button occupies 1/7 of the page height
# setup an event box
eventbox = gtk.EventBox()
style = eventbox.get_style().copy()
style.bg[gtk.STATE_NORMAL] = eventbox.get_colormap().alloc_color(HobColors.LIGHT_GRAY, False, False)
eventbox.set_style(style)
eventbox.set_size_request(-1, 88)
hbox = gtk.HBox()
self.title_label = gtk.Label()
self.title_label.set_markup("<span size='x-large'>%s</span>" % self.title)
hbox.pack_start(self.title_label, expand=False, fill=False, padding=20)
if widget:
# add the widget in the event box
hbox.pack_end(widget, expand=False, fill=False, padding=padding)
eventbox.add(hbox)
return eventbox
def span_tag(self, size="medium", weight="normal", forground="#1c1c1c"):
span_tag = "weight='%s' foreground='%s' size='%s'" % (weight, forground, size)
return span_tag
def append_toolbar_button(self, toolbar, buttonname, icon_disp, icon_hovor, tip, cb):
# Create a button and append it on the toolbar according to button name
icon = gtk.Image()
icon_display = icon_disp
icon_hover = icon_hovor
pix_buffer = gtk.gdk.pixbuf_new_from_file(icon_display)
icon.set_from_pixbuf(pix_buffer)
tip_text = tip
button = toolbar.append_item(buttonname, tip, None, icon, cb)
return button
@staticmethod
def _size_to_string(size):
try:
if not size:
size_str = "0 B"
else:
if len(str(int(size))) > 6:
size_str = '%.1f' % (size*1.0/(1024*1024)) + ' MB'
elif len(str(int(size))) > 3:
size_str = '%.1f' % (size*1.0/1024) + ' KB'
else:
size_str = str(size) + ' B'
except:
size_str = "0 B"
return size_str
@staticmethod
def _string_to_size(str_size):
try:
if not str_size:
size = 0
else:
unit = str_size.split()
if len(unit) > 1:
if unit[1] == 'MB':
size = float(unit[0])*1024*1024
elif unit[1] == 'KB':
size = float(unit[0])*1024
elif unit[1] == 'B':
size = float(unit[0])
else:
size = 0
else:
size = float(unit[0])
except:
size = 0
return size
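# Illustrative behaviour of the size helpers (example values, 1024-based units):
#   HobPage._size_to_string(2048)      -> '2.0 KB'
#   HobPage._size_to_string(5242880)   -> '5.0 MB'
#   HobPage._string_to_size('2.0 KB')  -> 2048.0
# Round-tripping only preserves the value to the one decimal place kept in the
# printed string.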


@@ -0,0 +1,561 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import glib
import os
import re
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobImageButton, HobInfoButton, HobAltButton, HobButton
from bb.ui.crumbs.hoblistmodel import RecipeListModel
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hig.retrieveimagedialog import RetrieveImageDialog
#
# ImageConfigurationPage
#
class ImageConfigurationPage (HobPage):
__dummy_machine__ = "--select a machine--"
__dummy_image__ = "--select an image recipe--"
__custom_image__ = "Select from my image recipes"
def __init__(self, builder):
super(ImageConfigurationPage, self).__init__(builder, "Image configuration")
self.image_combo_id = None
# we use machine_combo_changed_by_manual to identify whether the machine was
# changed programmatically or manually. If changed manually, all of the user's
# recipe and package selections are cleared.
self.machine_combo_changed_by_manual = True
self.stopping = False
self.warning_shift = 0
self.custom_image_selected = None
self.create_visual_elements()
def create_visual_elements(self):
# create visual elements
self.toolbar = gtk.Toolbar()
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
hic.ICON_IMAGES_HOVER_FILE,
"Open previously built images",
self.my_images_button_clicked_cb)
settings_button = self.append_toolbar_button(self.toolbar,
"Settings",
hic.ICON_SETTINGS_DISPLAY_FILE,
hic.ICON_SETTINGS_HOVER_FILE,
"View additional build settings",
self.settings_button_clicked_cb)
self.config_top_button = self.add_onto_top_bar(self.toolbar)
self.gtable = gtk.Table(40, 40, True)
self.create_config_machine()
self.create_config_baseimg()
self.config_build_button = self.create_config_build_button()
def _remove_all_widget(self):
children = self.gtable.get_children() or []
for child in children:
self.gtable.remove(child)
children = self.box_group_area.get_children() or []
for child in children:
self.box_group_area.remove(child)
children = self.get_children() or []
for child in children:
self.remove(child)
def _pack_components(self, pack_config_build_button = False):
self._remove_all_widget()
self.pack_start(self.config_top_button, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.gtable, expand=True, fill=True)
if pack_config_build_button:
self.box_group_area.pack_end(self.config_build_button, expand=False, fill=False)
else:
box = gtk.HBox(False, 6)
box.show()
subbox = gtk.HBox(False, 0)
subbox.set_size_request(205, 49)
subbox.show()
box.add(subbox)
self.box_group_area.pack_end(box, False, False)
def show_machine(self):
self.progress_bar.reset()
self._pack_components(pack_config_build_button = False)
self.set_config_machine_layout(show_progress_bar = False)
self.show_all()
def update_progress_bar(self, title, fraction, status=None):
if self.stopping == False:
self.progress_bar.update(fraction)
self.progress_bar.set_text(title)
self.progress_bar.set_rcstyle(status)
def show_info_populating(self):
self._pack_components(pack_config_build_button = False)
self.set_config_machine_layout(show_progress_bar = True)
self.show_all()
def show_info_populated(self):
self.progress_bar.reset()
self._pack_components(pack_config_build_button = False)
self.set_config_machine_layout(show_progress_bar = False)
self.set_config_baseimg_layout()
self.show_all()
def show_baseimg_selected(self):
self.progress_bar.reset()
self._pack_components(pack_config_build_button = True)
self.set_config_machine_layout(show_progress_bar = False)
self.set_config_baseimg_layout()
self.show_all()
if self.builder.recipe_model.get_selected_image() == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
def add_warnings_bar(self):
#create the warnings bar shown when recipes parsing generates warnings
color = HobColors.KHAKI
warnings_bar = gtk.EventBox()
warnings_bar.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
warnings_bar.set_flags(gtk.CAN_DEFAULT)
warnings_bar.grab_default()
build_stop_tab = gtk.Table(10, 20, True)
warnings_bar.add(build_stop_tab)
icon = gtk.Image()
icon_pix_buffer = gtk.gdk.pixbuf_new_from_file(hic.ICON_INDI_ALERT_FILE)
icon.set_from_pixbuf(icon_pix_buffer)
build_stop_tab.attach(icon, 0, 2, 0, 10)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
warnings_nb = len(self.builder.parsing_warnings)
if warnings_nb == 1:
label.set_markup("<span size='x-large'><b>1 recipe parsing warning</b></span>")
else:
label.set_markup("<span size='x-large'><b>%s recipe parsing warnings</b></span>" % warnings_nb)
build_stop_tab.attach(label, 2, 12, 0, 10)
view_warnings_button = HobButton("View warnings")
view_warnings_button.connect('clicked', self.view_warnings_button_clicked_cb)
build_stop_tab.attach(view_warnings_button, 15, 19, 1, 9)
return warnings_bar
def disable_warnings_bar(self):
if self.builder.parsing_warnings:
if hasattr(self, 'warnings_bar'):
self.warnings_bar.hide_all()
self.builder.parsing_warnings = []
def create_config_machine(self):
self.machine_title = gtk.Label()
self.machine_title.set_alignment(0.0, 0.5)
mark = "<span %s>Select a machine</span>" % self.span_tag('x-large', 'bold')
self.machine_title.set_markup(mark)
self.machine_title_desc = gtk.Label()
self.machine_title_desc.set_alignment(0.0, 0.5)
mark = ("<span %s>Your selection is the profile of the target machine for which you"
" are building the image.\n</span>") % (self.span_tag('medium'))
self.machine_title_desc.set_markup(mark)
self.machine_combo = gtk.combo_box_new_text()
self.machine_combo.connect("changed", self.machine_combo_changed_cb)
icon_file = hic.ICON_LAYERS_DISPLAY_FILE
hover_file = hic.ICON_LAYERS_HOVER_FILE
self.layer_button = HobImageButton("Layers", "Add support for machines, software, etc.",
icon_file, hover_file)
self.layer_button.connect("clicked", self.layer_button_clicked_cb)
markup = "Layers are a powerful mechanism to extend the Yocto Project "
markup += "with your own functionality.\n"
markup += "For more on layers, check the <a href=\""
markup += "http://www.yoctoproject.org/docs/current/dev-manual/"
markup += "dev-manual.html#understanding-and-using-layers\">reference manual</a>."
self.layer_info_icon = HobInfoButton("<b>Layers</b>" + "*" + markup, self.get_parent())
self.progress_bar = HobProgressBar()
self.stop_button = HobAltButton("Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
self.machine_separator = gtk.HSeparator()
def set_config_machine_layout(self, show_progress_bar = False):
self.gtable.attach(self.machine_title, 0, 40, 0, 4)
self.gtable.attach(self.machine_title_desc, 0, 40, 4, 6)
self.gtable.attach(self.machine_combo, 0, 12, 7, 10)
self.gtable.attach(self.layer_button, 14, 36, 7, 12)
self.gtable.attach(self.layer_info_icon, 36, 40, 7, 11)
if show_progress_bar:
#self.gtable.attach(self.progress_box, 0, 40, 15, 18)
self.gtable.attach(self.progress_bar, 0, 37, 15, 18)
self.gtable.attach(self.stop_button, 37, 40, 15, 18, 0, 0)
if self.builder.parsing_warnings:
self.warnings_bar = self.add_warnings_bar()
self.gtable.attach(self.warnings_bar, 0, 40, 14, 18)
self.warning_shift = 4
else:
self.warning_shift = 0
self.gtable.attach(self.machine_separator, 0, 40, 13, 14)
def create_config_baseimg(self):
self.image_title = gtk.Label()
self.image_title.set_alignment(0, 1.0)
mark = "<span %s>Select an image recipe</span>" % self.span_tag('x-large', 'bold')
self.image_title.set_markup(mark)
self.image_title_desc = gtk.Label()
self.image_title_desc.set_alignment(0, 0.5)
mark = ("<span %s>Image recipes are a starting point for the type of image you want. "
"You can build them as \n"
"they are or edit them to suit your needs.\n</span>") % self.span_tag('medium')
self.image_title_desc.set_markup(mark)
self.image_combo = gtk.combo_box_new_text()
self.image_combo.set_row_separator_func(self.combo_separator_func, None)
self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)
self.image_desc = gtk.Label()
self.image_desc.set_alignment(0.0, 0.5)
self.image_desc.set_size_request(256, -1)
self.image_desc.set_justify(gtk.JUSTIFY_LEFT)
self.image_desc.set_line_wrap(True)
# button to view recipes
icon_file = hic.ICON_RCIPE_DISPLAY_FILE
hover_file = hic.ICON_RCIPE_HOVER_FILE
self.view_adv_configuration_button = HobImageButton("Advanced configuration",
"Select image types, package formats, etc",
icon_file, hover_file)
self.view_adv_configuration_button.connect("clicked", self.view_adv_configuration_button_clicked_cb)
self.image_separator = gtk.HSeparator()
def combo_separator_func(self, model, iter, user_data):
name = model.get_value(iter, 0)
if name == "--Separator--":
return True
def set_config_baseimg_layout(self):
self.gtable.attach(self.image_title, 0, 40, 15+self.warning_shift, 17+self.warning_shift)
self.gtable.attach(self.image_title_desc, 0, 40, 18+self.warning_shift, 22+self.warning_shift)
self.gtable.attach(self.image_combo, 0, 12, 23+self.warning_shift, 26+self.warning_shift)
self.gtable.attach(self.image_desc, 0, 12, 27+self.warning_shift, 33+self.warning_shift)
self.gtable.attach(self.view_adv_configuration_button, 14, 36, 23+self.warning_shift, 28+self.warning_shift)
self.gtable.attach(self.image_separator, 0, 40, 35+self.warning_shift, 36+self.warning_shift)
def create_config_build_button(self):
# Create the "Build packages" and "Build image" buttons at the bottom
button_box = gtk.HBox(False, 6)
# create button "Build image"
self.just_bake_button = HobButton("Build image")
self.just_bake_button.set_tooltip_text("Build the image recipe as it is")
self.just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
button_box.pack_end(self.just_bake_button, expand=False, fill=False)
# create button "Edit image recipe"
self.edit_image_button = HobAltButton("Edit image recipe")
self.edit_image_button.set_tooltip_text("Customize the recipes and packages to be included in your image")
self.edit_image_button.connect("clicked", self.edit_image_button_clicked_cb)
button_box.pack_end(self.edit_image_button, expand=False, fill=False)
return button_box
def stop_button_clicked_cb(self, button):
self.stopping = True
self.progress_bar.set_text("Stopping recipe parsing")
self.progress_bar.set_rcstyle("stop")
self.builder.cancel_parse_sync()
def view_warnings_button_clicked_cb(self, button):
self.builder.show_warning_dialog()
def machine_combo_changed_idle_cb(self):
self.builder.window.set_cursor(None)
def machine_combo_changed_cb(self, machine_combo):
self.stopping = False
self.builder.parsing_warnings = []
combo_item = machine_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_machine__:
return
self.builder.window.set_cursor(gtk.gdk.Cursor(gtk.gdk.WATCH))
self.builder.wait(0.1) #wait for combo and cursor to update
# remove __dummy_machine__ item from the store list after first user selection
# because it is no longer valid
combo_store = machine_combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_machine__):
machine_combo.remove_text(0)
self.builder.configuration.curr_mach = combo_item
if self.machine_combo_changed_by_manual:
self.builder.configuration.clear_selection()
# reset machine_combo_changed_by_manual
self.machine_combo_changed_by_manual = True
self.builder.configuration.selected_image = None
# Do reparse recipes
self.builder.populate_recipe_package_info_async()
glib.idle_add(self.machine_combo_changed_idle_cb)
def update_machine_combo(self):
self.disable_warnings_bar()
all_machines = [self.__dummy_machine__] + self.builder.parameters.all_machines
model = self.machine_combo.get_model()
model.clear()
for machine in all_machines:
self.machine_combo.append_text(machine)
self.machine_combo.set_active(0)
def switch_machine_combo(self):
self.disable_warnings_bar()
self.machine_combo_changed_by_manual = False
model = self.machine_combo.get_model()
active = 0
while active < len(model):
if model[active][0] == self.builder.configuration.curr_mach:
self.machine_combo.set_active(active)
return
active += 1
if model[0][0] != self.__dummy_machine__:
self.machine_combo.insert_text(0, self.__dummy_machine__)
self.machine_combo.set_active(0)
def update_image_desc(self):
desc = ""
selected_image = self.image_combo.get_active_text()
if selected_image and selected_image in self.builder.recipe_model.pn_path.keys():
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
desc = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_DESC)
mark = ("<span %s>%s</span>\n") % (self.span_tag('small'), desc)
self.image_desc.set_markup(mark)
def image_combo_changed_idle_cb(self, selected_image, selected_recipes, selected_packages):
self.builder.update_recipe_model(selected_image, selected_recipes)
self.builder.update_package_model(selected_packages)
self.builder.window_sensitive(True)
def image_combo_changed_cb(self, combo):
self.builder.window_sensitive(False)
selected_image = self.image_combo.get_active_text()
if selected_image == self.__custom_image__:
topdir = self.builder.get_topdir()
images_dir = topdir + "/recipes/images/custom/"
self.builder.ensure_dir(images_dir)
dialog = RetrieveImageDialog(images_dir, "Select from my image recipes",
self.builder, gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
response = dialog.run()
if response == gtk.RESPONSE_OK:
image_name = dialog.get_filename()
head, tail = os.path.split(image_name)
selected_image = os.path.splitext(tail)[0]
self.custom_image_selected = selected_image
self.update_image_combo(self.builder.recipe_model, selected_image)
else:
selected_image = self.__dummy_image__
self.update_image_combo(self.builder.recipe_model, None)
dialog.destroy()
else:
if self.custom_image_selected:
self.custom_image_selected = None
self.update_image_combo(self.builder.recipe_model, selected_image)
if not selected_image or (selected_image == self.__dummy_image__):
self.builder.window_sensitive(True)
self.just_bake_button.hide()
self.edit_image_button.hide()
return
# remove __dummy_image__ item from the store list after first user selection
# because it is no longer valid
combo_store = combo.get_model()
if len(combo_store) and (combo_store[0][0] == self.__dummy_image__):
combo.remove_text(0)
self.builder.customized = False
selected_recipes = []
image_path = self.builder.recipe_model.pn_path[selected_image]
image_iter = self.builder.recipe_model.get_iter(image_path)
selected_packages = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_INSTALL).split()
self.update_image_desc()
self.builder.recipe_model.reset()
self.builder.package_model.reset()
self.show_baseimg_selected()
if selected_image == self.builder.recipe_model.__custom_image__:
self.just_bake_button.hide()
glib.idle_add(self.image_combo_changed_idle_cb, selected_image, selected_recipes, selected_packages)
def _image_combo_connect_signal(self):
if not self.image_combo_id:
self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)
def _image_combo_disconnect_signal(self):
if self.image_combo_id:
self.image_combo.disconnect(self.image_combo_id)
self.image_combo_id = None
def update_image_combo(self, recipe_model, selected_image):
# Update the image combo according to the images in the recipe_model
# populate image combo
filter = {RecipeListModel.COL_TYPE : ['image']}
image_model = recipe_model.tree_model(filter)
image_model.set_sort_column_id(recipe_model.COL_NAME, gtk.SORT_ASCENDING)
active = 0
cnt = 0
white_pattern = []
if self.builder.parameters.image_white_pattern:
for i in self.builder.parameters.image_white_pattern.split():
white_pattern.append(re.compile(i))
black_pattern = []
if self.builder.parameters.image_black_pattern:
for i in self.builder.parameters.image_black_pattern.split():
black_pattern.append(re.compile(i))
black_pattern.append(re.compile("hob-image"))
black_pattern.append(re.compile("edited(-[0-9]*)*.bb$"))
it = image_model.get_iter_first()
self._image_combo_disconnect_signal()
model = self.image_combo.get_model()
model.clear()
# Set an indicator text in the combo store when it is first opened
if not selected_image:
self.image_combo.append_text(self.__dummy_image__)
cnt = cnt + 1
self.image_combo.append_text(self.__custom_image__)
self.image_combo.append_text("--Separator--")
cnt = cnt + 2
topdir = self.builder.get_topdir()
# append and set active
while it:
path = image_model.get_path(it)
it = image_model.iter_next(it)
image_name = image_model[path][recipe_model.COL_NAME]
if image_name == self.builder.recipe_model.__custom_image__:
continue
if black_pattern:
allow = True
for pattern in black_pattern:
if pattern.search(image_name):
allow = False
break
elif white_pattern:
allow = False
for pattern in white_pattern:
if pattern.search(image_name):
allow = True
break
else:
allow = True
file_name = image_model[path][recipe_model.COL_FILE]
if file_name and topdir in file_name:
allow = False
if allow:
self.image_combo.append_text(image_name)
if image_name == selected_image:
active = cnt
cnt = cnt + 1
self.image_combo.append_text(self.builder.recipe_model.__custom_image__)
if selected_image == self.builder.recipe_model.__custom_image__:
active = cnt
if self.custom_image_selected:
self.image_combo.append_text("--Separator--")
self.image_combo.append_text(self.custom_image_selected)
cnt = cnt + 2
if self.custom_image_selected == selected_image:
active = cnt
self.image_combo.set_active(active)
if active != 0:
self.show_baseimg_selected()
self._image_combo_connect_signal()
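# Illustrative behaviour (pattern values assumed): each regex in black_pattern
# hides matching image names from the combo, and white_pattern is only consulted
# when black_pattern is empty, since the two branches above are mutually
# exclusive; e.g. image_black_pattern = "-initramfs" would additionally hide
# every image whose name contains "-initramfs".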
def layer_button_clicked_cb(self, button):
# Create a layer selection dialog
self.builder.show_layer_selection_dialog()
def view_adv_configuration_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_adv_settings_dialog()
if not response:
return
if settings_changed:
self.builder.window.set_cursor(gtk.gdk.Cursor(gtk.gdk.WATCH))
self.builder.wait(0.1) #wait for adv_settings_dialog to terminate
self.builder.reparse_post_adv_settings()
self.builder.window.set_cursor(None)
def just_bake_button_clicked_cb(self, button):
self.builder.parsing_warnings = []
self.builder.just_bake()
def edit_image_button_clicked_cb(self, button):
self.builder.set_base_image()
self.builder.show_recipes()
def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
if not response:
return
if settings_changed:
self.builder.reparse_post_adv_settings()


@@ -0,0 +1,669 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobViewTable, HobAltButton, HobButton
from bb.ui.crumbs.hobpages import HobPage
import os
import subprocess
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
from bb.ui.crumbs.hig.saveimagedialog import SaveImageDialog
#
# ImageDetailsPage
#
class ImageDetailsPage (HobPage):
class DetailBox (gtk.EventBox):
def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, button2=None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
style = self.get_style().copy()
style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(color, False, False)
self.set_style(style)
self.row = gtk.Table(1, 2, False)
self.row.set_border_width(10)
self.add(self.row)
total_rows = 0
if widget:
total_rows = 10
if varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
self.table.set_size_request(100, -1)
self.row.attach(self.table, 0, 1, 0, 1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if widget:
self.table.attach(widget, colid, 20, 0, 10)
rowid = 10
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
# pack the button on the right
if button:
self.bbox = gtk.VBox()
self.bbox.pack_start(button, expand=True, fill=False)
if button2:
self.bbox.pack_start(button2, expand=True, fill=False)
self.bbox.set_size_request(150,-1)
self.row.attach(self.bbox, 1, 2, 0, 1, xoptions=gtk.FILL, yoptions=gtk.EXPAND)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
if not isinstance(self.line_widgets[variable], gtk.Label):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs, breaking every wrap_width_chars characters
wrap_width_chars = 75
outputs = ""
tmps = inputs
while len(tmps) > wrap_width_chars:
outputs += tmps[:wrap_width_chars] + "\n "
tmps = tmps[wrap_width_chars:]
outputs += tmps
return outputs
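# Illustrative behaviour (input length assumed): a 160-character value becomes
# two 75-character lines plus a 10-character remainder, with each continuation
# line indented by the whitespace that follows the newline.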
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
return markup
def text2label(self, variable, value):
# append the name:value to the left box
# such as "Name: hob-core-minimal-variant-2011-12-15-beagleboard"
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup(self.format_line(variable, value))
return label
class BuildDetailBox (gtk.EventBox):
def __init__(self, varlist = None, vallist = None, icon = None, color = HobColors.LIGHT_GRAY):
gtk.EventBox.__init__(self)
# set color
style = self.get_style().copy()
style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(color, False, False)
self.set_style(style)
self.hbox = gtk.HBox()
self.hbox.set_border_width(10)
self.add(self.hbox)
total_rows = 0
if varlist and vallist:
# pack the icon and the text on the left
total_rows += len(varlist)
self.table = gtk.Table(total_rows, 20, True)
self.table.set_row_spacings(6)
self.table.set_size_request(100, -1)
self.hbox.pack_start(self.table, expand=True, fill=True, padding=15)
colid = 0
rowid = 0
self.line_widgets = {}
if icon:
self.table.attach(icon, colid, colid + 2, 0, 1)
colid = colid + 2
if varlist and vallist:
for row in range(rowid, total_rows):
index = row - rowid
self.line_widgets[varlist[index]] = self.text2label(varlist[index], vallist[index])
self.table.attach(self.line_widgets[varlist[index]], colid, 20, row, row + 1)
def update_line_widgets(self, variable, value):
if len(self.line_widgets) == 0:
return
if not isinstance(self.line_widgets[variable], gtk.Label):
return
self.line_widgets[variable].set_markup(self.format_line(variable, value))
def wrap_line(self, inputs):
# wrap the long text of inputs, breaking every wrap_width_chars characters
wrap_width_chars = 75
outputs = ""
tmps = inputs
while len(tmps) > wrap_width_chars:
outputs += tmps[:wrap_width_chars] + "\n "
tmps = tmps[wrap_width_chars:]
outputs += tmps
return outputs
def format_line(self, variable, value):
wraped_value = self.wrap_line(value)
markup = "<span weight=\'bold\'>%s</span>" % variable
markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % wraped_value
return markup
def text2label(self, variable, value):
# append the name:value to the left box
# such as "Name: hob-core-minimal-variant-2011-12-15-beagleboard"
label = gtk.Label()
label.set_alignment(0.0, 0.5)
label.set_markup(self.format_line(variable, value))
return label
def __init__(self, builder):
super(ImageDetailsPage, self).__init__(builder, "Image details")
self.image_store = []
self.button_ids = {}
self.details_bottom_buttons = gtk.HBox(False, 6)
self.image_saved = False
self.create_visual_elements()
self.name_field_template = ""
self.description_field_template = ""
def create_visual_elements(self):
# create visual elements
# create the toolbar
self.toolbar = gtk.Toolbar()
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
hic.ICON_IMAGES_HOVER_FILE,
"Open previously built images",
self.my_images_button_clicked_cb)
settings_button = self.append_toolbar_button(self.toolbar,
"Settings",
hic.ICON_SETTINGS_DISPLAY_FILE,
hic.ICON_SETTINGS_HOVER_FILE,
"View additional build settings",
self.settings_button_clicked_cb)
self.details_top_buttons = self.add_onto_top_bar(self.toolbar)
def _remove_all_widget(self):
children = self.get_children() or []
for child in children:
self.remove(child)
children = self.box_group_area.get_children() or []
for child in children:
self.box_group_area.remove(child)
children = self.details_bottom_buttons.get_children() or []
for child in children:
self.details_bottom_buttons.remove(child)
def show_page(self, step):
self.build_succeeded = (step == self.builder.IMAGE_GENERATED)
image_addr = self.builder.parameters.image_addr
image_names = self.builder.parameters.image_names
if self.build_succeeded:
machine = self.builder.configuration.curr_mach
base_image = self.builder.recipe_model.get_selected_image()
layers = self.builder.configuration.layers
pkg_num = "%s" % len(self.builder.package_model.get_selected_packages())
log_file = self.builder.current_logfile
else:
pkg_num = "N/A"
log_file = None
# remove
for button_id, button in self.button_ids.items():
button.disconnect(button_id)
self._remove_all_widget()
# repack
self.pack_start(self.details_top_buttons, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.build_result = None
if self.image_saved or (self.build_succeeded and self.builder.current_step == self.builder.IMAGE_GENERATING):
# building is the previous step
icon = gtk.Image()
pixmap_path = hic.ICON_INDI_CONFIRM_FILE
color = HobColors.RUNNING
pix_buffer = gtk.gdk.pixbuf_new_from_file(pixmap_path)
icon.set_from_pixbuf(pix_buffer)
varlist = [""]
if self.image_saved:
vallist = ["Your image recipe has been saved"]
else:
vallist = ["Your image is ready"]
self.build_result = self.BuildDetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
self.buttonlist = ["Build new image", "Save image recipe", "Run image", "Deploy image"]
# Name
self.image_store = []
self.toggled_image = ""
default_image_size = 0
self.num_toggled = 0
i = 0
for image_name in image_names:
image_size = HobPage._size_to_string(os.stat(os.path.join(image_addr, image_name)).st_size)
image_attr = ("run" if (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) else \
("deploy" if self.test_deployable(image_name) else ""))
is_toggled = (image_attr != "")
if not self.toggled_image:
if i == (len(image_names) - 1):
is_toggled = True
if is_toggled:
default_image_size = image_size
self.toggled_image = image_name
split_stuff = image_name.split('.')
if "rootfs" in split_stuff:
image_type = image_name[(len(split_stuff[0]) + len(".rootfs") + 1):]
else:
image_type = image_name[(len(split_stuff[0]) + 1):]
self.image_store.append({'name': image_name,
'type': image_type,
'size': image_size,
'is_toggled': is_toggled,
'action_attr': image_attr,})
i = i + 1
self.num_toggled += is_toggled
is_runnable = self.create_bottom_buttons(self.buttonlist, self.toggled_image)
# Generated image files info
varlist = ["Name: ", "Files created: ", "Directory: "]
vallist = []
vallist.append(image_name.split('.')[0])
vallist.append(', '.join(fileitem['type'] for fileitem in self.image_store))
vallist.append(image_addr)
view_files_button = HobAltButton("View files")
view_files_button.connect("clicked", self.view_files_clicked_cb, image_addr)
view_files_button.set_tooltip_text("Open the directory containing the image files")
open_log_button = None
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.image_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=view_files_button, button2=open_log_button)
self.box_group_area.pack_start(self.image_detail, expand=False, fill=True)
# The default kernel box for the qemu images
self.sel_kernel = ""
self.kernel_detail = None
if 'qemu' in image_name:
self.sel_kernel = self.get_kernel_file_name()
# varlist = ["Kernel: "]
# vallist = []
# vallist.append(self.sel_kernel)
# change_kernel_button = HobAltButton("Change")
# change_kernel_button.connect("clicked", self.change_kernel_cb)
# change_kernel_button.set_tooltip_text("Change qemu kernel file")
# self.kernel_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=change_kernel_button)
# self.box_group_area.pack_start(self.kernel_detail, expand=True, fill=True)
# Machine, Image recipe and Layers
layer_num_limit = 15
varlist = ["Machine: ", "Image recipe: ", "Layers: "]
vallist = []
self.setting_detail = None
if self.build_succeeded:
vallist.append(machine)
if self.builder.recipe_model.is_custom_image():
if self.builder.configuration.initial_selected_image == self.builder.recipe_model.__custom_image__:
base_image ="New image recipe"
else:
base_image = self.builder.configuration.initial_selected_image + " (edited)"
vallist.append(base_image)
i = 0
for layer in layers:
if i > layer_num_limit:
break
varlist.append(" - ")
i += 1
vallist.append("")
i = 0
for layer in layers:
if i > layer_num_limit:
break
elif i == layer_num_limit:
vallist.append("and more...")
else:
vallist.append(layer)
i += 1
edit_config_button = HobAltButton("Edit configuration")
edit_config_button.set_tooltip_text("Edit machine and image recipe")
edit_config_button.connect("clicked", self.edit_config_button_clicked_cb)
self.setting_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_config_button)
self.box_group_area.pack_start(self.setting_detail, expand=True, fill=True)
# Packages included, and Total image size
varlist = ["Packages included: ", "Total image size: "]
vallist = []
vallist.append(pkg_num)
vallist.append(default_image_size)
self.builder.configuration.image_size = default_image_size
self.builder.configuration.image_packages = self.builder.configuration.selected_packages
if self.build_succeeded:
edit_packages_button = HobAltButton("Edit packages")
edit_packages_button.set_tooltip_text("Edit the packages included in your image")
edit_packages_button.connect("clicked", self.edit_packages_button_clicked_cb)
else: # get to this page from "My images"
edit_packages_button = None
self.package_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_packages_button)
self.box_group_area.pack_start(self.package_detail, expand=True, fill=True)
# Pack the buttons at the bottom; by this point they have already been created.
if self.build_succeeded:
self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
else: # for "My images" page
self.details_separator = gtk.HSeparator()
self.box_group_area.pack_start(self.details_separator, expand=False, fill=False)
self.box_group_area.pack_start(self.details_bottom_buttons, expand=False, fill=False)
self.show_all()
if self.kernel_detail and (not is_runnable):
self.kernel_detail.hide()
self.image_saved = False
def view_files_clicked_cb(self, button, image_addr):
subprocess.call("xdg-open /%s" % image_addr, shell=True)
def open_log_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def refresh_package_detail_box(self, image_size):
self.package_detail.update_line_widgets("Total image size: ", image_size)
def test_type_runnable(self, image_name):
type_runnable = False
for t in self.builder.parameters.runnable_image_types:
if image_name.endswith(t):
type_runnable = True
break
return type_runnable
def test_mach_runnable(self, image_name):
mach_runnable = False
for t in self.builder.parameters.runnable_machine_patterns:
if t in image_name:
mach_runnable = True
break
return mach_runnable
def test_deployable(self, image_name):
if self.builder.configuration.curr_mach.startswith("qemu"):
return False
deployable = False
for t in self.builder.parameters.deployable_image_types:
if image_name.endswith(t):
deployable = True
break
return deployable
def get_kernel_file_name(self, kernel_addr=""):
kernel_name = ""
if not kernel_addr:
kernel_addr = self.builder.parameters.image_addr
files = [f for f in os.listdir(kernel_addr) if f[0] != '.']
for check_file in files:
if check_file.endswith(".bin"):
name_splits = check_file.split(".")[0]
if self.builder.parameters.kernel_image_type in name_splits.split("-"):
kernel_name = check_file
break
return kernel_name
def show_builded_images_dialog(self, widget, primary_action=""):
title = primary_action if primary_action else "Your built images"
dialog = CrumbsDialog(title, self.builder,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
dialog.set_border_width(12)
label = gtk.Label()
label.set_use_markup(True)
label.set_alignment(0.0, 0.5)
label.set_padding(12,0)
if primary_action == "Run image":
label.set_markup("<span font_desc='12'>Select the image file you want to run:</span>")
elif primary_action == "Deploy image":
label.set_markup("<span font_desc='12'>Select the image file you want to deploy:</span>")
else:
label.set_markup("<span font_desc='12'>Select the image file you want to %s</span>" % primary_action)
dialog.vbox.pack_start(label, expand=False, fill=False)
# Filter the created images by their action attribute (deploy or run)
action_attr = ""
action_images = []
for fileitem in self.image_store:
action_attr = fileitem['action_attr']
if (action_attr == 'run' and primary_action == "Run image") \
or (action_attr == 'deploy' and primary_action == "Deploy image"):
action_images.append(fileitem)
# Pack the corresponding 'runnable' or 'deploy' radio buttons when there is
# more than one matching file. A single build result is assumed never to
# contain both 'deploy' and 'runnable' files; the design makes that impossible.
curr_row = 0
rows = (len(action_images)) if len(action_images) < 10 else 10
table = gtk.Table(rows, 10, True)
table.set_row_spacings(6)
table.set_col_spacing(0, 12)
table.set_col_spacing(5, 12)
sel_parent_btn = None
for fileitem in action_images:
sel_btn = gtk.RadioButton(sel_parent_btn, fileitem['type'])
sel_parent_btn = sel_btn if not sel_parent_btn else sel_parent_btn
sel_btn.set_active(fileitem['is_toggled'])
sel_btn.connect('toggled', self.table_selected_cb, fileitem)
if curr_row < 10:
table.attach(sel_btn, 0, 4, curr_row, curr_row + 1, xpadding=24)
else:
table.attach(sel_btn, 5, 9, curr_row - 10, curr_row - 9, xpadding=24)
curr_row += 1
dialog.vbox.pack_start(table, expand=False, fill=False, padding=6)
button = dialog.add_button("Cancel", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(button)
if primary_action:
button = dialog.add_button(primary_action, gtk.RESPONSE_YES)
HobButton.style_button(button)
dialog.show_all()
response = dialog.run()
dialog.destroy()
if response != gtk.RESPONSE_YES:
return
for fileitem in self.image_store:
if fileitem['is_toggled']:
if fileitem['action_attr'] == 'run':
self.builder.runqemu_image(fileitem['name'], self.sel_kernel)
elif fileitem['action_attr'] == 'deploy':
self.builder.deploy_image(fileitem['name'])
def table_selected_cb(self, tbutton, image):
image['is_toggled'] = tbutton.get_active()
if image['is_toggled']:
self.toggled_image = image['name']
def change_kernel_cb(self, widget):
kernel_path = self.builder.show_load_kernel_dialog()
if kernel_path and self.kernel_detail:
import os.path
self.sel_kernel = os.path.basename(kernel_path)
markup = self.kernel_detail.format_line("Kernel: ", self.sel_kernel)
label = ((self.kernel_detail.get_children()[0]).get_children()[0]).get_children()[0]
label.set_markup(markup)
def create_bottom_buttons(self, buttonlist, image_name):
# Create the buttons at the bottom
created = False
packed = False
self.button_ids = {}
is_runnable = False
# create button "Deploy image"
name = "Deploy image"
if name in buttonlist and self.test_deployable(image_name):
deploy_button = HobButton('Deploy image')
#deploy_button.set_size_request(205, 49)
deploy_button.set_tooltip_text("Burn a live image to a USB drive or flash memory")
deploy_button.set_flags(gtk.CAN_DEFAULT)
button_id = deploy_button.connect("clicked", self.deploy_button_clicked_cb)
self.button_ids[button_id] = deploy_button
self.details_bottom_buttons.pack_end(deploy_button, expand=False, fill=False)
created = True
packed = True
name = "Run image"
if name in buttonlist and self.test_type_runnable(image_name) and self.test_mach_runnable(image_name):
if created == True:
# separator
#label = gtk.Label(" or ")
#self.details_bottom_buttons.pack_end(label, expand=False, fill=False)
# create button "Run image"
run_button = HobAltButton("Run image")
else:
# create button "Run image" as the primary button
run_button = HobButton("Run image")
#run_button.set_size_request(205, 49)
run_button.set_flags(gtk.CAN_DEFAULT)
packed = True
run_button.set_tooltip_text("Start up an image with qemu emulator")
button_id = run_button.connect("clicked", self.run_button_clicked_cb)
self.button_ids[button_id] = run_button
self.details_bottom_buttons.pack_end(run_button, expand=False, fill=False)
created = True
is_runnable = True
name = "Save image recipe"
if name in buttonlist and self.builder.recipe_model.is_custom_image():
save_button = HobAltButton("Save image recipe")
save_button.set_tooltip_text("Keep your changes by saving them as an image recipe")
save_button.set_sensitive(not self.image_saved)
button_id = save_button.connect("clicked", self.save_button_clicked_cb)
self.button_ids[button_id] = save_button
self.details_bottom_buttons.pack_end(save_button, expand=False, fill=False)
name = "Build new image"
if name in buttonlist:
# create button "Build new image"
if packed:
build_new_button = HobAltButton("Build new image")
else:
build_new_button = HobButton("Build new image")
build_new_button.set_flags(gtk.CAN_DEFAULT)
#build_new_button.set_size_request(205, 49)
self.details_bottom_buttons.pack_end(build_new_button, expand=False, fill=False)
build_new_button.set_tooltip_text("Create a new image from scratch")
button_id = build_new_button.connect("clicked", self.build_new_button_clicked_cb)
self.button_ids[button_id] = build_new_button
return is_runnable
def deploy_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Deploy image")
self.set_sensitive(True)
else:
self.builder.deploy_image(self.toggled_image)
def run_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
self.set_sensitive(False)
self.show_builded_images_dialog(None, "Run image")
self.set_sensitive(True)
else:
self.builder.runqemu_image(self.toggled_image, self.sel_kernel)
def save_button_clicked_cb(self, button):
topdir = self.builder.get_topdir()
images_dir = topdir + "/recipes/images/custom/"
self.builder.ensure_dir(images_dir)
self.name_field_template = self.builder.image_configuration_page.custom_image_selected
if self.name_field_template:
image_path = self.builder.recipe_model.pn_path[self.name_field_template]
image_iter = self.builder.recipe_model.get_iter(image_path)
self.description_field_template = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_DESC)
else:
self.name_field_template = ""
dialog = SaveImageDialog(images_dir, self.name_field_template, self.description_field_template,
"Save image recipe", self.builder, gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
response = dialog.run()
dialog.destroy()
def build_new_button_clicked_cb(self, button):
self.builder.initiate_new_build_async()
def edit_config_button_clicked_cb(self, button):
self.builder.show_configuration()
def edit_packages_button_clicked_cb(self, button):
self.builder.show_packages()
def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()
def settings_button_clicked_cb(self, button):
# Create an advanced settings dialog
response, settings_changed = self.builder.show_simple_settings_dialog()
if not response:
return
if settings_changed:
self.builder.reparse_post_adv_settings()


@@ -0,0 +1,355 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import glib
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import HobViewTable, HobNotebook, HobAltButton, HobButton
from bb.ui.crumbs.hoblistmodel import PackageListModel
from bb.ui.crumbs.hobpages import HobPage
#
# PackageSelectionPage
#
class PackageSelectionPage (HobPage):
pages = [
{
'name' : 'Included packages',
'tooltip' : 'The packages currently included for your image',
'filter' : { PackageListModel.COL_INC : [True] },
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Size',
'col_id' : PackageListModel.COL_SIZE,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Brought in by (+others)',
'col_id' : PackageListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 350,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
'col_style': 'check toggle',
'col_min' : 100,
'col_max' : 100
}]
}, {
'name' : 'All packages',
'tooltip' : 'All packages that have been built',
'filter' : {},
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Size',
'col_id' : PackageListModel.COL_SIZE,
'col_style': 'text',
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
'col_style': 'check toggle',
'col_min' : 100,
'col_max' : 100
}]
}
]
(INCLUDED,
ALL) = range(2)
def __init__(self, builder):
super(PackageSelectionPage, self).__init__(builder, "Edit packages")
# set invisible members
self.recipe_model = self.builder.recipe_model
self.package_model = self.builder.package_model
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.label = gtk.Label("Packages included: 0\nSelected packages size: 0 MB")
self.eventbox = self.add_onto_top_bar(self.label, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
self.ins = HobNotebook()
self.tables = [] # we need to modify table when the dialog is shown
search_names = []
search_tips = []
# append the tab
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
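# Each page's 'filter' maps a PackageListModel column to the list of values
# to keep (for example COL_INC -> [True] for the "Included packages" tab);
# filter_search() later adds a COL_NAME entry holding the free-text search string.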
filter = page['filter']
sort_model = self.package_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include, filter)
self.ins.append_page(tab, page['name'], page['tooltip'])
self.tables.append(tab)
self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
# add all into the dialog
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
self.button_box = gtk.HBox(False, 6)
self.box_group_area.pack_start(self.button_box, expand=False, fill=False)
self.build_image_button = HobButton('Build image')
#self.build_image_button.set_size_request(205, 49)
self.build_image_button.set_tooltip_text("Build target image")
self.build_image_button.set_flags(gtk.CAN_DEFAULT)
self.build_image_button.grab_default()
self.build_image_button.connect("clicked", self.build_image_clicked_cb)
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)
def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[PackageListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.package_model.tree_model(filter, search_data=text))
if self.package_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.set_no_show_all(True)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'binb': '' , 'name': '', 'size':'', 'recipe':'', 'files_list':''}
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_NAME)
properties['size'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_SIZE)
properties['recipe'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_RCP)
properties['files_list'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_FLIST)
self.builder.show_recipe_property_dialog(properties)
def open_log_clicked_cb(self, button, log_file):
if log_file:
log_file = "file:///" + log_file
gtk.show_uri(screen=button.get_screen(), uri=log_file, timestamp=0)
def show_page(self, log_file):
children = self.button_box.get_children() or []
for child in children:
self.button_box.remove(child)
# Re-pack the buttons as requested, adding the 'Open log' button if the build succeeded
self.button_box.pack_end(self.build_image_button, expand=False, fill=False)
if log_file:
open_log_button = HobAltButton("Open log")
open_log_button.connect("clicked", self.open_log_clicked_cb, log_file)
open_log_button.set_tooltip_text("Open the build's log file")
self.button_box.pack_end(open_log_button, expand=False, fill=False)
self.button_box.pack_end(self.back_button, expand=False, fill=False)
self.show_all()
def build_image_clicked_cb(self, button):
self.builder.parsing_warnings = []
self.builder.build_image()
def refresh_tables(self):
self.ins.reset_entry(self.ins.search, 0)
for tab in self.tables:
index = self.tables.index(tab)
filter = self.pages[index]['filter']
tab.set_model(self.package_model.tree_model(filter, initial=True))
def back_button_clicked_cb(self, button):
if self.builder.previous_step == self.builder.IMAGE_GENERATED:
self.builder.restore_initial_selected_packages()
self.refresh_selection()
self.builder.show_image_details()
else:
self.builder.show_configuration()
self.refresh_tables()
def refresh_selection(self):
self.builder.configuration.selected_packages = self.package_model.get_selected_packages()
self.builder.configuration.user_selected_packages = self.package_model.get_user_selected_packages()
selected_packages_num = len(self.builder.configuration.selected_packages)
selected_packages_size = self.package_model.get_packages_size()
selected_packages_size_str = HobPage._size_to_string(selected_packages_size)
if self.builder.configuration.image_packages == self.builder.configuration.selected_packages:
image_total_size_str = self.builder.configuration.image_size
else:
image_overhead_factor = self.builder.configuration.image_overhead_factor
image_rootfs_size = self.builder.configuration.image_rootfs_size / 1024 # image_rootfs_size is KB
image_extra_size = self.builder.configuration.image_extra_size / 1024 # image_extra_size is KB
base_size = image_overhead_factor * selected_packages_size
image_total_size = max(base_size, image_rootfs_size) + image_extra_size
if "zypper" in self.builder.configuration.selected_packages:
image_total_size += (51200 * 1024)
image_total_size_str = HobPage._size_to_string(image_total_size)
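# Illustrative example with assumed values: an overhead factor of 1.3,
# 100 MB of selected packages, an 8 MB rootfs floor and 10 MB of extra space
# give max(1.3 * 100, 8) + 10 = 140 MB before the zypper adjustment above.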
self.label.set_label("Packages included: %s\nSelected packages size: %s\nEstimated image size: %s" %
(selected_packages_num, selected_packages_size_str, image_total_size_str))
self.ins.show_indicator_icon("Included packages", selected_packages_num)
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.package_model.path_included(path):
self.package_model.include_item(item_path=path, binb="User Selected")
else:
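# Excluding an item first swaps in a temporary model keyed on COL_FADE_INC so
# the removed row can fade out; after_fadeout_checkin_include() then restores
# the normal filtered model once the animation stops.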
self.pre_fadeout_checkout_include(view_tree)
self.package_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.set_base_image()
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
view_model = view_tree.get_model()
vpath = self.package_model.convert_path_to_vpath(view_model, path)
view_tree.set_cursor(vpath)
def table_toggled_cb(self, table, cell, view_path, toggled_columnid, view_tree, pagename):
# Click to include a package
self.builder.window_sensitive(False)
view_model = view_tree.get_model()
path = self.package_model.convert_vpath_to_path(view_model, view_path)
glib.idle_add(self.toggle_item_idle_cb, path, view_tree, cell, pagename)
def pre_fadeout_checkout_include(self, tree):
#after the fadeout the table will be sorted as before
self.sort_column_id = self.package_model.sort_column_id
self.sort_order = self.package_model.sort_order
self.package_model.resync_fadeout_column(self.package_model.get_iter_first())
# Check out a model based on the COL_FADE_INC column, which saves the
# previous state of COL_INC before exclude_item is called
filter = { PackageListModel.COL_FADE_INC : [True]}
new_model = self.package_model.tree_model(filter, excluded_items_ahead=True)
tree.set_model(new_model)
tree.expand_all()
def get_excluded_rows(self, to_render_cells, model, it):
while it:
path = model.get_path(it)
prev_cell_is_active = model.get_value(it, PackageListModel.COL_FADE_INC)
curr_cell_is_active = model.get_value(it, PackageListModel.COL_INC)
if (prev_cell_is_active == True) and (curr_cell_is_active == False):
to_render_cells.append(path)
if model.iter_has_child(it):
self.get_excluded_rows(to_render_cells, model, model.iter_children(it))
it = model.iter_next(it)
return to_render_cells
def render_fadeout(self, tree, cell):
if (not cell) or (not tree):
return
to_render_cells = []
view_model = tree.get_model()
self.get_excluded_rows(to_render_cells, view_model, view_model.get_iter_first())
cell.fadeout(tree, 1000, to_render_cells)
def after_fadeout_checkin_include(self, table, ctrl, cell, tree, filter):
self.package_model.sort_column_id = self.sort_column_id
self.package_model.sort_order = self.sort_order
tree.set_model(self.package_model.tree_model(filter))
tree.expand_all()
def set_packages_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)


@@ -0,0 +1,335 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import glib
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import HobViewTable, HobNotebook, HobAltButton, HobButton
from bb.ui.crumbs.hoblistmodel import RecipeListModel
from bb.ui.crumbs.hobpages import HobPage
#
# RecipeSelectionPage
#
class RecipeSelectionPage (HobPage):
pages = [
{
'name' : 'Included recipes',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Brought in by (+others)',
'col_id' : RecipeListModel.COL_BINB,
'col_style': 'binb',
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
'col_style': 'check toggle',
'col_min' : 100,
'col_max' : 100
}]
}, {
'name' : 'All recipes',
'tooltip' : 'All recipes in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Group',
'col_id' : RecipeListModel.COL_GROUP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'License',
'col_id' : RecipeListModel.COL_LIC,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
'col_style': 'check toggle',
'col_min' : 100,
'col_max' : 100
}]
}, {
'name' : 'Package Groups',
'tooltip' : 'All package groups in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['packagegroup'] },
'search' : 'Search package groups by name',
'searchtip' : 'Enter a package group name to find it',
'columns' : [{
'col_name' : 'Package group name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
'col_min' : 100,
'col_max' : 400,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : RecipeListModel.COL_INC,
'col_style': 'check toggle',
'col_min' : 100,
'col_max' : 100
}]
}
]
(INCLUDED,
ALL,
TASKS) = range(3)
def __init__(self, builder = None):
super(RecipeSelectionPage, self).__init__(builder, "Step 1 of 2: Edit recipes")
# set invisible members
self.recipe_model = self.builder.recipe_model
# create visual elements
self.create_visual_elements()
def included_clicked_cb(self, button):
self.ins.set_current_page(self.INCLUDED)
def create_visual_elements(self):
self.eventbox = self.add_onto_top_bar(None, 73)
self.pack_start(self.eventbox, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
# set visible members
self.ins = HobNotebook()
self.tables = [] # we need to modify the tables when the dialog is shown
search_names = []
search_tips = []
# append the tabs in order
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
filter = page['filter']
sort_model = self.recipe_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include, filter)
self.ins.append_page(tab, page['name'], page['tooltip'])
self.tables.append(tab)
self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
# add all into the window
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
button_box = gtk.HBox(False, 6)
self.box_group_area.pack_end(button_box, expand=False, fill=False)
self.build_packages_button = HobButton('Build packages')
#self.build_packages_button.set_size_request(205, 49)
self.build_packages_button.set_tooltip_text("Build selected recipes into packages")
self.build_packages_button.set_flags(gtk.CAN_DEFAULT)
self.build_packages_button.grab_default()
self.build_packages_button.connect("clicked", self.build_packages_clicked_cb)
button_box.pack_end(self.build_packages_button, expand=False, fill=False)
self.back_button = HobAltButton('Cancel')
self.back_button.connect("clicked", self.back_button_clicked_cb)
button_box.pack_end(self.back_button, expand=False, fill=False)
def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)
def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[RecipeListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.recipe_model.tree_model(filter, search_data=text))
if self.recipe_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.set_no_show_all(True)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)
def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'summary': '', 'name': '', 'version': '', 'revision': '', 'binb': '', 'group': '', 'license': '', 'homepage': '', 'bugtracker': '', 'description': ''}
properties['summary'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_SUMMARY)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_NAME)
properties['version'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_VERSION)
properties['revision'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_REVISION)
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BINB)
properties['group'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_GROUP)
properties['license'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_LIC)
properties['homepage'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_HOMEPAGE)
properties['bugtracker'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BUGTRACKER)
properties['description'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_DESC)
self.builder.show_recipe_property_dialog(properties)
def build_packages_clicked_cb(self, button):
self.refresh_tables()
self.builder.build_packages()
def refresh_tables(self):
self.ins.reset_entry(self.ins.search, 0)
for tab in self.tables:
index = self.tables.index(tab)
filter = self.pages[index]['filter']
tab.set_model(self.recipe_model.tree_model(filter, search_data="", initial=True))
def back_button_clicked_cb(self, button):
self.builder.recipe_model.set_selected_image(self.builder.configuration.initial_selected_image)
self.builder.image_configuration_page.update_image_combo(self.builder.recipe_model, self.builder.configuration.initial_selected_image)
self.builder.image_configuration_page.update_image_desc()
self.builder.show_configuration()
self.refresh_tables()
def refresh_selection(self):
self.builder.configuration.selected_image = self.recipe_model.get_selected_image()
_, self.builder.configuration.selected_recipes = self.recipe_model.get_selected_recipes()
self.ins.show_indicator_icon("Included recipes", len(self.builder.configuration.selected_recipes))
def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
if not self.recipe_model.path_included(path):
self.recipe_model.include_item(item_path=path, binb="User Selected", image_contents=False)
else:
self.pre_fadeout_checkout_include(view_tree, pagename)
self.recipe_model.exclude_item(item_path=path)
self.render_fadeout(view_tree, cell)
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.rcppkglist_populated()
self.builder.window_sensitive(True)
view_model = view_tree.get_model()
vpath = self.recipe_model.convert_path_to_vpath(view_model, path)
view_tree.set_cursor(vpath)
def table_toggled_cb(self, table, cell, view_path, toggled_columnid, view_tree, pagename):
# Click to include a recipe
self.builder.window_sensitive(False)
view_model = view_tree.get_model()
path = self.recipe_model.convert_vpath_to_path(view_model, view_path)
glib.idle_add(self.toggle_item_idle_cb, path, view_tree, cell, pagename)
def pre_fadeout_checkout_include(self, tree, pagename):
#after the fadeout the table will be sorted as before
self.sort_column_id = self.recipe_model.sort_column_id
self.sort_order = self.recipe_model.sort_order
#resync the included items to a backup fade include column
it = self.recipe_model.get_iter_first()
while it:
active = self.recipe_model.get_value(it, self.recipe_model.COL_INC)
self.recipe_model.set(it, self.recipe_model.COL_FADE_INC, active)
it = self.recipe_model.iter_next(it)
# Check out a model based on the COL_FADE_INC column, which saves the
# previous state of COL_INC before exclude_item is called
filter = { RecipeListModel.COL_FADE_INC:[True] }
if pagename == "Included recipes":
filter[RecipeListModel.COL_TYPE] = ['recipe', 'packagegroup']
elif pagename == "All recipes":
filter[RecipeListModel.COL_TYPE] = ['recipe']
else:
filter[RecipeListModel.COL_TYPE] = ['packagegroup']
new_model = self.recipe_model.tree_model(filter, excluded_items_ahead=True)
tree.set_model(new_model)
def render_fadeout(self, tree, cell):
if (not cell) or (not tree):
return
to_render_cells = []
model = tree.get_model()
it = model.get_iter_first()
while it:
path = model.get_path(it)
prev_cell_is_active = model.get_value(it, RecipeListModel.COL_FADE_INC)
curr_cell_is_active = model.get_value(it, RecipeListModel.COL_INC)
if (prev_cell_is_active == True) and (curr_cell_is_active == False):
to_render_cells.append(path)
it = model.iter_next(it)
cell.fadeout(tree, 1000, to_render_cells)
def after_fadeout_checkin_include(self, table, ctrl, cell, tree, filter):
self.recipe_model.sort_column_id = self.sort_column_id
self.recipe_model.sort_order = self.sort_order
tree.set_model(self.recipe_model.tree_model(filter))
def set_recipe_curr_tab(self, curr_page):
self.ins.set_current_page(curr_page)


@@ -0,0 +1,85 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Bogdan Marinescu <bogdan.a.marinescu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk, gobject
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic
from bb.ui.crumbs.hobpages import HobPage
#
# SanityCheckPage
#
class SanityCheckPage (HobPage):
def __init__(self, builder):
super(SanityCheckPage, self).__init__(builder)
self.running = False
self.create_visual_elements()
self.show_all()
def make_label(self, text, bold=True):
label = gtk.Label()
label.set_alignment(0.0, 0.5)
mark = "<span %s>%s</span>" % (self.span_tag('x-large', 'bold') if bold else self.span_tag('medium'), text)
label.set_markup(mark)
return label
def start(self):
if not self.running:
self.running = True
gobject.timeout_add(100, self.timer_func)
def stop(self):
self.running = False
def is_running(self):
return self.running
def timer_func(self):
self.progress_bar.pulse()
return self.running
def create_visual_elements(self):
# Table'd layout. 'rows' and 'cols' give the table size
rows, cols = 30, 50
self.table = gtk.Table(rows, cols, True)
self.pack_start(self.table, expand=False, fill=False)
sx, sy = 2, 2
# 'info' icon
image = gtk.Image()
image.set_from_file(hic.ICON_INFO_DISPLAY_FILE)
self.table.attach(image, sx, sx + 2, sy, sy + 3 )
image.show()
# 'Checking' message
label = self.make_label('Hob is checking for correct build system setup')
self.table.attach(label, sx + 2, cols, sy, sy + 3, xpadding=5 )
label.show()
# 'Shouldn't take long' message.
label = self.make_label("The check shouldn't take long.", False)
self.table.attach(label, sx + 2, cols, sy + 3, sy + 4, xpadding=5)
label.show()
# Progress bar
self.progress_bar = HobProgressBar()
self.table.attach(self.progress_bar, sx + 2, cols - 3, sy + 5, sy + 7, xpadding=5)
self.progress_bar.show()
# All done
self.table.show()

bitbake/lib/bb/ui/hob.py Executable file

@@ -0,0 +1,109 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import os
requirements = "FATAL: Hob requires Gtk+ 2.20.0 or higher, PyGtk 2.21.0 or higher"
try:
import gobject
import gtk
import pygtk
pygtk.require('2.0') # to be certain we don't have gtk+ 1.x !?!
gtkver = gtk.gtk_version
pygtkver = gtk.pygtk_version
if gtkver < (2, 20, 0) or pygtkver < (2, 21, 0):
sys.exit("%s,\nYou have Gtk+ %s and PyGtk %s." % (requirements,
".".join(map(str, gtkver)),
".".join(map(str, pygtkver))))
except ImportError as exc:
sys.exit("%s (%s)." % (requirements, str(exc)))
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
from bb.ui import uihelper
from bb.ui.crumbs.hoblistmodel import RecipeListModel, PackageListModel
from bb.ui.crumbs.hobeventhandler import HobHandler
from bb.ui.crumbs.builder import Builder
featureSet = [bb.cooker.CookerFeatures.HOB_EXTRA_CACHES, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
def event_handle_idle_func(eventHandler, hobHandler):
# Consume as many messages as we can in the time available to us
if not eventHandler:
return False
event = eventHandler.getEvent()
while event:
hobHandler.handle_event(event)
event = eventHandler.getEvent()
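# Returning True keeps this gobject timeout source scheduled so the event
# queue is polled again; the False return above (no event handler) removes it.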
return True
_evt_list = [ "bb.runqueue.runQueueExitWait", "bb.event.LogExecTTY", "logging.LogRecord",
"bb.build.TaskFailed", "bb.build.TaskBase", "bb.event.ParseStarted",
"bb.event.ParseProgress", "bb.event.ParseCompleted", "bb.event.CacheLoadStarted",
"bb.event.CacheLoadProgress", "bb.event.CacheLoadCompleted", "bb.command.CommandFailed",
"bb.command.CommandExit", "bb.command.CommandCompleted", "bb.cooker.CookerExit",
"bb.event.MultipleProviders", "bb.event.NoProvider", "bb.runqueue.sceneQueueTaskStarted",
"bb.runqueue.runQueueTaskStarted", "bb.runqueue.runQueueTaskFailed", "bb.runqueue.sceneQueueTaskFailed",
"bb.event.BuildBase", "bb.build.TaskStarted", "bb.build.TaskSucceeded", "bb.build.TaskFailedSilent",
"bb.event.SanityCheckPassed", "bb.event.SanityCheckFailed", "bb.event.PackageInfo",
"bb.event.TargetsTreeGenerated", "bb.event.ConfigFilesFound", "bb.event.ConfigFilePathFound",
"bb.event.FilesMatchingFound", "bb.event.NetworkTestFailed", "bb.event.NetworkTestPassed",
"bb.event.BuildStarted", "bb.event.BuildCompleted", "bb.event.DiskFull"]
def main (server, eventHandler, params):
params.updateFromServer(server)
gobject.threads_init()
# That indicates whether the Hob and the bitbake server are
# running on different machines
# recipe model and package model
recipe_model = RecipeListModel()
package_model = PackageListModel()
llevel, debug_domains = bb.msg.constructLogOptions()
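# setEventMask asks the server to forward only the event classes listed in
# _evt_list, filtered by the requested log level and debug domains.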
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
hobHandler = HobHandler(server, recipe_model, package_model)
builder = Builder(hobHandler, recipe_model, package_model)
# This timeout function regularly probes the event queue to find out if we
# have any messages waiting for us.
gobject.timeout_add(10, event_handle_idle_func, eventHandler, hobHandler)
try:
gtk.main()
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
finally:
hobHandler.cancel_build(force = True)
if __name__ == "__main__":
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc(15)
sys.exit(ret)


@@ -104,11 +104,10 @@ class InteractConsoleLogFilter(logging.Filter):
return True
class TerminalFilter(object):
rows = 25
columns = 80
def sigwinch_handle(self, signum, frame):
self.rows, self.columns = self.getTerminalColumns()
self.columns = self.getTerminalColumns()
if self._sigwinch_default:
self._sigwinch_default(signum, frame)
@@ -132,7 +131,7 @@ class TerminalFilter(object):
cr = (env['LINES'], env['COLUMNS'])
except:
cr = (25, 80)
return cr
return cr[1]
def __init__(self, main, helper, console, errconsole, format):
self.main = main
@@ -171,13 +170,9 @@ class TerminalFilter(object):
signal.signal(signal.SIGWINCH, self.sigwinch_handle)
except:
pass
self.rows, self.columns = self.getTerminalColumns()
self.columns = self.getTerminalColumns()
except:
self.cuu = None
if not self.cuu:
self.interactive = False
bb.note("Unable to use interactive mode for this terminal, using fallback")
return
console.addFilter(InteractConsoleLogFilter(self, format))
errconsole.addFilter(InteractConsoleLogFilter(self, format))
@@ -212,7 +207,7 @@ class TerminalFilter(object):
content = "Currently %s running tasks (%s of %s):" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
print(content)
lines = 1 + int(len(content) / (self.columns + 1))
for tasknum, task in enumerate(tasks[:(self.rows - 2)]):
for tasknum, task in enumerate(tasks):
content = "%s: %s" % (tasknum, task)
print(content)
lines = lines + 1 + int(len(content) / (self.columns + 1))
@@ -272,13 +267,10 @@ def main(server, eventHandler, params, tf = TerminalFilter):
logger.addHandler(console)
logger.addHandler(errconsole)
bb.utils.set_process_name("KnottyUI")
if params.options.remote_server and params.options.kill_server:
server.terminateServer()
return
consolelog = None
if consolelogfile and not params.options.show_environment and not params.options.show_versions:
bb.utils.mkdirhier(os.path.dirname(consolelogfile))
conlogformat = bb.msg.BBLogFormatter(format_str)
@@ -290,7 +282,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
llevel, debug_domains = bb.msg.constructLogOptions()
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
universe = False
if not params.observe_only:
params.updateFromServer(server)
params.updateToServer(server, os.environ.copy())
@@ -301,8 +292,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if 'msg' in cmdline and cmdline['msg']:
logger.error(cmdline['msg'])
return 1
if cmdline['action'][0] == "buildTargets" and "universe" in cmdline['action'][1]:
universe = True
ret, error = server.runCommand(cmdline['action'])
if error:
@@ -351,7 +340,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
tries -= 1
if tries:
continue
logger.warning(event.msg)
logger.warn(event.msg)
continue
if isinstance(event, logging.LogRecord):
@@ -360,25 +349,16 @@ def main(server, eventHandler, params, tf = TerminalFilter):
return_value = 1
elif event.levelno == format.WARNING:
warnings = warnings + 1
if event.taskpid != 0:
# For "normal" logging conditions, don't show note logs from tasks
# but do show them if the user has changed the default log level to
# include verbose/debug messages
if event.levelno <= format.NOTE and (event.levelno < llevel or (event.levelno == format.NOTE and llevel != format.VERBOSE)):
continue
# Prefix task messages with recipe/task
if event.taskpid in helper.running_tasks:
taskinfo = helper.running_tasks[event.taskpid]
event.msg = taskinfo['title'] + ': ' + event.msg
if hasattr(event, 'fn'):
event.msg = event.fn + ': ' + event.msg
# For "normal" logging conditions, don't show note logs from tasks
# but do show them if the user has changed the default log level to
# include verbose/debug messages
if event.taskpid != 0 and event.levelno <= format.NOTE and (event.levelno < llevel or (event.levelno == format.NOTE and llevel != format.VERBOSE)):
continue
logger.handle(event)
continue
if isinstance(event, bb.build.TaskFailedSilent):
logger.warning("Logfile for failed setscene task is %s" % event.logfile)
logger.warn("Logfile for failed setscene task is %s" % event.logfile)
continue
if isinstance(event, bb.build.TaskFailed):
return_value = 1
@@ -454,12 +434,11 @@ def main(server, eventHandler, params, tf = TerminalFilter):
logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
event._item,
", ".join(event._candidates))
rtime = ""
if event._is_runtime:
rtime = "R"
logger.info("consider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, event._item))
logger.info("consider defining a PREFERRED_PROVIDER entry to match %s", event._item)
continue
if isinstance(event, bb.event.NoProvider):
return_value = 1
errors = errors + 1
if event._runtime:
r = "R"
else:
@@ -470,20 +449,13 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if event._close_matches:
extra = ". Close matches:\n %s" % '\n '.join(event._close_matches)
# For universe builds, only show these as warnings, not errors
h = logger.warning
if not universe:
return_value = 1
errors = errors + 1
h = logger.error
if event._dependees:
h("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s", r, event._item, ", ".join(event._dependees), r, extra)
logger.error("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s", r, event._item, ", ".join(event._dependees), r, extra)
else:
h("Nothing %sPROVIDES '%s'%s", r, event._item, extra)
logger.error("Nothing %sPROVIDES '%s'%s", r, event._item, extra)
if event._reasons:
for reason in event._reasons:
h("%s", reason)
logger.error("%s", reason)
continue
if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
@@ -503,15 +475,14 @@ def main(server, eventHandler, params, tf = TerminalFilter):
continue
if isinstance(event, bb.runqueue.runQueueTaskFailed):
return_value = 1
taskfailures.append(event.taskstring)
logger.error("Task %s (%s) failed with exit code '%s'",
event.taskid, event.taskstring, event.exitcode)
continue
if isinstance(event, bb.runqueue.sceneQueueTaskFailed):
logger.warning("Setscene task %s (%s) failed with exit code '%s' - real task will be run instead",
event.taskid, event.taskstring, event.exitcode)
logger.warn("Setscene task %s (%s) failed with exit code '%s' - real task will be run instead",
event.taskid, event.taskstring, event.exitcode)
continue
if isinstance(event, bb.event.DepTreeGenerated):
@@ -561,12 +532,10 @@ def main(server, eventHandler, params, tf = TerminalFilter):
main.shutdown = main.shutdown + 1
pass
except Exception as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
if not params.observe_only:
_, error = server.runCommand(["stateForceShutdown"])
main.shutdown = 2
return_value = 1
try:
summary = ""
if taskfailures:
@@ -592,8 +561,4 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if e.errno == errno.EPIPE:
pass
if consolelog:
logger.removeHandler(consolelog)
consolelog.close()
return return_value

bitbake/lib/bb/ui/puccho.py Normal file

@@ -0,0 +1,425 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import gtk.glade
import threading
import urllib2
import os
import contextlib
from bb.ui.crumbs.buildmanager import BuildManager, BuildConfiguration
from bb.ui.crumbs.buildmanager import BuildManagerTreeView
from bb.ui.crumbs.runningbuild import RunningBuild, RunningBuildTreeView
# The metadata loader is used by the BuildSetupDialog to download the
# available options to populate the dialog
class MetaDataLoader(gobject.GObject):
""" This class provides the mechanism for loading the metadata (the
fetching and parsing) from a given URL. The metadata encompasses details
on what machines are available, the distributions and images available for
each machine, and the URIs to use when building it."""
__gsignals__ = {
'success' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'error' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,))
}
# We use these little helper functions to ensure that we take the gdk lock
# when emitting the signal. These functions are called as idles (so that
# they happen in the gtk / main thread's main loop).
def emit_error_signal (self, remark):
gtk.gdk.threads_enter()
self.emit ("error", remark)
gtk.gdk.threads_leave()
def emit_success_signal (self):
gtk.gdk.threads_enter()
self.emit ("success")
gtk.gdk.threads_leave()
def __init__ (self):
gobject.GObject.__init__ (self)
class LoaderThread(threading.Thread):
""" This class provides an asynchronous loader for the metadata (by
using threads and signals). This is useful since the metadata may be
at a remote URL."""
class LoaderImportException (Exception):
pass
def __init__(self, loader, url):
threading.Thread.__init__ (self)
self.url = url
self.loader = loader
def run (self):
result = {}
try:
with contextlib.closing (urllib2.urlopen (self.url)) as f:
# Parse the metadata format. The format is....
# <machine>;<default distro>|<distro>...;<default image>|<image>...;<type##url>|...
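# As an illustration (hypothetical values), a line such as
#   "qemux86;poky|poky-tiny;core-image-minimal|core-image-sato;git##git://host/repo"
# splits into the machine "qemux86", distros ['poky', 'poky-tiny'],
# images ['core-image-minimal', 'core-image-sato'] and a single type##url entry.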
for line in f:
components = line.split(";")
if (len (components) < 4):
raise MetaDataLoader.LoaderThread.LoaderImportException
machine = components[0]
distros = components[1].split("|")
images = components[2].split("|")
urls = components[3].split("|")
result[machine] = (distros, images, urls)
# Create an object representing this *potential*
# configuration. It can become concrete if the machine, distro
# and image are all chosen in the UI
configuration = BuildConfiguration()
configuration.metadata_url = self.url
configuration.machine_options = result
self.loader.configuration = configuration
# Emit that we've actually got a configuration
gobject.idle_add (MetaDataLoader.emit_success_signal,
self.loader)
except MetaDataLoader.LoaderThread.LoaderImportException as e:
gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
"Repository metadata corrupt")
except Exception as e:
gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
"Unable to download repository metadata")
print(e)
def try_fetch_from_url (self, url):
# Try and download the metadata. Firing a signal if successful
thread = MetaDataLoader.LoaderThread(self, url)
thread.start()
class BuildSetupDialog (gtk.Dialog):
RESPONSE_BUILD = 1
# A little helper method that just sets the states on the widgets based on
# whether we've got good metadata or not.
def set_configurable (self, configurable):
if (self.configurable == configurable):
return
self.configurable = configurable
for widget in self.conf_widgets:
widget.set_sensitive (configurable)
if not configurable:
self.machine_combo.set_active (-1)
self.distribution_combo.set_active (-1)
self.image_combo.set_active (-1)
# GTK widget callbacks
def refresh_button_clicked (self, button):
# Refresh button clicked.
url = self.location_entry.get_chars (0, -1)
self.loader.try_fetch_from_url(url)
def repository_entry_editable_changed (self, entry):
if (len (entry.get_chars (0, -1)) > 0):
self.refresh_button.set_sensitive (True)
else:
self.refresh_button.set_sensitive (False)
self.clear_status_message()
# If we were previously configurable we are no longer since the
# location entry has been changed
self.set_configurable (False)
def machine_combo_changed (self, combobox):
active_iter = combobox.get_active_iter()
if not active_iter:
return
model = combobox.get_model()
if model:
chosen_machine = model.get (active_iter, 0)[0]
(distros_model, images_model) = \
self.loader.configuration.get_distro_and_images_models (chosen_machine)
self.distribution_combo.set_model (distros_model)
self.image_combo.set_model (images_model)
# Callbacks from the loader
def loader_success_cb (self, loader):
self.status_image.set_from_icon_name ("info",
gtk.ICON_SIZE_BUTTON)
self.status_image.show()
self.status_label.set_label ("Repository metadata successfully downloaded")
# Set the models on the combo boxes based on the models generated from
# the configuration that the loader has created
# We just need to set the machine here, that then determines the
# distro and image options. Cunning huh? :-)
self.configuration = self.loader.configuration
model = self.configuration.get_machines_model ()
self.machine_combo.set_model (model)
self.set_configurable (True)
def loader_error_cb (self, loader, message):
self.status_image.set_from_icon_name ("error",
gtk.ICON_SIZE_BUTTON)
self.status_image.show()
self.status_label.set_text ("Error downloading repository metadata")
for widget in self.conf_widgets:
widget.set_sensitive (False)
def clear_status_message (self):
self.status_image.hide()
self.status_label.set_label (
"""<i>Enter the repository location and press _Refresh</i>""")
def __init__ (self):
gtk.Dialog.__init__ (self)
# Cancel
self.add_button (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL)
# Build
button = gtk.Button ("_Build", None, True)
image = gtk.Image ()
image.set_from_stock (gtk.STOCK_EXECUTE, gtk.ICON_SIZE_BUTTON)
button.set_image (image)
self.add_action_widget (button, BuildSetupDialog.RESPONSE_BUILD)
button.show_all ()
# Pull in *just* the table from the Glade XML data.
gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
root = "build_table")
table = gxml.get_widget ("build_table")
self.vbox.pack_start (table, True, False, 0)
# Grab all the widgets that we need to turn on/off when we refresh...
self.conf_widgets = []
self.conf_widgets += [gxml.get_widget ("machine_label")]
self.conf_widgets += [gxml.get_widget ("distribution_label")]
self.conf_widgets += [gxml.get_widget ("image_label")]
self.conf_widgets += [gxml.get_widget ("machine_combo")]
self.conf_widgets += [gxml.get_widget ("distribution_combo")]
self.conf_widgets += [gxml.get_widget ("image_combo")]
# Grab the status widgets
self.status_image = gxml.get_widget ("status_image")
self.status_label = gxml.get_widget ("status_label")
# Grab the refresh button and connect to the clicked signal
self.refresh_button = gxml.get_widget ("refresh_button")
self.refresh_button.connect ("clicked", self.refresh_button_clicked)
# Grab the location entry and connect to editable::changed
self.location_entry = gxml.get_widget ("location_entry")
self.location_entry.connect ("changed",
self.repository_entry_editable_changed)
# Grab the machine combo and hook onto the changed signal. This then
# allows us to populate the distro and image combos
self.machine_combo = gxml.get_widget ("machine_combo")
self.machine_combo.connect ("changed", self.machine_combo_changed)
# Setup the combo
cell = gtk.CellRendererText()
self.machine_combo.pack_start(cell, True)
self.machine_combo.add_attribute(cell, 'text', 0)
# Grab the distro and image combos. We need these to populate with
# models once the machine is chosen
self.distribution_combo = gxml.get_widget ("distribution_combo")
cell = gtk.CellRendererText()
self.distribution_combo.pack_start(cell, True)
self.distribution_combo.add_attribute(cell, 'text', 0)
self.image_combo = gxml.get_widget ("image_combo")
cell = gtk.CellRendererText()
self.image_combo.pack_start(cell, True)
self.image_combo.add_attribute(cell, 'text', 0)
# Put the default descriptive text in the status box
self.clear_status_message()
# Mark as non-configurable; this just greys out the widgets the
# user can't yet use
self.configurable = False
self.set_configurable(False)
# Show the table
table.show_all ()
# The loader and some signals connected to it to update the status
# area
self.loader = MetaDataLoader()
self.loader.connect ("success", self.loader_success_cb)
self.loader.connect ("error", self.loader_error_cb)
def update_configuration (self):
""" A poorly named function but it updates the internal configuration
from the widgets. This can make that configuration concrete and can
thus be used for building """
# Extract the chosen machine from the combo
model = self.machine_combo.get_model()
active_iter = self.machine_combo.get_active_iter()
if (active_iter):
self.configuration.machine = model.get(active_iter, 0)[0]
# Extract the chosen distro from the combo
model = self.distribution_combo.get_model()
active_iter = self.distribution_combo.get_active_iter()
if (active_iter):
self.configuration.distro = model.get(active_iter, 0)[0]
# Extract the chosen image from the combo
model = self.image_combo.get_model()
active_iter = self.image_combo.get_active_iter()
if (active_iter):
self.configuration.image = model.get(active_iter, 0)[0]
# This function pulls events out of the event queue and pushes them into the
# RunningBuild, which in turn updates the progress tree view.
#
# TODO: Should be a method on the RunningBuild class
def event_handle_timeout (eventHandler, build):
# Consume as many messages as we can ...
event = eventHandler.getEvent()
while event:
build.handle_event (event)
event = eventHandler.getEvent()
return True
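The TODO above suggests folding this polling helper into RunningBuild itself. A minimal sketch of what that refactoring could look like (hypothetical subclass and method name; it assumes RunningBuild and its handle_event() method are importable from bb.ui.crumbs.runningbuild):

# Hypothetical refactoring sketch only -- not part of the original file.
from bb.ui.crumbs.runningbuild import RunningBuild

class PollingRunningBuild (RunningBuild):
    def handle_pending_events (self, eventHandler):
        # Drain the queue on every timer tick; returning True keeps the
        # gobject timeout registered so it fires again.
        event = eventHandler.getEvent()
        while event:
            self.handle_event (event)
            event = eventHandler.getEvent()
        return True

# main() could then register the method directly:
#     gobject.timeout_add (200, running_build.handle_pending_events, eventHandler)
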
class MainWindow (gtk.Window):
# Callback that gets fired when the user hits a button in the
# BuildSetupDialog.
def build_dialog_box_response_cb (self, dialog, response_id):
conf = None
if (response_id == BuildSetupDialog.RESPONSE_BUILD):
dialog.update_configuration()
print(dialog.configuration.machine, dialog.configuration.distro, \
dialog.configuration.image)
conf = dialog.configuration
dialog.destroy()
if conf:
self.manager.do_build (conf)
def build_button_clicked_cb (self, button):
dialog = BuildSetupDialog ()
# For some unknown reason Dialog.run causes nice little deadlocks ... :-(
dialog.connect ("response", self.build_dialog_box_response_cb)
dialog.show()
def __init__ (self):
gtk.Window.__init__ (self)
# Pull in *just* the main vbox from the Glade XML data and then pack
# that inside the window
gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
root = "main_window_vbox")
vbox = gxml.get_widget ("main_window_vbox")
self.add (vbox)
# Create the tree views for the build manager view and the progress view
self.build_manager_view = BuildManagerTreeView()
self.running_build_view = RunningBuildTreeView()
# Grab the scrolled windows that we put the tree views into
self.results_scrolledwindow = gxml.get_widget ("results_scrolledwindow")
self.progress_scrolledwindow = gxml.get_widget ("progress_scrolledwindow")
# Put the tree views inside ...
self.results_scrolledwindow.add (self.build_manager_view)
self.progress_scrolledwindow.add (self.running_build_view)
# Hook up the build button...
self.build_button = gxml.get_widget ("main_toolbutton_build")
self.build_button.connect ("clicked", self.build_button_clicked_cb)
# I'm not very happy about the current ownership of the RunningBuild. I have
# my suspicions that this object should be held by the BuildManager since we
# care about the signals in the manager
def running_build_succeeded_cb (running_build, manager):
# Notify the manager that a build has succeeded. This is necessary as part
# of the 'hack' that we use for making the row in the model / view
# representing the ongoing build change into a row representing the
# completed build. Since we know only one build can be running at a time
# we can handle this.
# FIXME: Refactor all this so that the RunningBuild is owned by the
# BuildManager. It can then hook onto the signals directly and drive
# interesting things it cares about.
manager.notify_build_succeeded ()
print("build succeeded")
def running_build_failed_cb (running_build, manager):
# As above
print("build failed")
manager.notify_build_failed ()
def main (server, eventHandler):
# Initialise threading...
gobject.threads_init()
gtk.gdk.threads_init()
main_window = MainWindow ()
main_window.show_all ()
# Set up the build manager stuff in general
builds_dir = os.path.join (os.getcwd(), "results")
manager = BuildManager (server, builds_dir)
main_window.build_manager_view.set_model (manager.model)
# Do the running build setup
running_build = RunningBuild ()
main_window.running_build_view.set_model (running_build.model)
running_build.connect ("build-succeeded", running_build_succeeded_cb,
manager)
running_build.connect ("build-failed", running_build_failed_cb, manager)
# We need to save the manager into the MainWindow so that the toolbar
# button can use it.
# FIXME: Refactor ?
main_window.manager = manager
# Use a timeout function for probing the event queue to find out if we
# have a message waiting for us.
gobject.timeout_add (200,
event_handle_timeout,
eventHandler,
running_build)
gtk.main()


@@ -39,7 +39,7 @@ import os
# module properties for UI modules are read by bitbake and the contract should not be broken
featureSet = [bb.cooker.CookerFeatures.HOB_EXTRA_CACHES, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING, bb.cooker.CookerFeatures.SEND_SANITYEVENTS]
featureSet = [bb.cooker.CookerFeatures.HOB_EXTRA_CACHES, bb.cooker.CookerFeatures.SEND_DEPENDS_TREE, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING, bb.cooker.CookerFeatures.SEND_SANITYEVENTS]
logger = logging.getLogger("ToasterLogger")
interactive = sys.stdout.isatty()
@@ -92,43 +92,6 @@ def _close_build_log(build_log):
build_log.close()
logger.removeHandler(build_log)
_evt_list = [
"bb.build.TaskBase",
"bb.build.TaskFailed",
"bb.build.TaskFailedSilent",
"bb.build.TaskStarted",
"bb.build.TaskSucceeded",
"bb.command.CommandCompleted",
"bb.command.CommandExit",
"bb.command.CommandFailed",
"bb.cooker.CookerExit",
"bb.event.BuildCompleted",
"bb.event.BuildStarted",
"bb.event.CacheLoadCompleted",
"bb.event.CacheLoadProgress",
"bb.event.CacheLoadStarted",
"bb.event.ConfigParsed",
"bb.event.DepTreeGenerated",
"bb.event.LogExecTTY",
"bb.event.MetadataEvent",
"bb.event.MultipleProviders",
"bb.event.NoProvider",
"bb.event.ParseCompleted",
"bb.event.ParseProgress",
"bb.event.RecipeParsed",
"bb.event.SanityCheck",
"bb.event.SanityCheckPassed",
"bb.event.TreeDataPreparationCompleted",
"bb.event.TreeDataPreparationStarted",
"bb.runqueue.runQueueTaskCompleted",
"bb.runqueue.runQueueTaskFailed",
"bb.runqueue.runQueueTaskSkipped",
"bb.runqueue.runQueueTaskStarted",
"bb.runqueue.sceneQueueTaskCompleted",
"bb.runqueue.sceneQueueTaskFailed",
"bb.runqueue.sceneQueueTaskStarted",
"logging.LogRecord"]
def main(server, eventHandler, params):
# set to a logging.FileHandler instance when a build starts;
# see _open_build_log()
@@ -152,38 +115,18 @@ def main(server, eventHandler, params):
console.setFormatter(formatter)
logger.addHandler(console)
logger.setLevel(logging.INFO)
llevel, debug_domains = bb.msg.constructLogOptions()
result, error = server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
if not result or error:
logger.error("can't set event mask: %s", error)
return 1
# verify and warn
build_history_enabled = True
inheritlist, _ = server.runCommand(["getVariable", "INHERIT"])
if not "buildhistory" in inheritlist.split(" "):
logger.warning("buildhistory is not enabled. Please enable INHERIT += \"buildhistory\" to see image details.")
logger.warn("buildhistory is not enabled. Please enable INHERIT += \"buildhistory\" to see image details.")
build_history_enabled = False
if not params.observe_only:
params.updateFromServer(server)
params.updateToServer(server, os.environ.copy())
cmdline = params.parseActions()
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
if 'msg' in cmdline and cmdline['msg']:
logger.error(cmdline['msg'])
return 1
ret, error = server.runCommand(cmdline['action'])
if error:
logger.error("Command '%s' failed: %s" % (cmdline, error))
return 1
elif ret != True:
logger.error("Command '%s' failed: returned %s" % (cmdline, ret))
return 1
logger.error("ToasterUI can only work in observer mode")
return 1
# set to 1 when toasterui needs to shut down
main.shutdown = 0
@@ -195,8 +138,7 @@ def main(server, eventHandler, params):
taskfailures = []
first = True
buildinfohelper = BuildInfoHelper(server, build_history_enabled,
os.getenv('TOASTER_BRBE'))
buildinfohelper = BuildInfoHelper(server, build_history_enabled)
# write our own log files into bitbake's log directory;
# we're only interested in the path to the parent directory of
@@ -240,11 +182,12 @@ def main(server, eventHandler, params):
continue
if isinstance(event, bb.event.BuildStarted):
# command-line builds don't fire a ParseStarted event,
# so we have to start the log file for those on BuildStarted instead
if not (build_log and build_log_file_path):
build_log, build_log_file_path = _open_build_log(log_dir)
buildinfohelper.store_started_build(event, build_log_file_path)
continue
if isinstance(event, (bb.build.TaskStarted, bb.build.TaskSucceeded, bb.build.TaskFailedSilent)):
buildinfohelper.update_and_store_task(event)
@@ -372,30 +315,28 @@ def main(server, eventHandler, params):
# update the build info helper on BuildCompleted, not on CommandXXX
buildinfohelper.update_build_information(event, errors, warnings, taskfailures)
brbe = buildinfohelper.brbe
buildinfohelper.close(errorcode)
# mark the log output; controllers may kill the toasterUI after seeing this log
logger.info("ToasterUI build done 1, brbe: %s", buildinfohelper.brbe )
# we start a new build info
if params.observe_only:
if buildinfohelper.brbe is not None:
logger.debug("ToasterUI under BuildEnvironment management - exiting after the build")
server.terminateServer()
else:
logger.debug("ToasterUI prepared for new build")
errors = 0
warnings = 0
taskfailures = []
buildinfohelper = BuildInfoHelper(server, build_history_enabled)
else:
main.shutdown = 1
logger.info("ToasterUI build done, brbe: %s", brbe)
logger.info("ToasterUI build done 2")
continue
if isinstance(event, (bb.command.CommandCompleted,
bb.command.CommandFailed,
bb.command.CommandExit)):
if params.observe_only:
errorcode = 0
else:
main.shutdown = 1
errorcode = 0
continue
@@ -416,10 +357,6 @@ def main(server, eventHandler, params):
buildinfohelper.update_artifact_image_file(event)
elif event.type == "LicenseManifestPath":
buildinfohelper.store_license_manifest_path(event)
elif event.type == "SetBRBE":
buildinfohelper.brbe = buildinfohelper._get_data_from_event(event)
elif event.type == "OSErrorException":
logger.error(event)
else:
logger.error("Unprocessed MetadataEvent %s ", str(event))
continue
@@ -429,11 +366,24 @@ def main(server, eventHandler, params):
main.shutdown = 1
continue
# ignore
if isinstance(event, (bb.event.BuildBase,
bb.event.StampUpdate,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait,
bb.event.OperationProgress,
bb.command.CommandFailed,
bb.command.CommandExit,
bb.command.CommandCompleted,
bb.event.ReachableStamps)):
continue
if isinstance(event, bb.event.DepTreeGenerated):
buildinfohelper.store_dependency_information(event)
continue
logger.warning("Unknown event: %s", event)
logger.warn("Unknown event: %s", event)
return_value += 1
except EnvironmentError as ioerror:
@@ -449,6 +399,13 @@ def main(server, eventHandler, params):
exception_data = traceback.format_exc()
logger.error("%s\n%s" , e, exception_data)
_, _, tb = sys.exc_info()
if tb is not None:
curr = tb
while curr is not None:
logger.error("Error data dump %s\n%s\n" , traceback.format_tb(curr,1), pformat(curr.tb_frame.f_locals))
curr = curr.tb_next
# save them to database, if possible; if it fails, we already logged to console.
try:
buildinfohelper.store_log_exception("%s\n%s" % (str(e), exception_data))
@@ -461,5 +418,5 @@ def main(server, eventHandler, params):
if interrupted and return_value == 0:
return_value += 1
logger.warning("Return value is %d", return_value)
logger.warn("Return value is %d", return_value)
return return_value


@@ -24,7 +24,7 @@ server and queue them for the UI to process. This process must be used to avoid
client/server deadlocks.
"""
import socket, threading, pickle, collections
import socket, threading, pickle
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
class BBUIEventQueue:
@@ -44,32 +44,27 @@ class BBUIEventQueue:
server.register_function( self.send_event, "event.sendpickle" )
server.socket.settimeout(1)
self.EventHandle = None
self.EventHandler = None
count_tries = 0
# the event handler registration may fail here due to the cooker being in an invalid state
# this is a transient situation, and we should retry a couple of times before
# giving up
for count_tries in range(5):
ret = self.BBServer.registerEventHandler(self.host, self.port)
while self.EventHandler == None and count_tries < 5:
self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)
if isinstance(ret, collections.Iterable):
self.EventHandle, error = ret
else:
self.EventHandle = ret
error = ""
if self.EventHandle != None:
if (self.EventHandle != None):
break
errmsg = "Could not register UI event handler. Error: %s, host %s, "\
"port %d" % (error, self.host, self.port)
bb.warn("%s, retry" % errmsg)
bb.warn("Could not register UI event handler %s:%d, retry" % (self.host, self.port))
count_tries += 1
import time
time.sleep(1)
else:
raise Exception(errmsg)
if self.EventHandle == None:
raise Exception("Could not register UI event handler")
self.server = server
@@ -110,7 +105,6 @@ class BBUIEventQueue:
def startCallbackHandler(self):
self.server.timeout = 1
bb.utils.set_process_name("UIEventQueue")
while not self.server.quit:
try:
self.server.handle_request()


@@ -57,3 +57,44 @@ class BBUIHelper:
self.needUpdate = False
return (self.running_tasks, self.failed_tasks)
def findServerDetails(self):
import sys
import optparse
from bb.server.xmlrpc import BitbakeServerInfo, BitBakeServerConnection
host = ""
port = 0
bind = ""
parser = optparse.OptionParser(
usage = """%prog -H host -P port -B bindaddr""")
parser.add_option("-H", "--host", help = "Bitbake server's IP address",
action = "store", dest = "host", default = None)
parser.add_option("-P", "--port", help = "Bitbake server's Port number",
action = "store", dest = "port", default = None)
parser.add_option("-B", "--bind", help = "Hob2 local bind address",
action = "store", dest = "bind", default = None)
options, args = parser.parse_args(sys.argv)
for key, val in options.__dict__.items():
if key == 'host' and val:
host = val
elif key == 'port' and val:
port = int(val)
elif key == 'bind' and val:
bind = val
if not host or not port or not bind:
parser.print_usage()
sys.exit(1)
serverinfo = BitbakeServerInfo(host, port)
clientinfo = (bind, 0)
connection = BitBakeServerConnection(serverinfo, clientinfo)
server = connection.connection
eventHandler = connection.events
return server, eventHandler, host, bind


@@ -27,24 +27,18 @@ import bb
import bb.msg
import multiprocessing
import fcntl
import imp
import itertools
import subprocess
import glob
import fnmatch
import traceback
import errno
import signal
import ast
import collections
from commands import getstatusoutput
from contextlib import contextmanager
from ctypes import cdll
logger = logging.getLogger("BitBake.Util")
python_extensions = [e for e, _, _ in imp.get_suffixes()]
def clean_context():
return {
@@ -193,7 +187,7 @@ def explode_dep_versions2(s):
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
and return a dictionary of dependencies and versions.
"""
r = collections.OrderedDict()
r = {}
l = s.replace(",", "").split()
lastdep = None
lastcmp = ""
@@ -297,26 +291,19 @@ def _print_trace(body, line):
error.append(' %.4d:%s' % (i, body[i-1].rstrip()))
return error
def better_compile(text, file, realfile, mode = "exec", lineno = 0):
def better_compile(text, file, realfile, mode = "exec"):
"""
A better compile method. This method
will print the offending lines.
"""
try:
cache = bb.methodpool.compile_cache(text)
if cache:
return cache
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
text2 = "\n" * int(lineno) + text
code = compile(text2, realfile, mode)
bb.methodpool.compile_cache_add(text, code)
return code
return compile(text, file, mode)
except Exception as e:
error = []
# split the text into lines again
body = text.split('\n')
error.append("Error in compiling python function in %s, line %s:\n" % (realfile, lineno))
if hasattr(e, "lineno"):
error.append("Error in compiling python function in %s:\n" % realfile)
if e.lineno:
error.append("The code lines resulting in this error were:")
error.extend(_print_trace(body, e.lineno))
else:
@@ -336,10 +323,8 @@ def _print_exception(t, value, tb, realfile, text, context):
exception = traceback.format_exception_only(t, value)
error.append('Error executing a python function in %s:\n' % realfile)
# Strip 'us' from the stack (better_exec call) unless that was where the
# error came from
if tb.tb_next is not None:
tb = tb.tb_next
# Strip 'us' from the stack (better_exec call)
tb = tb.tb_next
textarray = text.split('\n')
@@ -368,6 +353,15 @@ def _print_exception(t, value, tb, realfile, text, context):
error.extend(_print_trace(text, tbextract[level+1][1]))
except:
error.append(tbformat[level+1])
elif "d" in context and tbextract[level+1][2]:
# Try and find the code in the datastore based on the functionname
d = context["d"]
functionname = tbextract[level+1][2]
text = d.getVar(functionname, True)
if text:
error.extend(_print_trace(text.split('\n'), tbextract[level+1][1]))
else:
error.append(tbformat[level+1])
else:
error.append(tbformat[level+1])
nexttb = tb.tb_next
@@ -377,7 +371,7 @@ def _print_exception(t, value, tb, realfile, text, context):
finally:
logger.error("\n".join(error))
def better_exec(code, context, text = None, realfile = "<code>", pythonexception=False):
def better_exec(code, context, text = None, realfile = "<code>"):
"""
Similar to better_compile, better_exec will
print the lines that are responsible for the
@@ -394,8 +388,6 @@ def better_exec(code, context, text = None, realfile = "<code>", pythonexception
# Error already shown so passthrough, no need for traceback
raise
except Exception as e:
if pythonexception:
raise
(t, value, tb) = sys.exc_info()
try:
_print_exception(t, value, tb, realfile, text, context)
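These two helpers are normally used together: compile a metadata-supplied snippet with better_compile(), then run it with better_exec() so that a failure is reported with the offending source lines. A minimal usage sketch, assuming bitbake's lib/ directory is on sys.path and the argument order shown in the signatures above:

# Minimal usage sketch; the snippet stands in for a python function body
# taken from recipe metadata.
import bb.utils
import bb.methodpool   # used by the caching variant of better_compile above

snippet = "print('hello from metadata')\n"
code = bb.utils.better_compile(snippet, "<snippet>", "<snippet>")
bb.utils.better_exec(code, {}, text=snippet, realfile="<snippet>")
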
@@ -541,21 +533,6 @@ def sha256_file(filename):
s.update(line)
return s.hexdigest()
def sha1_file(filename):
"""
Return the hex string representation of the SHA1 checksum of the filename
"""
try:
import hashlib
except ImportError:
return None
s = hashlib.sha1()
with open(filename, "rb") as f:
for line in f:
s.update(line)
return s.hexdigest()
def preserved_envvars_exported():
"""Variables which are taken from the environment and placed in and exported
from the metadata"""
@@ -568,7 +545,6 @@ def preserved_envvars_exported():
'SHELL',
'TERM',
'USER',
'LC_ALL',
]
def preserved_envvars():
@@ -596,12 +572,6 @@ def filter_environment(good_vars):
os.unsetenv(key)
del os.environ[key]
# If we spawn a python process, we need to have a UTF-8 locale, else python's file
# access methods will use ascii. You can't change that mode once the interpreter is
# started so we have to ensure a locale is set. Ideally we'd use C.UTF-8 but not all
# distros support that and we need to set something.
os.environ["LC_ALL"] = "en_US.UTF-8"
if removed_vars:
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
@@ -651,7 +621,7 @@ def build_environment(d):
"""
import bb.data
for var in bb.data.keys(d):
export = d.getVarFlag(var, "export", False)
export = d.getVarFlag(var, "export")
if export:
os.environ[var] = d.getVar(var, True) or ""
@@ -830,7 +800,7 @@ def copyfile(src, dest, newmtime = None, sstat = None):
if not sstat:
sstat = os.lstat(src)
except Exception as e:
logger.warning("copyfile: stat of %s failed (%s)" % (src, e))
logger.warn("copyfile: stat of %s failed (%s)" % (src, e))
return False
destexists = 1
@@ -857,7 +827,7 @@ def copyfile(src, dest, newmtime = None, sstat = None):
#os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
return os.lstat(dest)
except Exception as e:
logger.warning("copyfile: failed to create symlink %s to %s (%s)" % (dest, target, e))
logger.warn("copyfile: failed to create symlink %s to %s (%s)" % (dest, target, e))
return False
if stat.S_ISREG(sstat[stat.ST_MODE]):
@@ -872,7 +842,7 @@ def copyfile(src, dest, newmtime = None, sstat = None):
shutil.copyfile(src, dest + "#new")
os.rename(dest + "#new", dest)
except Exception as e:
logger.warning("copyfile: copy %s to %s failed (%s)" % (src, dest, e))
logger.warn("copyfile: copy %s to %s failed (%s)" % (src, dest, e))
return False
finally:
if srcchown:
@@ -883,13 +853,13 @@ def copyfile(src, dest, newmtime = None, sstat = None):
#we don't yet handle special, so we need to fall back to /bin/mv
a = getstatusoutput("/bin/cp -f " + "'" + src + "' '" + dest + "'")
if a[0] != 0:
logger.warning("copyfile: failed to copy special file %s to %s (%s)" % (src, dest, a))
logger.warn("copyfile: failed to copy special file %s to %s (%s)" % (src, dest, a))
return False # failure
try:
os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
except Exception as e:
logger.warning("copyfile: failed to chown/chmod %s (%s)" % (dest, e))
logger.warn("copyfile: failed to chown/chmod %s (%s)" % (dest, e))
return False
if newmtime:
@@ -936,24 +906,6 @@ def to_boolean(string, default=None):
raise ValueError("Invalid value for to_boolean: %s" % string)
def contains(variable, checkvalues, truevalue, falsevalue, d):
"""Check if a variable contains all the values specified.
Arguments:
variable -- the variable name. This will be fetched and expanded (using
d.getVar(variable, True)) and then split into a set().
checkvalues -- if this is a string it is split on whitespace into a set(),
otherwise coerced directly into a set().
truevalue -- the value to return if checkvalues is a subset of variable.
falsevalue -- the value to return if variable is empty or if checkvalues is
not a subset of variable.
d -- the data store.
"""
val = d.getVar(variable, True)
if not val:
return falsevalue
@@ -962,7 +914,7 @@ def contains(variable, checkvalues, truevalue, falsevalue, d):
checkvalues = set(checkvalues.split())
else:
checkvalues = set(checkvalues)
if checkvalues.issubset(val):
if checkvalues.issubset(val):
return truevalue
return falsevalue
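A short usage sketch of contains() as documented above (it assumes bitbake's lib/ directory is on sys.path; bb.data.init() is used only to build a throwaway datastore):

# Usage sketch: every word in the check string must be present in the
# variable for the true value to be returned.
import bb.data
import bb.utils

d = bb.data.init()
d.setVar("DISTRO_FEATURES", "x11 opengl systemd")

bb.utils.contains("DISTRO_FEATURES", "x11 opengl", "present", "absent", d)   # -> "present"
bb.utils.contains("DISTRO_FEATURES", "x11 wayland", "present", "absent", d)  # -> "absent"
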
@@ -1188,7 +1140,7 @@ def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
if in_var.endswith('()'):
if full_value.count('{') - full_value.count('}') >= 0:
continue
full_value = full_value[:-1]
full_value = full_value[:-1]
if handle_var_end():
updated = True
checkspc = True
@@ -1433,59 +1385,3 @@ def ioprio_set(who, cls, value):
raise ValueError("Unable to set ioprio, syscall returned %s" % rc)
else:
bb.warn("Unable to set IO Prio for arch %s" % _unamearch)
def set_process_name(name):
from ctypes import cdll, byref, create_string_buffer
# This is nice to have for debugging, not essential
try:
libc = cdll.LoadLibrary('libc.so.6')
buff = create_string_buffer(len(name)+1)
buff.value = name
libc.prctl(15, byref(buff), 0, 0, 0)
except:
pass
# export common proxies variables from datastore to environment
def export_proxies(d):
import os
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY']
exported = False
for v in variables:
if v in os.environ.keys():
exported = True
else:
v_proxy = d.getVar(v, True)
if v_proxy is not None:
os.environ[v] = v_proxy
exported = True
return exported
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
logger.debug('Loading plugin %s' % name)
fp, pathname, description = imp.find_module(name, [pluginpath])
try:
return imp.load_module(name, fp, pathname, description)
finally:
if fp:
fp.close()
logger.debug('Loading plugins from %s...' % pluginpath)
expanded = (glob.glob(os.path.join(pluginpath, '*' + ext))
for ext in python_extensions)
files = itertools.chain.from_iterable(expanded)
names = set(os.path.splitext(os.path.basename(fn))[0] for fn in files)
for name in names:
if name != '__init__':
plugin = load_plugin(name)
if hasattr(plugin, 'plugin_init'):
obj = plugin.plugin_init(plugins)
plugins.append(obj or plugin)
else:
plugins.append(plugin)


@@ -11,7 +11,14 @@ from bs4.builder import (
)
from bs4.element import NamespacedAttribute
import html5lib
try:
# html5lib >= 0.99999999/1.0b9
from html5lib.treebuilders import base as treebuildersbase
except ImportError:
# html5lib <= 0.9999999/1.0b8
from html5lib.treebuilders import _base as treebuildersbase
from html5lib.constants import namespaces
from bs4.element import (
Comment,
Doctype,
@@ -54,7 +61,7 @@ class HTML5TreeBuilder(HTMLTreeBuilder):
return u'<html><head></head><body>%s</body></html>' % fragment
class TreeBuilderForHtml5lib(html5lib.treebuilders._base.TreeBuilder):
class TreeBuilderForHtml5lib(treebuildersbase.TreeBuilder):
def __init__(self, soup, namespaceHTMLElements):
self.soup = soup
@@ -92,7 +99,7 @@ class TreeBuilderForHtml5lib(html5lib.treebuilders._base.TreeBuilder):
return self.soup
def getFragment(self):
return html5lib.treebuilders._base.TreeBuilder.getFragment(self).element
return treebuildersbase.TreeBuilder.getFragment(self).element
class AttrList(object):
def __init__(self, element):
@@ -115,9 +122,9 @@ class AttrList(object):
return name in list(self.attrs.keys())
class Element(html5lib.treebuilders._base.Node):
class Element(treebuildersbase.Node):
def __init__(self, element, soup, namespace):
html5lib.treebuilders._base.Node.__init__(self, element.name)
treebuildersbase.Node.__init__(self, element.name)
self.element = element
self.soup = soup
self.namespace = namespace
@@ -277,7 +284,7 @@ class Element(html5lib.treebuilders._base.Node):
class TextNode(Element):
def __init__(self, element, soup):
html5lib.treebuilders._base.Node.__init__(self, None)
treebuildersbase.Node.__init__(self, None)
self.element = element
self.soup = soup


@@ -231,14 +231,6 @@ class PRTable(object):
datainfo.append(col)
return (metainfo, datainfo)
def dump_db(self, fd):
writeCount = 0
for line in self.conn.iterdump():
writeCount = writeCount + len(line) + 1
fd.write(line)
fd.write('\n')
return writeCount
class PRData(object):
"""Object representing the PR database"""
def __init__(self, filename, nohist=True):


@@ -4,7 +4,6 @@ from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import threading
import Queue
import socket
import StringIO
try:
import sqlite3
@@ -60,10 +59,12 @@ class PRServer(SimpleXMLRPCServer):
self.register_function(self.quit, "quit")
self.register_function(self.ping, "ping")
self.register_function(self.export, "export")
self.register_function(self.dump_db, "dump_db")
self.register_function(self.importone, "importone")
self.register_introspection_functions()
self.db = prserv.db.PRData(self.dbfile)
self.table = self.db["PRMAIN"]
self.requestqueue = Queue.Queue()
self.handlerthread = threading.Thread(target = self.process_request_thread)
self.handlerthread.daemon = False
@@ -78,8 +79,6 @@ class PRServer(SimpleXMLRPCServer):
# 60 iterations between syncs or sync if dirty every ~30 seconds
iterations_between_sync = 60
bb.utils.set_process_name("PRServ Handler")
while not self.quit:
try:
(request, client_address) = self.requestqueue.get(True, 30)
@@ -99,13 +98,11 @@ class PRServer(SimpleXMLRPCServer):
self.table.sync_if_dirty()
def sigint_handler(self, signum, stack):
if self.table:
self.table.sync()
self.table.sync()
def sigterm_handler(self, signum, stack):
if self.table:
self.table.sync()
self.quit=True
self.table.sync()
raise SystemExit
def process_request(self, request, client_address):
self.requestqueue.put((request, client_address))
@@ -117,26 +114,6 @@ class PRServer(SimpleXMLRPCServer):
logger.error(str(exc))
return None
def dump_db(self):
"""
Returns a script (string) that reconstructs the state of the
entire database at the time this function is called. The script
language is defined by the backing database engine, which is a
function of server configuration.
Returns None if the database engine does not support dumping to
script or if some other error is encountered in processing.
"""
buff = StringIO.StringIO()
try:
self.table.sync()
self.table.dump_db(buff)
return buff.getvalue()
except Exception as exc:
logger.error(str(exc))
return None
finally:
buff.close()
def importone(self, version, pkgarch, checksum, value):
return self.table.importone(version, pkgarch, checksum, value)
@@ -164,12 +141,6 @@ class PRServer(SimpleXMLRPCServer):
self.quit = False
self.timeout = 0.5
bb.utils.set_process_name("PRServ")
# DB connection must be created after all forks
self.db = prserv.db.PRData(self.dbfile)
self.table = self.db["PRMAIN"]
logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
(self.dbfile, self.host, self.port, str(os.getpid())))
@@ -238,12 +209,13 @@ class PRServer(SimpleXMLRPCServer):
def cleanup_handles(self):
signal.signal(signal.SIGINT, self.sigint_handler)
signal.signal(signal.SIGTERM, self.sigterm_handler)
os.umask(0)
os.chdir("/")
sys.stdout.flush()
sys.stderr.flush()
si = open('/dev/null', 'r')
so = open(self.logfile, 'a+')
si = file('/dev/null', 'r')
so = file(self.logfile, 'a+')
se = so
os.dup2(si.fileno(),sys.stdin.fileno())
os.dup2(so.fileno(),sys.stdout.fileno())
@@ -263,7 +235,7 @@ class PRServer(SimpleXMLRPCServer):
# write pidfile
pid = str(os.getpid())
pf = open(self.pidfile, 'w')
pf = file(self.pidfile, 'w')
pf.write("%s\n" % pid)
pf.close()
@@ -310,9 +282,6 @@ class PRServerConnection(object):
def export(self,version=None, pkgarch=None, checksum=None, colinfo=True):
return self.connection.export(version, pkgarch, checksum, colinfo)
def dump_db(self):
return self.connection.dump_db()
def importone(self, version, pkgarch, checksum, value):
return self.connection.importone(version, pkgarch, checksum, value)
@@ -323,7 +292,7 @@ def start_daemon(dbfile, host, port, logfile):
ip = socket.gethostbyname(host)
pidfile = PIDPREFIX % (ip, port)
try:
pf = open(pidfile,'r')
pf = file(pidfile,'r')
pid = int(pf.readline().strip())
pf.close()
except IOError:
@@ -350,7 +319,7 @@ def stop_daemon(host, port):
ip = socket.gethostbyname(host)
pidfile = PIDPREFIX % (ip, port)
try:
pf = open(pidfile,'r')
pf = file(pidfile,'r')
pid = int(pf.readline().strip())
pf.close()
except IOError:

View File

@@ -18,6 +18,7 @@
from django.conf.urls import patterns, include, url
from django.views.generic import RedirectView
urlpatterns = patterns('bldcollector.views',
# landing point for pushing a bitbake_eventlog.json file to this toaster instance


@@ -37,12 +37,11 @@ class BitbakeController(object):
How the server is started and acquired is outside the scope of this class
"""
def __init__(self, be):
self.connection = bb.server.xmlrpc._create_server(be.bbaddress,
int(be.bbport))[0]
def __init__(self, connection):
self.connection = connection
def _runCommand(self, command):
result, error = self.connection.runCommand(command)
result, error = self.connection.connection.runCommand(command)
if error:
raise Exception(error)
return result
@@ -53,20 +52,11 @@ class BitbakeController(object):
def setVariable(self, name, value):
return self._runCommand(["setVariable", name, value])
def getVariable(self, name):
return self._runCommand(["getVariable", name])
def triggerEvent(self, event):
return self._runCommand(["triggerEvent", event])
def build(self, targets, task = None):
if task is None:
task = "build"
return self._runCommand(["buildTargets", targets, task])
def forceShutDown(self):
return self._runCommand(["stateForceShutdown"])
def getBuildEnvironmentController(**kwargs):
@@ -80,10 +70,13 @@ def getBuildEnvironmentController(**kwargs):
"""
from localhostbecontroller import LocalhostBEController
from sshbecontroller import SSHBEController
be = BuildEnvironment.objects.filter(Q(**kwargs))[0]
if be.betype == BuildEnvironment.TYPE_LOCAL:
return LocalhostBEController(be)
elif be.betype == BuildEnvironment.TYPE_SSH:
return SSHBEController(be)
else:
raise Exception("FIXME: Implement BEC for type %s" % str(be.betype))
@@ -106,6 +99,9 @@ class BuildEnvironmentController(object):
on the local machine, with the "build/" directory under the "poky/" source checkout directory.
Bash is expected to be available.
* SSH controller will run the Toaster BE on a remote machine, where the current user
can connect without a password (set up with either ssh-agent or passphrase-less key authentication)
"""
def __init__(self, be):
""" Takes a BuildEnvironment object as parameter that points to the settings of the BE.
@@ -113,25 +109,89 @@ class BuildEnvironmentController(object):
self.be = be
self.connection = None
def setLayers(self, bitbake, ls):
@staticmethod
def _updateBBLayers(bblayerconf, layerlist):
conflines = open(bblayerconf, "r").readlines()
bblayerconffile = open(bblayerconf, "w")
skip = 0
for i in xrange(len(conflines)):
if skip > 0:
skip =- 1
continue
if conflines[i].startswith("# line added by toaster"):
skip = 1
else:
bblayerconffile.write(conflines[i])
bblayerconffile.write("# line added by toaster build control\nBBLAYERS = \"" + " ".join(layerlist) + "\"")
bblayerconffile.close()
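To make the rewrite above concrete, a small hypothetical illustration (the paths are invented, and the import assumes a configured Toaster/Django environment with bitbake/lib/toaster on sys.path):

# Hypothetical illustration of _updateBBLayers(); paths are invented.
import os, tempfile
from bldcontrol.bbcontroller import BuildEnvironmentController

conf = os.path.join(tempfile.mkdtemp(), "bblayers.conf")
with open(conf, "w") as f:
    f.write('BBPATH = "${TOPDIR}"\n')

BuildEnvironmentController._updateBBLayers(conf,
        ["/srv/layers/poky/meta", "/srv/layers/meta-oe"])
# Any earlier "# line added by toaster" block is dropped and the file now
# ends with:
#     # line added by toaster build control
#     BBLAYERS = "/srv/layers/poky/meta /srv/layers/meta-oe"
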
def writeConfFile(self, variable_list = None, raw = None):
""" Writes a configuration file in the build directory. Override with buildenv-specific implementation. """
raise Exception("FIXME: Must override to actually write a configuration file")
def startBBServer(self):
""" Starts a BB server with Toaster toasterui set up to record the builds, an no controlling UI.
After this method executes, self.be bbaddress/bbport MUST point to a running and free server,
and the bbstate MUST be updated to "started".
"""
raise Exception("FIXME: Must override in order to actually start the BB server")
def stopBBServer(self):
""" Stops the currently running BB server.
The bbstate MUST be updated to "stopped".
self.connection must be none.
"""
raise Exception("FIXME: Must override stoBBServer")
def setLayers(self, bbs, ls):
""" Checks-out bitbake executor and layers from git repositories.
Sets the layer variables in the config file, after validating local layer paths.
bitbake must be a single BRBitbake instance
The bitbakes must be a 1-length list of BRBitbake
The layer paths must be in a list of BRLayer object
a word of attention: by convention, the first layer for any build will be poky!
"""
raise NotImplementedError("FIXME: Must override setLayers")
raise Exception("FIXME: Must override setLayers")
def getBBController(self):
""" returns a BitbakeController to an already started server; this is the point where the server
starts if needed; or reconnects to the server if we can
"""
if not self.connection:
self.startBBServer()
self.be.lock = BuildEnvironment.LOCK_RUNNING
self.be.save()
server = bb.server.xmlrpc.BitBakeXMLRPCClient()
server.initServer()
server.saveConnectionDetails("%s:%s" % (self.be.bbaddress, self.be.bbport))
self.connection = server.establishConnection([])
self.be.bbtoken = self.connection.transport.connection_token
self.be.save()
return BitbakeController(self.connection)
def getArtifact(self, path):
""" This call returns an artifact identified by the 'path'. How 'path' is interpreted as
up to the implementing BEC. The return MUST be a REST URL where a GET will actually return
the content of the artifact, e.g. for use as a "download link" in a web UI.
"""
raise NotImplementedError("Must return the REST URL of the artifact")
raise Exception("Must return the REST URL of the artifact")
def release(self):
""" This stops the server and releases any resources. After this point, all resources
are un-available for further reference
"""
raise Exception("Must override BE release")
def triggerBuild(self, bitbake, layers, variables, targets):
raise NotImplementedError("Must override BE release")
raise Exception("Must override BE release")
class ShellCmdException(Exception):
pass


@@ -32,7 +32,7 @@ import subprocess
from toastermain import settings
from bbcontroller import BuildEnvironmentController, ShellCmdException, BuildSetupException, BitbakeController
from bbcontroller import BuildEnvironmentController, ShellCmdException, BuildSetupException
import logging
logger = logging.getLogger("toaster")
@@ -48,17 +48,16 @@ class LocalhostBEController(BuildEnvironmentController):
def __init__(self, be):
super(LocalhostBEController, self).__init__(be)
self.dburl = settings.getDATABASE_URL()
self.pokydirname = None
self.islayerset = False
def _shellcmd(self, command, cwd=None, nowait=False):
def _shellcmd(self, command, cwd = None):
if cwd is None:
cwd = self.be.sourcedir
logger.debug("lbc_shellcmmd: (%s) %s" % (cwd, command))
p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if nowait:
return
(out,err) = p.communicate()
p.wait()
if p.returncode:
@@ -66,12 +65,121 @@ class LocalhostBEController(BuildEnvironmentController):
err = "command: %s \n%s" % (command, out)
else:
err = "command: %s \n%s" % (command, err)
logger.warning("localhostbecontroller: shellcmd error %s" % err)
logger.warn("localhostbecontroller: shellcmd error %s" % err)
raise ShellCmdException(err)
else:
logger.debug("localhostbecontroller: shellcmd success")
return out
def _setupBE(self):
assert self.pokydirname and os.path.exists(self.pokydirname)
path = self.be.builddir
if not path:
raise Exception("Invalid path creation specified.")
if not os.path.exists(path):
os.makedirs(path, 0755)
self._shellcmd("bash -c \"source %s/oe-init-build-env %s\"" % (self.pokydirname, path))
# delete the templateconf.cfg; it may come from an unsupported layer configuration
os.remove(os.path.join(path, "conf/templateconf.cfg"))
def writeConfFile(self, file_name, variable_list = None, raw = None):
filepath = os.path.join(self.be.builddir, file_name)
with open(filepath, "w") as conffile:
if variable_list is not None:
for i in variable_list:
conffile.write("%s=\"%s\"\n" % (i.name, i.value))
if raw is not None:
conffile.write(raw)
def startBBServer(self):
assert self.pokydirname and os.path.exists(self.pokydirname)
assert self.islayerset
# find our own toasterui listener/bitbake
from toaster.bldcontrol.management.commands.loadconf import _reduce_canon_path
own_bitbake = _reduce_canon_path(os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../../bin/bitbake"))
assert os.path.exists(own_bitbake) and os.path.isfile(own_bitbake)
logger.debug("localhostbecontroller: running the listener at %s" % own_bitbake)
toaster_ui_log_filepath = os.path.join(self.be.builddir, "toaster_ui.log")
# get the file length; we need to detect the _last_ start of the toaster UI, not the first
toaster_ui_log_filelength = 0
if os.path.exists(toaster_ui_log_filepath):
with open(toaster_ui_log_filepath, "w") as f:
f.seek(0, 2) # jump to the end
toaster_ui_log_filelength = f.tell()
cmd = "bash -c \"source %s/oe-init-build-env %s 2>&1 >toaster_server.log && bitbake --read %s/conf/toaster-pre.conf --postread %s/conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:0 2>&1 >>toaster_server.log \"" % (self.pokydirname, self.be.builddir, self.be.builddir, self.be.builddir)
port = "-1"
logger.debug("localhostbecontroller: starting builder \n%s\n" % cmd)
cmdoutput = self._shellcmd(cmd)
with open(self.be.builddir + "/toaster_server.log", "r") as f:
for i in f.readlines():
if i.startswith("Bitbake server address"):
port = i.split(" ")[-1]
logger.debug("localhostbecontroller: Found bitbake server port %s" % port)
cmd = "bash -c \"source %s/oe-init-build-env-memres -1 %s && DATABASE_URL=%s %s --observe-only -u toasterui --remote-server=0.0.0.0:-1 -t xmlrpc\"" % (self.pokydirname, self.be.builddir, self.dburl, own_bitbake)
with open(toaster_ui_log_filepath, "a+") as f:
p = subprocess.Popen(cmd, cwd = self.be.builddir, shell=True, stdout=f, stderr=f)
def _toaster_ui_started(filepath, filepos = 0):
if not os.path.exists(filepath):
return False
with open(filepath, "r") as f:
f.seek(filepos)
for line in f:
if line.startswith("NOTE: ToasterUI waiting for events"):
return True
return False
retries = 0
started = False
while not started and retries < 50:
started = _toaster_ui_started(toaster_ui_log_filepath, toaster_ui_log_filelength)
import time
logger.debug("localhostbecontroller: Waiting bitbake server to start")
time.sleep(0.5)
retries += 1
if not started:
toaster_ui_log = open(os.path.join(self.be.builddir, "toaster_ui.log"), "r").read()
toaster_server_log = open(os.path.join(self.be.builddir, "toaster_server.log"), "r").read()
raise BuildSetupException("localhostbecontroller: Bitbake server did not start in 25 seconds, aborting (Error: '%s' '%s')" % (toaster_ui_log, toaster_server_log))
logger.debug("localhostbecontroller: Started bitbake server")
while port == "-1":
# the port specification is "autodetect"; read the bitbake.lock file
with open("%s/bitbake.lock" % self.be.builddir, "r") as f:
for line in f.readlines():
if ":" in line:
port = line.split(":")[1].strip()
logger.debug("localhostbecontroller: Autodetected bitbake port %s", port)
break
assert self.be.sourcedir and os.path.exists(self.be.builddir)
self.be.bbaddress = "localhost"
self.be.bbport = port
self.be.bbstate = BuildEnvironment.SERVER_STARTED
self.be.save()
def stopBBServer(self):
assert self.pokydirname and os.path.exists(self.pokydirname)
assert self.islayerset
self._shellcmd("bash -c \"source %s/oe-init-build-env %s && %s source toaster stop\"" %
(self.pokydirname, self.be.builddir, (lambda: "" if self.be.bbtoken is None else "BBTOKEN=%s" % self.be.bbtoken)()))
self.be.bbstate = BuildEnvironment.SERVER_STOPPED
self.be.save()
logger.debug("localhostbecontroller: Stopped bitbake server")
def getGitCloneDirectory(self, url, branch):
"""Construct unique clone directory name out of url and branch."""
if branch != "HEAD":
@@ -85,22 +193,22 @@ class LocalhostBEController(BuildEnvironmentController):
return local_checkout_path
def setLayers(self, bitbake, layers, targets):
def setLayers(self, bitbakes, layers, targets):
""" a word of attention: by convention, the first layer for any build will be poky! """
assert self.be.sourcedir is not None
assert len(bitbakes) == 1
# set layers in the layersource
# 1. get a list of repos with branches, and map dirpaths for each layer
gitrepos = {}
gitrepos[(bitbake.giturl, bitbake.commit)] = []
gitrepos[(bitbake.giturl, bitbake.commit)].append( ("bitbake", bitbake.dirpath) )
gitrepos[(bitbakes[0].giturl, bitbakes[0].commit)] = []
gitrepos[(bitbakes[0].giturl, bitbakes[0].commit)].append( ("bitbake", bitbakes[0].dirpath) )
for layer in layers:
# We don't need to git clone the layer for the CustomImageRecipe
# as it's generated by us later on if needed
if CustomImageRecipe.LAYER_NAME in layer.name:
# we don't process local URLs
if layer.giturl.startswith("file://"):
continue
if not (layer.giturl, layer.commit) in gitrepos:
gitrepos[(layer.giturl, layer.commit)] = []
@@ -168,7 +276,7 @@ class LocalhostBEController(BuildEnvironmentController):
# make sure we have a working bitbake
if not os.path.exists(os.path.join(self.pokydirname, 'bitbake')):
logger.debug("localhostbecontroller: checking bitbake into the poky dirname %s " % self.pokydirname)
self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')))
self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbakes[0].commit, bitbakes[0].giturl, os.path.join(self.pokydirname, 'bitbake')))
# verify our repositories
for name, dirpath in gitrepos[(giturl, commit)]:
@@ -182,13 +290,23 @@ class LocalhostBEController(BuildEnvironmentController):
logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist))
# 5. create custom layer and add custom recipes to it
layerpath = os.path.join(self.be.builddir,
CustomImageRecipe.LAYER_NAME)
# 4. configure the build environment, so we have a conf/bblayers.conf
assert self.pokydirname is not None
self._setupBE()
# 5. update the bblayers.conf
bblayerconf = os.path.join(self.be.builddir, "conf/bblayers.conf")
if not os.path.exists(bblayerconf):
raise BuildSetupException("BE is not consistent: bblayers.conf file missing at %s" % bblayerconf)
# 6. create custom layer and add custom recipes to it
layerpath = os.path.join(self.be.sourcedir, "_meta-toaster-custom")
if os.path.isdir(layerpath):
shutil.rmtree(layerpath) # remove leftovers from previous builds
for target in targets:
try:
customrecipe = CustomImageRecipe.objects.get(name=target.target,
project=bitbake.req.project)
project=bitbakes[0].req.project)
except CustomImageRecipe.DoesNotExist:
continue # not a custom recipe, skip
@@ -204,129 +322,60 @@ class LocalhostBEController(BuildEnvironmentController):
with open(config, "w") as conf:
conf.write('BBPATH .= ":${LAYERDIR}"\nBBFILES += "${LAYERDIR}/recipes/*.bb"\n')
# Update the Layer_Version dirpath that has our base_recipe in
# to be able to read the base recipe to then generate the
# custom recipe.
br_layer_base_recipe = layers.get(
layer_version=customrecipe.base_recipe.layer_version)
br_layer_base_dirpath = \
os.path.join(self.be.sourcedir,
self.getGitCloneDirectory(
br_layer_base_recipe.giturl,
br_layer_base_recipe.commit),
customrecipe.base_recipe.layer_version.dirpath
)
customrecipe.base_recipe.layer_version.dirpath = \
br_layer_base_dirpath
customrecipe.base_recipe.layer_version.save()
# create recipe
recipe_path = \
os.path.join(layerpath, "recipes", "%s.bb" % target.target)
with open(recipe_path, "w") as recipef:
recipef.write(customrecipe.generate_recipe_file_contents())
# Update the layer and recipe objects
customrecipe.layer_version.dirpath = layerpath
customrecipe.layer_version.save()
customrecipe.file_path = recipe_path
customrecipe.save()
recipe = os.path.join(layerpath, "recipes", "%s.bb" % target.target)
with open(recipe, "w") as recipef:
recipef.write("require %s\n" % customrecipe.base_recipe.recipe.file_path)
packages = [pkg.name for pkg in customrecipe.packages.all()]
if packages:
recipef.write('IMAGE_INSTALL = "%s"\n' % ' '.join(packages))
# create *Layer* objects needed for build machinery to work
BRLayer.objects.get_or_create(req=target.req,
name=layer.name,
dirpath=layerpath,
layer = Layer.objects.get_or_create(name="Toaster Custom layer",
summary="Layer for custom recipes",
vcs_url="file://%s" % layerpath)[0]
breq = target.req
lver = Layer_Version.objects.get_or_create(project=breq.project, layer=layer,
dirpath=layerpath, build=breq.build)[0]
ProjectLayer.objects.get_or_create(project=breq.project, layercommit=lver,
optional=False)
BRLayer.objects.get_or_create(req=breq, name=layer.name, dirpath=layerpath,
giturl="file://%s" % layerpath)
if os.path.isdir(layerpath):
layerlist.append(layerpath)
BuildEnvironmentController._updateBBLayers(bblayerconf, layerlist)
self.islayerset = True
return layerlist
return True
def readServerLogFile(self):
return open(os.path.join(self.be.builddir, "toaster_server.log"), "r").read()
def release(self):
assert self.be.sourcedir and os.path.exists(self.be.builddir)
import shutil
shutil.rmtree(os.path.join(self.be.sourcedir, "build"))
assert not os.path.exists(self.be.builddir)
def triggerBuild(self, bitbake, layers, variables, targets, brbe):
layers = self.setLayers(bitbake, layers, targets)
# init build environment from the clone
builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id)
oe_init = os.path.join(self.pokydirname, 'oe-init-build-env')
# init build environment
self._shellcmd("bash -c 'source %s %s'" % (oe_init, builddir),
self.be.sourcedir)
def triggerBuild(self, bitbake, layers, variables, targets):
# set up the build environment with the needed layers
self.setLayers(bitbake, layers, targets)
self.writeConfFile("conf/toaster-pre.conf", variables)
self.writeConfFile("conf/toaster.conf", raw = "INHERIT+=\"toaster buildhistory\"")
# update bblayers.conf
bblconfpath = os.path.join(builddir, "conf/bblayers.conf")
conflines = open(bblconfpath, "r").readlines()
skip = False
with open(bblconfpath, 'w') as bblayers:
for line in conflines:
if line.startswith("# line added by toaster"):
skip = True
continue
if skip:
skip = False
else:
bblayers.write(line)
# get the bb server running with the build req id and build env id
bbctrl = self.getBBController()
bblayers.write('# line added by toaster build control\n'
'BBLAYERS = "%s"' % ' '.join(layers))
# trigger the build command
task = reduce(lambda x, y: x if len(y)== 0 else y, map(lambda y: y.task, targets))
if len(task) == 0:
task = None
# write configuration file
confpath = os.path.join(builddir, 'conf/toaster.conf')
with open(confpath, 'w') as conf:
for var in variables:
conf.write('%s="%s"\n' % (var.name, var.value))
conf.write('INHERIT+="toaster buildhistory"')
bbctrl.build(list(map(lambda x:x.target, targets)), task)
# run bitbake server from the clone
bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake')
self._shellcmd('bash -c \"source %s %s; BITBAKE_UI="" %s --read %s '
'--server-only -t xmlrpc -B 0.0.0.0:0\"' % (oe_init,
builddir, bitbake, confpath), self.be.sourcedir)
logger.debug("localhostbecontroller: Build launched, exiting. Follow build logs at %s/toaster_ui.log" % self.be.builddir)
# read port number from bitbake.lock
self.be.bbport = ""
bblock = os.path.join(builddir, 'bitbake.lock')
with open(bblock) as fplock:
for line in fplock:
if ":" in line:
self.be.bbport = line.split(":")[-1].strip()
logger.debug("localhostbecontroller: bitbake port %s", self.be.bbport)
break
if not self.be.bbport:
raise BuildSetupException("localhostbecontroller: can't read bitbake port from %s" % bblock)
self.be.bbaddress = "localhost"
self.be.bbstate = BuildEnvironment.SERVER_STARTED
self.be.lock = BuildEnvironment.LOCK_RUNNING
self.be.save()
bbtargets = ''
for target in targets:
task = target.task
if task:
if not task.startswith('do_'):
task = 'do_' + task
task = ':%s' % task
bbtargets += '%s%s ' % (target.target, task)
# run build with local bitbake. stop the server after the build.
log = os.path.join(builddir, 'toaster_ui.log')
local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')),
'bitbake')
self._shellcmd(['bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:-1" '
'%s %s -u toasterui --token="" >>%s 2>&1;'
'BITBAKE_UI="" BBSERVER=0.0.0.0:-1 %s -m)&\"' \
% (brbe, local_bitbake, bbtargets, log, bitbake)],
builddir, nowait=True)
logger.debug('localhostbecontroller: Build launched, exiting. '
'Follow build logs at %s' % log)
# disconnect from the server
bbctrl.disconnect()


@@ -70,11 +70,11 @@ class Command(NoArgsCommand):
return True
if len(be.sourcedir) == 0:
print("\n -- Validation: The layers checkout directory must be set.")
print "\n -- Validation: The layers checkout directory must be set."
is_changed = _update_sourcedir()
if not be.sourcedir.startswith("/"):
print("\n -- Validation: The layers checkout directory must be set to an absolute path.")
print "\n -- Validation: The layers checkout directory must be set to an absolute path."
is_changed = _update_sourcedir()
if is_changed:
@@ -87,16 +87,16 @@ class Command(NoArgsCommand):
return True
if len(be.builddir) == 0:
print("\n -- Validation: The build directory must be set.")
print "\n -- Validation: The build directory must be set."
is_changed = _update_builddir()
if not be.builddir.startswith("/"):
print("\n -- Validation: The build directory must to be set to an absolute path.")
print "\n -- Validation: The build directory must to be set to an absolute path."
is_changed = _update_builddir()
if is_changed:
print("\nBuild configuration saved")
print "\nBuild configuration saved"
be.save()
return True
@@ -104,20 +104,20 @@ class Command(NoArgsCommand):
if be.needs_import:
try:
config_file = os.environ.get('TOASTER_CONF')
print("\nImporting file: %s" % config_file)
print "\nImporting file: %s" % config_file
from loadconf import Command as LoadConfigCommand
LoadConfigCommand()._import_layer_config(config_file)
# we run lsupdates after config update
print("\nLayer configuration imported. Updating information from the layer sources, please wait.\nYou can re-update any time later by running bitbake/lib/toaster/manage.py lsupdates")
print "\nLayer configuration imported. Updating information from the layer sources, please wait.\nYou can re-update any time later by running bitbake/lib/toaster/manage.py lsupdates"
from django.core.management import call_command
call_command("lsupdates")
# we don't look for any other config files
return is_changed
except Exception as e:
print("Failure while trying to import the toaster config file %s: %s" %\
(config_file, e))
print "Failure while trying to import the toaster config file %s: %s" %\
(config_file, e)
traceback.print_exc(e)
return is_changed


@@ -1,50 +1,43 @@
from django.core.management.base import NoArgsCommand, CommandError
from django.db import transaction
from django.db.models import Q
from bldcontrol.bbcontroller import getBuildEnvironmentController
from bldcontrol.bbcontroller import ShellCmdException, BuildSetupException
from bldcontrol.models import BuildRequest, BuildEnvironment
from bldcontrol.models import BRError, BRVariable
from orm.models import Build, ToasterSetting, LogMessage, Target
from bldcontrol.bbcontroller import getBuildEnvironmentController, ShellCmdException, BuildSetupException
from bldcontrol.models import BuildRequest, BuildEnvironment, BRError, BRVariable
import os
import logging
import time
import sys
import traceback
logger = logging.getLogger("toaster")
logger = logging.getLogger("ToasterScheduler")
class Command(NoArgsCommand):
args = ""
help = "Schedules and executes build requests as possible."
"Does not return (interrupt with Ctrl-C)"
help = "Schedules and executes build requests as possible. Does not return (interrupt with Ctrl-C)"
@transaction.atomic
@transaction.commit_on_success
def _selectBuildEnvironment(self):
bec = getBuildEnvironmentController(lock = BuildEnvironment.LOCK_FREE)
bec.be.lock = BuildEnvironment.LOCK_LOCK
bec.be.save()
return bec
@transaction.atomic
@transaction.commit_on_success
def _selectBuildRequest(self):
br = BuildRequest.objects.filter(state=BuildRequest.REQ_QUEUED).first()
br = BuildRequest.objects.filter(state = BuildRequest.REQ_QUEUED).order_by('pk')[0]
br.state = BuildRequest.REQ_INPROGRESS
br.save()
return br
def schedule(self):
import traceback
try:
# select the build environment and the request to build
br = self._selectBuildRequest()
if br:
br.state = BuildRequest.REQ_INPROGRESS
br.save()
else:
br = None
try:
# select the build environment and the request to build
br = self._selectBuildRequest()
except IndexError as e:
#logger.debug("runbuilds: No build request")
return
try:
bec = self._selectBuildEnvironment()
except IndexError as e:
@@ -54,17 +47,17 @@ class Command(NoArgsCommand):
logger.debug("runbuilds: No build env")
return
logger.debug("runbuilds: starting build %s, environment %s" % \
(str(br).decode('utf-8'), bec.be))
logger.debug("runbuilds: starting build %s, environment %s" % (br, bec.be))
# write the build identification variable
BRVariable.objects.create(req = br, name="TOASTER_BRBE", value="%d:%d" % (br.pk, bec.be.pk))
# let the build request know where it is being executed
br.environment = bec.be
br.save()
# this triggers an async build
bec.triggerBuild(br.brbitbake, br.brlayer_set.all(),
br.brvariable_set.all(), br.brtarget_set.all(),
"%d:%d" % (br.pk, bec.be.pk))
bec.triggerBuild(br.brbitbake_set.all(), br.brlayer_set.all(), br.brvariable_set.all(), br.brtarget_set.all())
except Exception as e:
logger.error("runbuilds: Error launching build %s" % e)
@@ -95,41 +88,29 @@ class Command(NoArgsCommand):
def cleanup(self):
from django.utils import timezone
from datetime import timedelta
# environments locked for more than 30 seconds
# they should be unlocked
BuildEnvironment.objects.filter(
Q(buildrequest__state__in=[BuildRequest.REQ_FAILED,
BuildRequest.REQ_COMPLETED,
BuildRequest.REQ_CANCELLING]) &
Q(lock=BuildEnvironment.LOCK_LOCK) &
Q(updated__lt=timezone.now() - timedelta(seconds = 30))
).update(lock=BuildEnvironment.LOCK_FREE)
# environments locked for more than 30 seconds - they should be unlocked
BuildEnvironment.objects.filter(buildrequest__state__in=[BuildRequest.REQ_FAILED, BuildRequest.REQ_COMPLETED]).filter(lock=BuildEnvironment.LOCK_LOCK).filter(updated__lt = timezone.now() - timedelta(seconds = 30)).update(lock = BuildEnvironment.LOCK_FREE)
# update all Builds that were in progress and failed to start
for br in BuildRequest.objects.filter(
state=BuildRequest.REQ_FAILED,
build__outcome=Build.IN_PROGRESS):
# update all Builds that failed to start
for br in BuildRequest.objects.filter(state = BuildRequest.REQ_FAILED, build__outcome = Build.IN_PROGRESS):
# transpose the launch errors in ToasterExceptions
br.build.outcome = Build.FAILED
for brerror in br.brerror_set.all():
logger.debug("Saving error %s" % brerror)
LogMessage.objects.create(build=br.build,
level=LogMessage.EXCEPTION,
message=brerror.errmsg)
LogMessage.objects.create(build = br.build, level = LogMessage.EXCEPTION, message = brerror.errmsg)
br.build.save()
# we don't have a true build object here; hence, toasterui
# didn't have a chance to release the BE lock
# we don't have a true build object here; hence, toasterui didn't have a chance to release the BE lock
br.environment.lock = BuildEnvironment.LOCK_FREE
br.environment.save()
# update all BuildRequests without a build created
for br in BuildRequest.objects.filter(build = None):
br.build = Build.objects.create(project=br.project,
completed_on=br.updated,
started_on=br.created)
br.build = Build.objects.create(project = br.project, completed_on = br.updated, started_on = br.created)
br.build.outcome = Build.FAILED
try:
br.build.machine = br.brvariable_set.get(name='MACHINE').value
@@ -138,42 +119,22 @@ class Command(NoArgsCommand):
br.save()
# transpose target information
for brtarget in br.brtarget_set.all():
Target.objects.create(build=br.build,
target=brtarget.target,
task=brtarget.task)
Target.objects.create(build=br.build, target=brtarget.target, task=brtarget.task)
# transpose the launch errors in ToasterExceptions
for brerror in br.brerror_set.all():
LogMessage.objects.create(build=br.build,
level=LogMessage.EXCEPTION,
message=brerror.errmsg)
LogMessage.objects.create(build = br.build, level = LogMessage.EXCEPTION, message = brerror.errmsg)
br.build.save()
# Make sure the LOCK is removed for builds which have been fully
# cancelled
for br in BuildRequest.objects.filter(
Q(build__outcome=Build.CANCELLED) &
Q(state=BuildRequest.REQ_CANCELLING) &
~Q(environment=None)):
br.environment.lock = BuildEnvironment.LOCK_FREE
br.environment.save()
pass
def handle_noargs(self, **options):
while True:
try:
self.cleanup()
except Exception as e:
logger.warning("runbuilds: cleanup exception %s" % str(e))
try:
self.archive()
except Exception as e:
logger.warning("runbuilds: archive exception %s" % str(e))
try:
self.schedule()
except Exception as e:
logger.warning("runbuilds: schedule exception %s" % str(e))
except:
pass
time.sleep(1)
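The runbuilds change above replaces order_by('pk')[0], which raises IndexError when no BuildRequest is queued, with QuerySet.first(), which returns None, and the caller now checks "if br:" instead of catching the exception; the decorators likewise move from the removed transaction.commit_on_success to transaction.atomic. A framework-free sketch, not part of the patch, of the first-or-None behaviour the new code relies on; the queue contents are hypothetical.

# Plain lists raise the same IndexError that the old order_by('pk')[0]
# lookup did on an empty queryset, so they illustrate the difference.
queued_requests = []          # hypothetical: nothing in the queue

# Old style: indexing an empty sequence raises IndexError.
try:
    br = queued_requests[0]
except IndexError:
    br = None

# New style: an explicit first-or-None expression mirrors QuerySet.first().
br = queued_requests[0] if queued_requests else None
if br:
    print("runbuilds: starting build %s" % br)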


@@ -1,113 +1,154 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(migrations.Migration):
class Migration(SchemaMigration):
dependencies = [
('orm', '0001_initial'),
]
def forwards(self, orm):
# Adding model 'BuildEnvironment'
db.create_table(u'bldcontrol_buildenvironment', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('address', self.gf('django.db.models.fields.CharField')(max_length=254)),
('betype', self.gf('django.db.models.fields.IntegerField')()),
('bbaddress', self.gf('django.db.models.fields.CharField')(max_length=254, blank=True)),
('bbport', self.gf('django.db.models.fields.IntegerField')(default=-1)),
('bbtoken', self.gf('django.db.models.fields.CharField')(max_length=126, blank=True)),
('bbstate', self.gf('django.db.models.fields.IntegerField')(default=0)),
('lock', self.gf('django.db.models.fields.IntegerField')(default=0)),
('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('updated', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
))
db.send_create_signal(u'bldcontrol', ['BuildEnvironment'])
operations = [
migrations.CreateModel(
name='BRBitbake',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('giturl', models.CharField(max_length=254)),
('commit', models.CharField(max_length=254)),
('dirpath', models.CharField(max_length=254)),
],
),
migrations.CreateModel(
name='BRError',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('errtype', models.CharField(max_length=100)),
('errmsg', models.TextField()),
('traceback', models.TextField()),
],
),
migrations.CreateModel(
name='BRLayer',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=100)),
('giturl', models.CharField(max_length=254)),
('commit', models.CharField(max_length=254)),
('dirpath', models.CharField(max_length=254)),
('layer_version', models.ForeignKey(to='orm.Layer_Version', null=True)),
],
),
migrations.CreateModel(
name='BRTarget',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('target', models.CharField(max_length=100)),
('task', models.CharField(max_length=100, null=True)),
],
),
migrations.CreateModel(
name='BRVariable',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=100)),
('value', models.TextField(blank=True)),
],
),
migrations.CreateModel(
name='BuildEnvironment',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('address', models.CharField(max_length=254)),
('betype', models.IntegerField(choices=[(0, b'local'), (1, b'ssh')])),
('bbaddress', models.CharField(max_length=254, blank=True)),
('bbport', models.IntegerField(default=-1)),
('bbtoken', models.CharField(max_length=126, blank=True)),
('bbstate', models.IntegerField(default=0, choices=[(0, b'stopped'), (1, b'started')])),
('sourcedir', models.CharField(max_length=512, blank=True)),
('builddir', models.CharField(max_length=512, blank=True)),
('lock', models.IntegerField(default=0, choices=[(0, b'free'), (1, b'lock'), (2, b'running')])),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
],
),
migrations.CreateModel(
name='BuildRequest',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('state', models.IntegerField(default=0, choices=[(0, b'created'), (1, b'queued'), (2, b'in progress'), (3, b'completed'), (4, b'failed'), (5, b'deleted'), (6, b'archive')])),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('build', models.OneToOneField(null=True, to='orm.Build')),
('environment', models.ForeignKey(to='bldcontrol.BuildEnvironment', null=True)),
('project', models.ForeignKey(to='orm.Project')),
],
),
migrations.AddField(
model_name='brvariable',
name='req',
field=models.ForeignKey(to='bldcontrol.BuildRequest'),
),
migrations.AddField(
model_name='brtarget',
name='req',
field=models.ForeignKey(to='bldcontrol.BuildRequest'),
),
migrations.AddField(
model_name='brlayer',
name='req',
field=models.ForeignKey(to='bldcontrol.BuildRequest'),
),
migrations.AddField(
model_name='brerror',
name='req',
field=models.ForeignKey(to='bldcontrol.BuildRequest'),
),
migrations.AddField(
model_name='brbitbake',
name='req',
field=models.OneToOneField(to='bldcontrol.BuildRequest'),
),
]
# Adding model 'BuildRequest'
db.create_table(u'bldcontrol_buildrequest', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('project', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['orm.Project'])),
('build', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['orm.Build'], null=True)),
('state', self.gf('django.db.models.fields.IntegerField')(default=0)),
('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('updated', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
))
db.send_create_signal(u'bldcontrol', ['BuildRequest'])
# Adding model 'BRLayer'
db.create_table(u'bldcontrol_brlayer', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('req', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['bldcontrol.BuildRequest'])),
('name', self.gf('django.db.models.fields.CharField')(max_length=100)),
('giturl', self.gf('django.db.models.fields.CharField')(max_length=254)),
('commit', self.gf('django.db.models.fields.CharField')(max_length=254)),
))
db.send_create_signal(u'bldcontrol', ['BRLayer'])
# Adding model 'BRVariable'
db.create_table(u'bldcontrol_brvariable', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('req', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['bldcontrol.BuildRequest'])),
('name', self.gf('django.db.models.fields.CharField')(max_length=100)),
('value', self.gf('django.db.models.fields.TextField')(blank=True)),
))
db.send_create_signal(u'bldcontrol', ['BRVariable'])
# Adding model 'BRTarget'
db.create_table(u'bldcontrol_brtarget', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('req', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['bldcontrol.BuildRequest'])),
('target', self.gf('django.db.models.fields.CharField')(max_length=100)),
('task', self.gf('django.db.models.fields.CharField')(max_length=100, null=True)),
))
db.send_create_signal(u'bldcontrol', ['BRTarget'])
def backwards(self, orm):
# Deleting model 'BuildEnvironment'
db.delete_table(u'bldcontrol_buildenvironment')
# Deleting model 'BuildRequest'
db.delete_table(u'bldcontrol_buildrequest')
# Deleting model 'BRLayer'
db.delete_table(u'bldcontrol_brlayer')
# Deleting model 'BRVariable'
db.delete_table(u'bldcontrol_brvariable')
# Deleting model 'BRTarget'
db.delete_table(u'bldcontrol_brtarget')
models = {
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
}
}
complete_apps = ['bldcontrol']
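The initial migration above is rewritten from South's imperative style (a SchemaMigration with db.create_table() calls plus a frozen models dictionary) into Django's built-in declarative migrations, where the schema is expressed as an operations list on a migrations.Migration. A stripped-down sketch of the declarative form, assuming Django 1.7 or later; the model name ExampleRequest is hypothetical and only illustrates the shape of the CreateModel operations used above.

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = []

    operations = [
        migrations.CreateModel(
            name='ExampleRequest',   # hypothetical model name
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False,
                                        auto_created=True, primary_key=True)),
                ('state', models.IntegerField(default=0)),
            ],
        ),
    ]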


@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('bldcontrol', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='buildenvironment',
name='betype',
field=models.IntegerField(choices=[(0, b'local')]),
),
]


@@ -0,0 +1,106 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'BuildEnvironment.sourcedir'
db.add_column(u'bldcontrol_buildenvironment', 'sourcedir',
self.gf('django.db.models.fields.CharField')(default='', max_length=512, blank=True),
keep_default=False)
# Adding field 'BuildEnvironment.builddir'
db.add_column(u'bldcontrol_buildenvironment', 'builddir',
self.gf('django.db.models.fields.CharField')(default='', max_length=512, blank=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'BuildEnvironment.sourcedir'
db.delete_column(u'bldcontrol_buildenvironment', 'sourcedir')
# Deleting field 'BuildEnvironment.builddir'
db.delete_column(u'bldcontrol_buildenvironment', 'builddir')
models = {
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
}
}
complete_apps = ['bldcontrol']
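The South migration above adds the sourcedir and builddir columns with db.add_column(); in the rewritten history those fields are simply part of the new 0001_initial CreateModel. If the same change were expressed as a separate step under built-in Django migrations, it would look roughly like the sketch below; the file does not exist in the patch and the dependency entry is hypothetical.

from __future__ import unicode_literals
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('bldcontrol', '0001_initial'),   # hypothetical ordering
    ]

    operations = [
        # Declarative equivalents of the two db.add_column() calls above.
        migrations.AddField(
            model_name='buildenvironment',
            name='sourcedir',
            field=models.CharField(default='', max_length=512, blank=True),
        ),
        migrations.AddField(
            model_name='buildenvironment',
            name='builddir',
            field=models.CharField(default='', max_length=512, blank=True),
        ),
    ]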


@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('bldcontrol', '0002_auto_20160120_1250'),
]
operations = [
migrations.AlterField(
model_name='buildrequest',
name='state',
field=models.IntegerField(default=0, choices=[(0, b'created'), (1, b'queued'), (2, b'in progress'), (3, b'completed'), (4, b'failed'), (5, b'deleted'), (6, b'cancelling'), (7, b'archive')]),
),
]


@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'BRLayer.dirpath'
db.add_column(u'bldcontrol_brlayer', 'dirpath',
self.gf('django.db.models.fields.CharField')(default='', max_length=254),
keep_default=False)
def backwards(self, orm):
# Deleting field 'BRLayer.dirpath'
db.delete_column(u'bldcontrol_brlayer', 'dirpath')
models = {
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
}
}
complete_apps = ['bldcontrol']


@@ -0,0 +1,104 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
# Note: Don't use "from appname.models import ModelName".
# Use orm.ModelName to refer to models in this application,
# and orm['appname.ModelName'] for models in other applications.
try:
orm.BuildEnvironment.objects.get(pk = 1)
except:
from django.utils import timezone
orm.BuildEnvironment.objects.create(pk = 1,
created = timezone.now(),
updated = timezone.now(),
betype = 0)
def backwards(self, orm):
"Write your backwards methods here."
models = {
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
}
}
complete_apps = ['bldcontrol']
symmetrical = True
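The data migration above seeds a default BuildEnvironment (pk=1, betype=0) the South way, via DataMigration.forwards(orm). Under the built-in migration framework that the rest of this compare moves to, the same seeding step would be a RunPython operation. A hedged sketch, not part of the patch, assuming Django 1.8+ for RunPython.noop; the dependency entry is hypothetical.

from __future__ import unicode_literals
from django.db import migrations
from django.utils import timezone


def create_default_buildenv(apps, schema_editor):
    # Use the historical model from the app registry rather than a
    # direct import of bldcontrol.models.
    BuildEnvironment = apps.get_model('bldcontrol', 'BuildEnvironment')
    if not BuildEnvironment.objects.filter(pk=1).exists():
        BuildEnvironment.objects.create(pk=1,
                                        created=timezone.now(),
                                        updated=timezone.now(),
                                        betype=0)


class Migration(migrations.Migration):

    dependencies = [
        ('bldcontrol', '0001_initial'),   # hypothetical ordering
    ]

    operations = [
        migrations.RunPython(create_default_buildenv,
                             migrations.RunPython.noop),
    ]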


@@ -0,0 +1,112 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'BRError'
db.create_table(u'bldcontrol_brerror', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('req', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['bldcontrol.BuildRequest'])),
('errtype', self.gf('django.db.models.fields.CharField')(max_length=100)),
('errmsg', self.gf('django.db.models.fields.TextField')()),
('traceback', self.gf('django.db.models.fields.TextField')()),
))
db.send_create_signal(u'bldcontrol', ['BRError'])
def backwards(self, orm):
# Deleting model 'BRError'
db.delete_table(u'bldcontrol_brerror')
models = {
u'bldcontrol.brerror': {
'Meta': {'object_name': 'BRError'},
'errmsg': ('django.db.models.fields.TextField', [], {}),
'errtype': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'traceback': ('django.db.models.fields.TextField', [], {})
},
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
}
}
complete_apps = ['bldcontrol']


@@ -0,0 +1,128 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'BRBitbake'
db.create_table(u'bldcontrol_brbitbake', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('req', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['bldcontrol.BuildRequest'], unique=True)),
('giturl', self.gf('django.db.models.fields.CharField')(max_length=254)),
('commit', self.gf('django.db.models.fields.CharField')(max_length=254)),
('dirpath', self.gf('django.db.models.fields.CharField')(max_length=254)),
))
db.send_create_signal(u'bldcontrol', ['BRBitbake'])
def backwards(self, orm):
# Deleting model 'BRBitbake'
db.delete_table(u'bldcontrol_brbitbake')
models = {
u'bldcontrol.brbitbake': {
'Meta': {'object_name': 'BRBitbake'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']", 'unique': 'True'})
},
u'bldcontrol.brerror': {
'Meta': {'object_name': 'BRError'},
'errmsg': ('django.db.models.fields.TextField', [], {}),
'errtype': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'traceback': ('django.db.models.fields.TextField', [], {})
},
u'bldcontrol.brlayer': {
'Meta': {'object_name': 'BRLayer'},
'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
},
u'bldcontrol.brtarget': {
'Meta': {'object_name': 'BRTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'bldcontrol.brvariable': {
'Meta': {'object_name': 'BRVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'bldcontrol.buildenvironment': {
'Meta': {'object_name': 'BuildEnvironment'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
'betype': ('django.db.models.fields.IntegerField', [], {}),
'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'bldcontrol.buildrequest': {
'Meta': {'object_name': 'BuildRequest'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']", 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'orm.bitbakeversion': {
'Meta': {'object_name': 'BitbakeVersion'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'giturl': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'})
},
u'orm.build': {
'Meta': {'object_name': 'Build'},
'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'errors_no': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']", 'null': 'True'}),
'started_on': ('django.db.models.fields.DateTimeField', [], {}),
'timespent': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'warnings_no': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'bitbake_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.BitbakeVersion']"}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
}
}
complete_apps = ['bldcontrol']

Some files were not shown because too many files have changed in this diff.